Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.
Ferrarini, L; Trioni, M
1996-01-01
This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a Petri net model previously developed. Such a model relies on a bottom-up, modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another through three kinds of connections: self-loops, inhibitor arcs and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, here formalized, represents an extension of the simple synchronization, allowing the merging of suitable subnets among elementary tasks. It is proved that, under suitable but not restrictive hypotheses, the generalized synchronization may be replaced by a simple one, thus remaining compatible with the entire theoretical framework developed.
Simplified aeroelastic modeling of horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Wendell, J. H.
1982-01-01
Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
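The gamma-mixed Poisson construction described above can be checked numerically. The sketch below (standard library only, with illustrative gamma shape and scale values, not parameters fitted to Mauna Loa or Etna) samples eruption counts from a Poisson whose rate is itself gamma-distributed and confirms the extra-Poisson variance that characterizes the negative binomial:

```python
import math
import random

random.seed(42)

def poisson_sample(lam):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Gamma prior on the eruptive rate lambda (shape a, scale b are illustrative)
a, b = 2.0, 1.5
n = 100_000
counts = [poisson_sample(random.gammavariate(a, b)) for _ in range(n)]

mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n

# Theory for the gamma-mixed Poisson (negative binomial):
#   E[N] = a*b = 3.0,  Var[N] = a*b + a*b**2 = 7.5  (overdispersion)
print(mean, var)  # ~3.0 and ~7.5
```

The variance exceeding the mean is exactly the "more variable than a simple Poisson" behavior the abstract cites as motivation.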
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
Calibrating the ECCO ocean general circulation model using Green's functions
NASA Technical Reports Server (NTRS)
Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.
2002-01-01
Green's functions provide a simple, yet effective, method to test and calibrate general circulation model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
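In the linear regime, the Green's function calibration idea reduces to least squares on perturbation-run responses: each column of the sensitivity matrix is the difference between a perturbed model run and the baseline run. A minimal sketch using a hypothetical two-parameter toy model (the model, its parameters, and the "observations" are all invented for illustration; this is not the ECCO GCM or its actual control variables):

```python
# Toy forward model with two adjustable parameters (names illustrative)
def model(p1, p2):
    return [p1 + 2.0 * p2, 3.0 * p1 - p2, 0.5 * p1 + p2]

base = model(0.0, 0.0)                                 # baseline run
g1 = [a - b for a, b in zip(model(1.0, 0.0), base)]    # Green's function for p1
g2 = [a - b for a, b in zip(model(0.0, 1.0), base)]    # Green's function for p2

data = model(0.7, -0.3)                                # synthetic "observations"
resid = [d - b for d, b in zip(data, base)]

# Normal equations for the 2-parameter least-squares problem  G @ eta = resid
a11 = sum(x * x for x in g1)
a12 = sum(x * y for x, y in zip(g1, g2))
a22 = sum(y * y for y in g2)
b1 = sum(x * r for x, r in zip(g1, resid))
b2 = sum(y * r for y, r in zip(g2, resid))
det = a11 * a22 - a12 * a12
eta1 = (a22 * b1 - a12 * b2) / det
eta2 = (a11 * b2 - a12 * b1) / det
print(eta1, eta2)  # recovers 0.7, -0.3 exactly for this linear toy model
```

Because the toy model is exactly linear, the estimated parameter corrections match the truth; for a real GCM the same machinery yields the best linear correction.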
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
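The simplest of the model classes compared here, the single first-order ("one-step") model, integrates dV/dt = A·exp(−E/RT)·(V∞ − V) along a constant-heating-rate temperature history. The sketch below uses illustrative Arrhenius parameters (not fitted CPD coefficients) and a heating rate inside the range studied:

```python
import math

# One-step devolatilization: dV/dt = A*exp(-E/(R*T))*(Vinf - V)
# A, E, Vinf are illustrative values, not fitted to any coal.
A, E, R = 2.0e5, 7.0e4, 8.314   # 1/s, J/mol, J/(mol K)
Vinf = 0.5                       # ultimate volatiles yield (mass fraction)
heating_rate = 1.0e4             # K/s, within the 5e3-1e6 K/s range studied
T0, T_final, dt = 300.0, 1600.0, 1.0e-5

V, T = 0.0, T0
V_at_800 = None
while T < T_final:
    # explicit Euler step of the first-order rate law
    V += dt * A * math.exp(-E / (R * T)) * (Vinf - V)
    T += heating_rate * dt
    if V_at_800 is None and T >= 800.0:
        V_at_800 = V

print(V_at_800, V)  # partial yield at 800 K, near-complete yield at 1600 K
```

Plotting V against T for several heating rates reproduces the kind of yield-versus-temperature comparison the paper makes against CPD predictions.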
Commentary: Are Three Waves of Data Sufficient for Assessing Mediation?
ERIC Educational Resources Information Center
Reichardt, Charles S.
2011-01-01
Maxwell, Cole, and Mitchell (2011) demonstrated that simple structural equation models, when used with cross-sectional data, generally produce biased estimates of mediated effects. I extend those results by showing how simple structural equation models can produce biased estimates of mediated effects when used even with longitudinal data. Even…
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
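For the simple point-count protocol, the hierarchical model described above is commonly written as an N-mixture: site abundance N_i ~ Poisson(λ), repeated counts y_it ~ Binomial(N_i, p), with the likelihood marginalized over the unobserved N_i. A self-contained sketch (simulated data, illustrative parameter values, and a deliberately coarse grid search rather than a proper optimizer):

```python
import math
import random

random.seed(7)

def pois_sample(lam):
    # Knuth's method for a Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Simulate R sites with T repeat visits: N_i ~ Poisson(lam), y_it ~ Binomial(N_i, p)
R, T, lam_true, p_true = 100, 3, 5.0, 0.5
Y = []
for _ in range(R):
    N = pois_sample(lam_true)
    Y.append(tuple(sum(random.random() < p_true for _ in range(N)) for _ in range(T)))

def loglik(lam, p, Y, Nmax=30):
    # marginal likelihood: sum the binomial products over the latent abundance N
    ll = 0.0
    for y in Y:
        site = 0.0
        for N in range(max(y), Nmax + 1):
            term = math.exp(-lam) * lam ** N / math.factorial(N)
            for yt in y:
                term *= math.comb(N, yt) * p ** yt * (1.0 - p) ** (N - yt)
            site += term
        ll += math.log(site)
    return ll

# Coarse grid search for the maximum-likelihood estimate
grid = [(l, q) for l in [3.0 + 0.5 * i for i in range(11)]
               for q in [0.20 + 0.05 * j for j in range(13)]]
lam_hat, p_hat = max(grid, key=lambda g: loglik(g[0], g[1], Y))
print(lam_hat, p_hat)
```

The product λp (the expected count per visit) is the best-identified quantity, which is why repeated visits are needed to separate abundance from detectability.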
Generalized Born Models of Macromolecular Solvation Effects
NASA Astrophysics Data System (ADS)
Bashford, Donald; Case, David A.
2000-10-01
It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
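The pairwise analytical form mentioned above is usually written with the Still et al. interpolation f_GB(r) = sqrt(r² + R_i R_j exp(−r²/4R_iR_j)), which reduces to the Born self-energy at r = 0. A minimal sketch, assuming fixed, hypothetical effective Born radii (real GB implementations compute the radii from the molecular geometry):

```python
import math

K = 332.06            # Coulomb constant in kcal*mol^-1*Angstrom*e^-2
eps_in, eps_w = 1.0, 78.5

def f_gb(r, Ri, Rj):
    # Still et al. interpolation between Born (r=0) and Coulomb (large r) limits
    return math.sqrt(r * r + Ri * Rj * math.exp(-r * r / (4.0 * Ri * Rj)))

def gb_energy(charges, radii, dist):
    # pairwise generalized Born polarization energy (self terms included)
    pref = -0.5 * K * (1.0 / eps_in - 1.0 / eps_w)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            e += pref * charges[i] * charges[j] / f_gb(dist[i][j], radii[i], radii[j])
    return e

# A single +1 charge with effective radius 2 A reduces to the Born formula
born = gb_energy([1.0], [2.0], [[0.0]])
print(born)  # approximately -82.0 kcal/mol
```

The pairwise sum is what lets GB slot into a molecular mechanics energy loop at essentially the cost of another nonbonded term.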
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
A simple dynamic engine model for use in a real-time aircraft simulation with thrust vectoring
NASA Technical Reports Server (NTRS)
Johnson, Steven A.
1990-01-01
A simple dynamic engine model was developed at the NASA Ames Research Center, Dryden Flight Research Facility, for use in thrust vectoring control law development and real-time aircraft simulation. The simple dynamic engine model of the F404-GE-400 engine (General Electric, Lynn, Massachusetts) operates within the aircraft simulator. It was developed using tabular data generated from a complete nonlinear dynamic engine model supplied by the manufacturer. Engine dynamics were simulated using a throttle rate limiter and low-pass filter. Included is a description of a method to account for axial thrust loss resulting from thrust vectoring. In addition, the development of the simple dynamic engine model and its incorporation into the F-18 high alpha research vehicle (HARV) thrust vectoring simulation are described. The simple dynamic engine model was evaluated at Mach 0.2, 35,000 ft altitude and at Mach 0.7, 35,000 ft altitude. The simple dynamic engine model is within 3 percent of the steady state response, and within 25 percent of the transient response of the complete nonlinear dynamic engine model.
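The dynamic element named above, a throttle rate limiter followed by a first-order low-pass filter, is easy to sketch in discrete time. The rate limit, filter time constant, and step size below are illustrative values, not F404 data:

```python
def engine_response(commands, dt=0.02, rate_limit=20.0, tau=0.5):
    """Rate limiter followed by a first-order low-pass filter (illustrative)."""
    limited, filtered = [], []
    x = y = commands[0]
    for u in commands:
        # rate limiter: slew toward the command by at most rate_limit units/s
        step = max(-rate_limit * dt, min(rate_limit * dt, u - x))
        x += step
        # first-order lag with time constant tau
        y += (dt / tau) * (x - y)
        limited.append(x)
        filtered.append(y)
    return limited, filtered

# Step command from idle (0) to full throttle (100)
cmds = [0.0] * 10 + [100.0] * 490
lim, out = engine_response(cmds)
print(out[-1])  # settles near 100 after the ramp plus a few time constants
```

The rate limiter bounds the slew (here 0.4 units per 20 ms frame) and the filter rounds the corners, which together approximate the slower spool dynamics of the full nonlinear engine model.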
Landau-Zener transitions and Dykhne formula in a simple continuum model
NASA Astrophysics Data System (ADS)
Dunham, Yujin; Garmon, Savannah
The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from its surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. We also observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Dorman, J. L.
1987-01-01
The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, for they are based on decision rules. Our results demonstrate that very simple models may perform well on cancer molecular prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
An Inexpensive Robotics Laboratory.
ERIC Educational Resources Information Center
Inigo, R. M.; Angulo, J. M.
1985-01-01
Describes the design and implementation of a simple robot manipulator. The manipulator has three degrees of freedom and is controlled by a general purpose microcomputer. The basis for the manipulator (which costs under $100) is a simple working model of a crane. (Author/JN)
A simple reaction-rate model for turbulent diffusion flames
NASA Technical Reports Server (NTRS)
Bangert, L. H.
1975-01-01
A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with that of bounds based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
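The MOVER idea combines separate confidence limits for two parameters into a closed-form interval for their sum. The sketch below implements the textbook sum form with invented numbers; it is not the paper's exposure-specific formulas, which target the overall mean and upper percentiles of the one-way random effects model:

```python
import math

def mover_sum(est1, l1, u1, est2, l2, u2):
    """MOVER interval for theta1 + theta2 from individual two-sided CIs."""
    point = est1 + est2
    # recover variance estimates from the distances between estimates and limits
    lower = point - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = point + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return lower, upper

# Illustrative inputs: e.g. a mean of log-exposure and a variance-component term,
# each with its own 95% interval (numbers are hypothetical)
lo, hi = mover_sum(1.2, 0.9, 1.5, 0.4, 0.25, 0.65)
print(lo, hi)
```

Recovering the variance estimates from the interval widths is what lets the method use different (possibly asymmetric) limits for each component.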
A radio-frequency sheath model for complex waveforms
NASA Astrophysics Data System (ADS)
Turner, M. M.; Chabert, P.
2014-04-01
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
Including resonances in the multiperipheral model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinsky, S.S.; Snider, D.R.; Thomas, G.H.
1973-10-01
A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and still is simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one channel, two channel, three channel and N channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively.
SimpleBox 4.0: Improving the model while keeping it simple….
Hollander, Anne; Schoorl, Marian; van de Meent, Dik
2016-04-01
Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. Undesirable model complexity caused by vegetation compartments and a local scale was removed to improve the simplicity and user friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
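At its core, a multimedia fate model of this kind is a set of linear mass balances between compartments, solved for steady state. The two-box sketch below (air and water only, with invented rate constants and emissions) is far simpler than SimpleBox itself but shows the structure:

```python
# Minimal two-box multimedia fate sketch: steady state of dm/dt = E + T m.
# Rate constants (1/h) and emissions (kg/h) are illustrative, not SimpleBox values.
k_aw, k_wa = 0.02, 0.005   # transfer: air -> water, water -> air
k_da, k_dw = 0.10, 0.01    # degradation/removal in air, in water
E_a, E_w = 1.0, 0.2        # emissions into air and water

# Steady-state mass balance:
#   (k_da + k_aw) m_a - k_wa m_w = E_a
#   -k_aw m_a + (k_dw + k_wa) m_w = E_w
a11, a12 = k_da + k_aw, -k_wa
a21, a22 = -k_aw, k_dw + k_wa
det = a11 * a22 - a12 * a21
m_a = (E_a * a22 - a12 * E_w) / det
m_w = (a11 * E_w - a21 * E_a) / det
print(m_a, m_w)  # steady-state masses in air and water
```

Adding compartments (lakes, layered oceans, soils) just enlarges this linear system, which is why structural changes such as the layered ocean dominate the differences between model versions.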
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with a hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
Osman, Magda; Wiegmann, Alex
2017-03-01
In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models that exist in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that by using a simple value-based decision model we can capture a range of core moral behaviors. Crucially, we argue that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.
A simple biosphere model (SiB) for use within general circulation models
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Mintz, Y.; Sud, Y. C.; Dalcher, A.
1986-01-01
A simple realistic biosphere model for calculating the transfer of energy, mass and momentum between the atmosphere and the vegetated surface of the earth has been developed for use in atmospheric general circulation models. The vegetation in each terrestrial model grid is represented by an upper level, representing the perennial canopy of trees and shrubs, and a lower level, representing the annual cover of grasses and other herbaceous species. The vegetation morphology and the physical and physiological properties of the vegetation layers determine such properties as: the reflection, transmission, absorption and emission of direct and diffuse radiation; the infiltration, drainage, and storage of the residual rainfall in the soil; and the control over stomatal functioning. The model, with prescribed vegetation parameters and interactive soil moisture, can be used for prediction of the atmospheric circulation and precipitation fields for short periods of up to a few weeks.
Modeling Population and Ecosystem Response to Sublethal Toxicant Exposure
2001-09-30
mutualism utilized modified Lotka-Volterra (L-V) competition equations in which the sign of the interspecific interaction term was changed from... within complex communities and ecosystems. Prior to the current award, the PIs formulated and tested general dynamic energy budget models... Nisbet, 1998; chapter 7) make a convincing case that ecosystems do truly have dynamics that can be described by relatively simple, general models
Monotone Properties of a General Diagnostic Model. Research Report. ETS RR-07-25
ERIC Educational Resources Information Center
Xu, Xueli
2007-01-01
Monotonicity properties of a general diagnostic model (GDM) are considered in this paper. Simple data summaries are identified to inform about the ordered categories of latent traits. The findings are very much in accordance with the statements made about the GPCM (Hemker, Sijtsma, Molenaar, & Junker, 1996, 1997). On the one hand, by fitting a…
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…
Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A
2017-09-15
In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. The use of such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and the ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe efficiency index, which was generally greater than 0.70. Finally, it was shown how a physically based model and a simple conceptual model can be used jointly to extend the applicability of the conceptual model to a wider set of conditions than the available experimental data and to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
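A conceptual green roof model of the kind described is typically a single-reservoir water balance: the substrate retains rain up to a storage capacity, evapotranspiration empties the store between events, and the excess becomes runoff. A minimal sketch with invented capacity and evapotranspiration values (not the paper's calibrated parameters):

```python
def simulate_runoff(rain_mm, capacity_mm=30.0, et_mm_per_step=0.1):
    """Reservoir-type conceptual model: overflow above capacity becomes runoff."""
    storage, runoff = 0.0, []
    for r in rain_mm:
        storage = max(0.0, storage - et_mm_per_step)  # evapotranspiration loss
        storage += r                                   # rainfall input
        q = max(0.0, storage - capacity_mm)            # overflow -> runoff
        storage -= q
        runoff.append(q)
    return runoff

# A 60 mm storm after a dry period: roughly the first 30 mm are retained
q = simulate_runoff([0.0] * 24 + [5.0] * 12)
print(sum(q))  # total runoff, well below the 60 mm of rainfall
```

The retention behavior (no runoff until storage fills) is the hydrograph feature that both the conceptual and the HYDRUS-1D models must reproduce.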
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
A generalized model via random walks for information filtering
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model based on the dynamics of random walks on bipartite networks. By taking degree information into account, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of differing popularity to achieve promising recommendation precision.
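The random-walk dynamics on a bipartite user-item network can be sketched as a two-step mass-diffusion score (a ProbS-style illustration of the general idea, not the authors' exact generalized model; the toy matrix is hypothetical):

```python
import numpy as np

def probs_scores(A, user):
    """Two-step random-walk (mass-diffusion) scores on a bipartite
    user-item network. A is a binary user-item adjacency matrix;
    returns recommendation scores over items for the given user."""
    A = A.astype(float)
    k_item = np.maximum(A.sum(axis=0), 1)  # item degrees (guard /0)
    k_user = np.maximum(A.sum(axis=1), 1)  # user degrees (guard /0)
    f0 = A[user]                     # unit resource on collected items
    u_res = A @ (f0 / k_item)        # step 1: items -> users
    f = A.T @ (u_res / k_user)       # step 2: users -> items
    f[A[user] > 0] = 0.0             # don't re-recommend collected items
    return f

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])         # toy user-item matrix
scores = probs_scores(A, user=0)
```

Degree-aware variants (the "hybrid degree information" in the abstract) reweight the two spreading steps by powers of the item degrees, interpolating between this mass-diffusion walk and heat-conduction-like dynamics.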
A simple model for indentation creep
NASA Astrophysics Data System (ADS)
Ginder, Ryan S.; Nix, William D.; Pharr, George M.
2018-03-01
A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.
Towards a General Model of Temporal Discounting
ERIC Educational Resources Information Center
van den Bos, Wouter; McClure, Samuel M.
2013-01-01
Psychological models of temporal discounting have now successfully displaced classical economic theory due to the simple fact that many common behavior patterns, such as impulsivity, were unexplainable with classic models. However, the now dominant hyperbolic model of discounting is itself becoming increasingly strained. Numerous factors have…
Stratospheric General Circulation with Chemistry Model (SGCCM)
NASA Technical Reports Server (NTRS)
Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.
1990-01-01
In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).
Poverty trap formed by the ecology of infectious diseases
Bonds, Matthew H.; Keenan, Donald C.; Rohani, Pejman; Sachs, Jeffrey D.
2010-01-01
While most of the world has enjoyed exponential economic growth, more than one-sixth of the world's population is today roughly as poor as their ancestors were many generations ago. Widely accepted general explanations for the persistence of such poverty have been elusive and are needed by the international development community. Building on a well-established model of human infectious diseases, we show how formally integrating simple economic and disease ecology models can naturally give rise to poverty traps, where initial economic and epidemiological conditions determine the long-term trajectory of the health and economic development of a society. This poverty trap may therefore be broken by improving health conditions of the population. More generally, we demonstrate that simple human ecological models can help explain broad patterns of modern economic organization. PMID:20007179
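The feedback described above can be illustrated with a toy coupled SIS-economy iteration. The threshold-shaped recovery rate and every parameter value here are hypothetical, chosen only to show how initial conditions alone can determine the long-run outcome, not taken from the authors' calibrated model:

```python
def simulate(I, M, steps=200, beta=0.5, w=0.2, delta=0.2):
    """Toy poverty-trap dynamics: I is disease prevalence (SIS),
    M is income. Recovery is faster above an income threshold,
    and only the healthy fraction earns income."""
    for _ in range(steps):
        gamma = 0.9 if M > 0.5 else 0.1    # income-dependent recovery
        I = I + beta * I * (1 - I) - gamma * I
        M = (1 - delta) * M + w * (1 - I)  # income from healthy workers
    return I, M

poor = simulate(I=0.8, M=0.1)   # high disease burden, low income
rich = simulate(I=0.05, M=0.9)  # low disease burden, high income
```

With these illustrative parameters the first trajectory settles at high prevalence and low income (the trap), while the second converges to a disease-free, high-income state, mirroring the bistability the abstract describes.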
Why do things fall? How to explain why gravity is not a force
NASA Astrophysics Data System (ADS)
Stannard, Warren B.
2018-03-01
In most high school physics classes, gravity is described as an attractive force between two masses as formulated by Newton over 300 years ago. Einstein’s general theory of relativity implies that gravitational effects are instead the result of a ‘curvature’ of space-time. However, explaining why things fall without resorting to Newton’s gravitational force can be difficult. This paper introduces some simple graphical and visual analogies and models which are suitable for the introduction of Einstein’s theory of general relativity at a high school level. These models provide an alternative to Newton’s gravitational force and help answer the simple question: why do things fall?
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.
ECOLOGICAL THEORY. A general consumer-resource population model.
Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M
2015-08-21
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
On the Bayesian Nonparametric Generalization of IRT-Type Models
ERIC Educational Resources Information Center
San Martin, Ernesto; Jara, Alejandro; Rolin, Jean-Marie; Mouchart, Michel
2011-01-01
We study the identification and consistency of Bayesian semiparametric IRT-type models, where the uncertainty on the abilities' distribution is modeled using a prior distribution on the space of probability measures. We show that for the semiparametric Rasch Poisson counts model, simple restrictions ensure the identification of a general…
A VARIABLE REACTIVITY MODEL FOR ION BINDING TO ENVIRONMENTAL SORBENTS
The conceptual and mathematical basis for a new general-composite modeling approach for ion binding to environmental sorbents is presented. The work extends the Simple Metal Sorption (SiMS) model previously presented for metal and proton binding to humic substances. A surface com...
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
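Although the article works through SPSS's PLUM procedure, the underlying signal detection quantities are standard. A minimal equal-variance computation of d' in Python is sketched below; the unequal-variance model the article fits generalizes this by giving the signal distribution its own standard deviation (this sketch is not the article's SPSS procedure):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection: d' = z(H) - z(F),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical hit and false-alarm rates, for illustration only
sensitivity = dprime(0.84, 0.16)
```

With H = 0.84 and F = 0.16 the two z-scores are symmetric about zero, so d' comes out near 2, a conventionally "good" sensitivity.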
Generalized Tavis-Cummings models and quantum networks
NASA Astrophysics Data System (ADS)
Gorokhov, A. V.
2018-04-01
The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in the cavities and interacting with an external laser field and cavity photons. The method of dynamical group of the Hamiltonian and technique of corresponding coherent states were used for investigation of the temporal dynamics of the two nodes model.
A dynamical systems approach to actin-based motility in Listeria monocytogenes
NASA Astrophysics Data System (ADS)
Hotton, S.
2010-11-01
A simple kinematic model for the trajectories of Listeria monocytogenes is generalized to a dynamical system rich enough to exhibit the resonant Hopf bifurcation structure of excitable media and simple enough to be studied geometrically. It is shown how L. monocytogenes trajectories and meandering spiral waves are organized by the same type of attracting set.
General Relativity in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2008-01-01
We describe a theory of gravity in (1 + 1) dimensions that can be thought of as a toy model of general relativity. The theory should be a useful pedagogical tool, because it is mathematically much simpler than general relativity but shares much of the same conceptual structure; in particular, it gives a simple illustration of how gravity arises…
Maximum efficiency of state-space models of nanoscale energy conversion devices
NASA Astrophysics Data System (ADS)
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Foxes and Rabbits - and a Spreadsheet.
ERIC Educational Resources Information Center
Carson, S. R.
1996-01-01
Presents a numerical simulation of a simple food chain together with a set of mathematical rules generalizing the model to a food web of any complexity. Discusses some of the model's interesting features and its use by students. (Author/JRH)
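The spreadsheet simulation of a simple food chain can be sketched as an Euler iteration of the classic Lotka-Volterra predator-prey equations; the coefficients and initial populations below are illustrative, not taken from the article:

```python
def food_chain(rabbits=40.0, foxes=9.0, dt=0.01, steps=5000):
    """Euler iteration of dR/dt = (a - b*F)*R, dF/dt = (c*R - d)*F,
    the same row-by-row update a spreadsheet model would perform."""
    a, b = 1.0, 0.05    # rabbit growth rate, predation rate
    c, d = 0.005, 0.5   # fox conversion efficiency, fox death rate
    history = []
    for _ in range(steps):
        dr = (a - b * foxes) * rabbits
        df = (c * rabbits - d) * foxes
        rabbits += dr * dt
        foxes += df * dt
        history.append((rabbits, foxes))
    return history

history = food_chain()
```

Each row of `history` corresponds to one spreadsheet row; plotting the two columns shows the familiar out-of-phase predator-prey oscillations, and adding further species columns generalizes the sheet to a food web.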
Operator priming and generalization of practice in adults' simple arithmetic.
Chen, Yalin; Campbell, Jamie I D
2016-04-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies that the 1 + N problems were solved by fact retrieval but were nonetheless facilitated by an operator preview. Thus, the operator-preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training, engineering and computer science students, would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again showed generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.
A Generalized Information Theoretical Model for Quantum Secret Sharing
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming
2016-11-01
An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. On the basis of this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by generalized quantum secret sharing schemes than by the previous one. In addition, we analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and consider the security of the schemes through simple examples.
Interpersonal distance modeling during fighting activities.
Dietrich, Gilles; Bredin, Jonathan; Kerlirzin, Yves
2010-10-01
The aim of this article is to elaborate a general framework for modeling dual opposition activities or, more generally, dual interaction. The main hypothesis is that opposition behavior can be measured directly from a global variable, and that the relative distance between the two subjects can be this parameter. Moreover, this distance should be considered a multidimensional parameter, depending not only on the dynamics of the subjects but also on their "internal" parameters, such as sociological and/or emotional states. A standard and simple mechanical formalization is used to model this multifactorial distance. To illustrate the general modeling methodology, the model is compared with actual data from an opposition activity, Japanese fencing (kendo). The model captures not only coupled coordination but, more generally, interaction in two-subject activities.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, i.e., a q-exponential discount model based on Tsallis' statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fitness of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
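The discount functions compared in the study have simple closed forms. A sketch of them follows, with the q-exponential written in the Tsallis form, which recovers the exponential model as q approaches 1 and the simple hyperbola at q = 0 (parameter values in any call are illustrative only):

```python
import math

def exponential(A, k, D):
    """Classical exponential discounting: V = A * exp(-k*D)."""
    return A * math.exp(-k * D)

def simple_hyperbolic(A, k, D):
    """Mazur-style simple hyperbola: V = A / (1 + k*D)."""
    return A / (1 + k * D)

def q_exponential(A, k, D, q):
    """Tsallis q-exponential discounting:
    V = A / (1 + (1-q)*k*D)**(1/(1-q)); q -> 1 gives the exponential
    model and q = 0 gives the simple hyperbola."""
    if abs(q - 1) < 1e-9:
        return exponential(A, k, D)
    return A / (1 + (1 - q) * k * D) ** (1 / (1 - q))

def stevens_power(A, k, D, s):
    """Exponential discounting with Stevens' power-law time
    perception: V = A * exp(-k * D**s)."""
    return A * math.exp(-k * D ** s)
```

Fitting each function to the measured indifference points and comparing AICc values, as the study does, then ranks the models by goodness-of-fit.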
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
Division of Attention Relative to Response Between Attended and Unattended Stimuli.
ERIC Educational Resources Information Center
Kantowitz, Barry H.
Research was conducted to investigate two general classes of human attention models, early-selection models which claim that attentional selecting precedes memory and meaning extraction mechanisms, and late-selection models which posit the reverse. This research involved two components: (1) the development of simple, efficient, computer-oriented…
Receptor Surface Models in the Classroom: Introducing Molecular Modeling to Students in a 3-D World
ERIC Educational Resources Information Center
Geldenhuys, Werner J.; Hayes, Michael; Van der Schyf, Cornelis J.; Allen, David D.; Malan, Sarel F.
2007-01-01
A simple, novel and generally applicable method to demonstrate structure-activity associations of a group of biologically interesting compounds in relation to receptor binding is described. This method is useful for undergraduates and graduate students in medicinal chemistry and computer modeling programs.
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model
NASA Astrophysics Data System (ADS)
Sozzo, Sandro; Garola, Claudio
2010-12-01
The extended semantic realism ( ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skrypnyk, T.
2009-10-15
We analyze symmetries of the integrable generalizations of the Jaynes-Cummings and Dicke models associated with simple Lie algebras g and their reductive subalgebras g_K [T. Skrypnyk, 'Generalized n-level Jaynes-Cummings and Dicke models, classical rational r-matrices and nested Bethe ansatz', J. Phys. A: Math. Theor. 41, 475202 (2008)]. We show that their symmetry algebras contain commutative subalgebras isomorphic to the Cartan subalgebras of g, which can be added to the commutative algebras of quantum integrals generated with the help of the quantum Lax operators. We diagonalize the additional commuting integrals and construct with their help the most general integrable quantum Hamiltonian of the generalized n-level many-mode Jaynes-Cummings and Dicke-type models using the nested algebraic Bethe ansatz.
NASA Astrophysics Data System (ADS)
Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.
2003-03-01
A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.
Receptors as a master key for synchronization of rhythms
NASA Astrophysics Data System (ADS)
Nagano, Seido
2004-03-01
A simple but general scheme to achieve synchronization of rhythms is derived. The scheme has been inductively generalized from a modelling study of the cellular slime mold. It is shown that biological receptors work as apparatuses that convert an external stimulus into a form of nonlinear interaction within individual oscillators; that is, the mathematical model receptor works as a nonlinear coupling apparatus between nonlinear oscillators. Synchronization is thus achieved as a result of competition between two kinds of nonlinearities, and even a small external stimulation via model receptors can change the characteristics of individual oscillators significantly. The derived scheme is mathematically very simple, but it is very powerful, as demonstrated numerically. The biological receptor scheme should significantly aid the understanding of synchronization phenomena in biology, since groups of limit-cycle oscillators and receptors are ubiquitous in biological systems. Reference: S. Nagano, Phys. Rev. E 67, 056215 (2003)
A geostationary Earth orbit satellite model using Easy Java Simulation
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Hwee Goh, Giam
2013-01-01
We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic 3D view and associated learning in the real world; (2) comparative visualization of permanent geostationary satellites; (3) examples of non-geostationary orbits of different rotation senses, periods and planes; and (4) an incorrect physics model for conceptual discourse. General feedback from the students has been relatively positive, and we hope teachers will find the computer model useful in their own classes.
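The geostationary orbit that the EJS model visualizes follows from Kepler's third law for a circular orbit, r³ = GM·T²/(4π²). A minimal computation of the orbital radius and the constant angular velocity such a model uses (standard physical constants, not values taken from the EJS source):

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1        # one sidereal day, s

# Constant angular velocity assumed by the simplified physics model
omega = 2 * math.pi / T_SIDEREAL                              # rad/s
# Kepler's third law solved for the circular orbital radius
r = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)  # m
```

The radius comes out near 42,164 km from Earth's center (about 35,786 km altitude), which is why a satellite advanced at this fixed angular velocity appears stationary over one point on the equator.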
Center for Parallel Optimization.
1996-03-19
A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.
Effective Biot theory and its generalization to poroviscoelastic models
NASA Astrophysics Data System (ADS)
Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark
2018-02-01
A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.
NASA Technical Reports Server (NTRS)
Matthews, E.
1984-01-01
A simple method was developed for improved prescription of seasonal surface characteristics and parameterization of land-surface processes in climate models. This method, developed for the Goddard Institute for Space Studies General Circulation Model II (GISS GCM II), maintains the spatial variability of fine-resolution land-cover data while restricting to 8 the number of vegetation types handled in the model. This was achieved by: redefining the large number of vegetation classes in the 1 deg x 1 deg resolution Matthews (1983) vegetation data base as percentages of 8 simple types; deriving roughness length, field capacity, masking depth and seasonal, spectral reflectivity for the 8 types; and aggregating these surface features from the 1 deg x 1 deg resolution to coarser model resolutions, e.g., 8 deg latitude x 10 deg longitude or 4 deg latitude x 5 deg longitude.
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E. F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
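A hypothetical sketch of the min-heuristic the abstract describes: the conclusion takes the quantifier type of the least informative premise. The informativeness ordering A > T > D > I > E > O (All, Most, Few, Some, None, Some...not) is the one reported for PHM; treat this as an illustration of the heuristic, not an implementation of the full model.

```python
# Informativeness ordering assumed from the PHM literature, most -> least:
# A (All), T (Most), D (Few), I (Some), E (None), O (Some...not).
INFORMATIVENESS = ["A", "T", "D", "I", "E", "O"]

def min_heuristic(premise1: str, premise2: str) -> str:
    """min-heuristic: the conclusion type is the type of the least
    informative of the two premises."""
    return max(premise1, premise2, key=INFORMATIVENESS.index)

# Example: "All X are Y" (A) + "Some Y are Z" (I) -> conclusion of type I.
predicted = min_heuristic("A", "I")
```

Note how the generalized quantifiers Most (T) and Few (D) slot into the same ordering, which is how PHM extends beyond classical logic.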
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
Stratospheric chemistry and transport
NASA Technical Reports Server (NTRS)
Prather, Michael; Garcia, Maria M.
1990-01-01
A Chemical Tracer Model (CTM) that can use wind field data generated by the General Circulation Model (GCM) is developed to implement chemistry in the three dimensional GCM of the middle atmosphere. Initially, chemical tracers with simple first order losses such as N2O are used. Successive models are to incorporate more complex ozone chemistry.
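A minimal sketch of the kind of first-order chemical loss the abstract mentions for tracers such as N2O: dC/dt = -k*C, here integrated with a simple Euler step. The rate constant, time step, and integration time are illustrative, not values from the CTM.

```python
# First-order tracer loss dC/dt = -k*C, integrated with a forward Euler step
# and compared against the analytic solution C(t) = C0 * exp(-k*t).
import math

def integrate_first_order_loss(c0, k, dt, steps):
    """Euler integration of dC/dt = -k*C; returns the final concentration."""
    c = c0
    for _ in range(steps):
        c += -k * c * dt
    return c

c_numeric = integrate_first_order_loss(c0=1.0, k=0.5, dt=1e-3, steps=2000)
c_exact = math.exp(-0.5 * 2.0)  # analytic solution at t = 2.0
```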
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when dust is present fails, and we discuss the reasons for this puzzling phenomenon.
Simple neck pain questions used in surveys, evaluated in relation to health outcomes: a cohort study
2012-01-01
Background The high prevalence of pain reported in many epidemiological studies, and the degree to which this prevalence reflects severe pain is under discussion in the literature. The aim of the present study was to evaluate use of the simple neck pain questions commonly included in large epidemiological survey studies with respect to aspects of health. We investigated if and how an increase in number of days with pain is associated with reduction in health outcomes. Methods A cohort of university students (baseline age 19–25 years) were recruited in 2002 and followed annually for 4 years. The baseline response rate was 69% which resulted in 1200 respondents (627 women, 573 men). Participants were asked about present and past pain and perceptions of their general health, sleep disturbance, stress and energy levels, and general performance. The data were analyzed using a mixed model for repeated measurements and a random intercept logistic model. Results When reporting present pain, participants also reported lower prevalence of very good health, higher stress and sleep disturbance scores and lower energy score. Among those with current neck pain, additional questions characterizing the pain such as duration (categorized), additional pain sites and decreased general performance were associated with lower probability of very good health and higher amounts of sleep disturbance. Knowing about the presence or not of pain explains more of the variation in health between individuals, than within individuals. Conclusion This study of young university students has demonstrated that simple neck pain survey questions capture features of pain that affect aspects of health such as perceived general health, sleep disturbance, mood in terms of stress and energy. Simple pain questions are more useful for group descriptions than for describing or following pain in an individual. PMID:23102060
Grimby-Ekman, Anna; Hagberg, Mats
2012-10-26
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to their likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
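An illustrative sketch, not taken from the paper, of what "sloppy" means in practice: for a model y(t) = exp(-a*t) + exp(-b*t) with nearly degenerate decay rates, the Fisher information matrix J^T J has eigenvalues spanning orders of magnitude, so one parameter combination is far better constrained than the other. The model and parameter values are assumptions of this sketch.

```python
# Fisher information matrix for a two-exponential model with similar decay
# rates; its eigenvalue spectrum is strongly hierarchical ("sloppy").
import numpy as np

t = np.linspace(0.0, 5.0, 50)
a, b = 1.0, 1.2  # illustrative, nearly degenerate decay rates

# Jacobian of y(t) = exp(-a*t) + exp(-b*t) with respect to (a, b)
J = np.column_stack([-t * np.exp(-a * t), -t * np.exp(-b * t)])
fim = J.T @ J  # Fisher information matrix for unit-noise least squares
eigvals = np.sort(np.linalg.eigvalsh(fim))
ratio = eigvals[-1] / eigvals[0]  # large ratio = sloppy spectrum
```

The stiff eigendirection (roughly a+b here) is what the data pin down; the soft direction (roughly a-b) is nearly invisible to the predictions.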
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
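A sketch of the beta-binomial idea behind BBSeq's first approach: counts are binomial given a success probability that itself varies as a Beta(a, b) distribution, which inflates the variance relative to a plain binomial at the same mean ("overdispersion"). The parameter values are illustrative, not from the paper, and this uses SciPy rather than the paper's R package.

```python
# Beta-binomial vs binomial at matched mean: the beta-binomial has extra
# variance, which is the overdispersion that motivates its use for counts.
from scipy.stats import betabinom, binom

n, a, b = 20, 2.0, 6.0
p = a / (a + b)  # matching mean success probability

bb_var = betabinom.var(n, a, b)
bin_var = binom.var(n, p)
# bb_var exceeds bin_var even though both distributions have mean n*p.
```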
Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight
NASA Astrophysics Data System (ADS)
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
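An idealized blackbody sketch of the single-shell greenhouse balance that the Kittel-Kroemer treatment (and this paper's generalization) builds on: in steady state the shell re-radiates half its absorbed power inward, so the enclosed body runs hotter than its bare equilibrium by a factor of 2^(1/4). Real glass is not a blackbody, and the flux value below is illustrative.

```python
# Steady-state temperatures of a blackbody with and without one thin
# enclosing blackbody shell; the shell raises T by a factor of 2**0.25.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bare_temperature(absorbed_flux):
    """Equilibrium temperature of a blackbody absorbing absorbed_flux (W/m^2)."""
    return (absorbed_flux / SIGMA) ** 0.25

def shelled_temperature(absorbed_flux):
    """Same body enclosed by one thin blackbody shell: T -> 2^(1/4) * T."""
    return (2.0 * absorbed_flux / SIGMA) ** 0.25

t_bare = bare_temperature(340.0)
t_shell = shelled_temperature(340.0)
```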
Forecasting paratransit services demand : review and recommendations.
DOT National Transportation Integrated Search
2013-06-01
Travel demand forecasting tools for Florida's paratransit services are outdated, utilizing old national trip generation rate generalities and simple linear regression models. In its guidance for the development of mandated Transportation Disadv...
NASA Astrophysics Data System (ADS)
Ingebrigtsen, Trond S.; Schrøder, Thomas B.; Dyre, Jeppe C.
2012-01-01
This paper is an attempt to identify the real essence of simplicity of liquids in John Locke’s understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r-n pair potentials with n=18,6,4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnstrom binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnstrom ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general. 
Simple liquids, as defined here, may be characterized in three quite different ways: (1) chemically by the fact that the liquid’s properties are fully determined by interactions from the molecules within the FCS, (2) physically by the fact that there are isomorphs in the phase diagram, i.e., curves along which several properties like excess entropy, structure, and dynamics, are invariant in reduced units, and (3) mathematically by the fact that throughout the phase diagram the reduced-coordinate constant-potential-energy hypersurfaces define a one-parameter family of compact Riemannian manifolds. No proof is given that the chemical characterization follows from the strong correlation property, but we show that this FCS characterization is consistent with the existence of isomorphs in strongly correlating liquids’ phase diagram. Finally, we note that the FCS characterization of simple liquids calls into question the physical basis of standard perturbation theory, according to which the repulsive and attractive forces play fundamentally different roles for the physics of liquids.
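A sketch of the strong-correlation criterion used above to define simple liquids: the correlation coefficient R between virial (W) and potential-energy (U) equilibrium fluctuations, with R of roughly 0.9 or above usually taken as "strongly correlating". The data here are synthetic (W proportional to U plus weak noise), purely to illustrate the formula; they are not output of an actual NVT simulation.

```python
# Virial/potential-energy correlation coefficient
#   R = <dW dU> / sqrt(<dW^2> <dU^2>)
# computed on synthetic strongly-correlated fluctuation data.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(0.0, 1.0, 10_000)             # potential-energy fluctuations
w = 6.0 * u + rng.normal(0.0, 0.5, 10_000)   # virial fluctuations, slope ~ 6

du, dw = u - u.mean(), w - w.mean()
R = (dw * du).mean() / np.sqrt((dw ** 2).mean() * (du ** 2).mean())
```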
The Monash University Interactive Simple Climate Model
NASA Astrophysics Data System (ADS)
Dommenget, D.
2013-12-01
The Monash university interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed science journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, and it offers a number of tutorials on the interactions of physical processes in the climate system along with some puzzles to solve. By switching physical processes off and on you can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with this tool are.
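A minimal zero-dimensional energy-balance sketch in the spirit of, but far simpler than, the GREB model behind the web interface: absorbed solar radiation balances outgoing blackbody emission. The parameter values are standard textbook numbers, not values taken from GREB.

```python
# Zero-dimensional energy balance: S*(1 - albedo)/4 = sigma*T^4,
# solved for the planetary emission temperature T.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_constant=1361.0, albedo=0.3):
    """Emission temperature of a planet with no greenhouse effect."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

t_eq = equilibrium_temperature()  # roughly 255 K, well below observed ~288 K
```

The gap between this value and the observed surface temperature is what the switchable greenhouse processes in the web interface let students explore.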
Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)
2008-03-01
4. REPAST (Java, Python, C#, Open Source) 5. MASON: Multi-Agent Modeling Language (Swarm Extension) ... (Java, Python, C#, Open Source) Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations in the ... Repast makes it easy for inexperienced users to build models by including a built-in simple model and providing interfaces through which menus and Python ...
A general method for radio spectrum efficiency defining
NASA Astrophysics Data System (ADS)
Ramadanovic, Ljubomir M.
1986-08-01
A general method for defining radio spectrum efficiency is proposed. Although simple, it can be applied to various radio services. The concept of spectral elements, as information carriers, is introduced to enable the organization of larger spectral spaces (radio network models) characteristic of a particular radio network. The method is applied to some radio network models, concerning cellular radio telephone systems and digital radio relay systems, to verify its unified-approach capability. All discussed radio services operate continuously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method effective and straightforward to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
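A hypothetical sketch of the compilation idea described above: a base model is pushed through an ordered sequence of transformations, each yielding a more specialized model. The model representation and the two transformations here are invented for illustration; the paper's compilers are far richer.

```python
# A sequence of model transformations applied in order, each producing an
# increasingly specialized model from a general base model.
from functools import reduce

base_model = {"components": ["motor", "bearing", "tachometer"],
              "detail": "full-behavior"}

def drop_behavioral_detail(model):
    """Illustrative transformation: keep only fault-relevant behavior."""
    return {**model, "detail": "fault-modes-only"}

def keep_observable_components(model):
    """Illustrative transformation: keep only externally observable parts."""
    observable = {"motor", "tachometer"}
    return {**model,
            "components": [c for c in model["components"] if c in observable]}

troubleshooting_model = reduce(lambda m, t: t(m),
                               [drop_behavioral_detail,
                                keep_observable_components],
                               base_model)
```

Regenerating the specialized model is then just re-running the pipeline on an updated base model, which is the maintenance benefit the abstract notes.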
Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaffner, Christian
2010-09-15
We present simple protocols for oblivious transfer and password-based identification which are secure against general attacks in the noisy-quantum-storage model as defined in R. Koenig, S. Wehner, and J. Wullschleger [e-print arXiv:0906.1030]. We argue that a technical tool from Koenig et al. suffices to prove security of the known protocols. Whereas the more involved protocol for oblivious transfer from Koenig et al. requires less noise in storage to achieve security, our "canonical" protocols have the advantage of being simpler to implement, and the security error is easier to control. Therefore, our protocols yield higher OT rates for many realistic noise parameters. Furthermore, a proof of security of a direct protocol for password-based identification against general noisy-quantum-storage attacks is given.
Solving da Vinci stereopsis with depth-edge-selective V2 cells
Assee, Andrew; Qian, Ning
2007-01-01
We propose a new model for da Vinci stereopsis based on a coarse-to-fine disparity-energy computation in V1 and disparity-boundary-selective units in V2. Unlike previous work, our model contains only binocular cells, relies on distributed representations of disparity, and has a simple V1-to-V2 feedforward structure. We demonstrate with random dot stereograms that the V2 stage of our model is able to determine the location and the eye-of-origin of monocularly occluded regions and improve disparity map computation. We also examine a few related issues. First, we argue that since monocular regions are binocularly defined, they cannot generally be detected by monocular cells. Second, we show that our coarse-to-fine V1 model for conventional stereopsis explains double matching in Panum’s limiting case. This provides computational support to the notion that the perceived depth of a monocular bar next to a binocular rectangle may not be da Vinci stereopsis per se (Gillam et al., 2003). Third, we demonstrate that some stimuli previously deemed invalid have simple, valid geometric interpretations. Our work suggests that studies of da Vinci stereopsis should focus on stimuli more general than the bar-and-rectangle type and that disparity-boundary-selective V2 cells may provide a simple physiological mechanism for da Vinci stereopsis. PMID:17698163
The magnetisation distribution of the Ising model - a new approach
NASA Astrophysics Data System (ADS)
Hakan Lundow, Per; Rosengren, Anders
2010-03-01
A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalized binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalized binomial and the Ising distribution. The results speak in favour of the generalized binomial distributions' correctness regarding their general behaviour in comparison to the Ising model. A theoretical analysis of the distributions' moments also lends support to their being correct asymptotically, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not settled though.
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. It generally simulates tropical climatology and the ENSO anomaly better than many previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of a strong positive feedback loop in the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1 °C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.
Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States
NASA Astrophysics Data System (ADS)
Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.
2002-05-01
We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.
Some properties of correlations of quantum lattice systems in thermal equilibrium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröhlich, Jürg, E-mail: juerg@phys.ethz.ch; Ueltschi, Daniel, E-mail: daniel@ueltschi.org
Simple proofs of uniqueness of the thermodynamic limit of KMS states and of the decay of equilibrium correlations are presented for a large class of quantum lattice systems at high temperatures. New quantum correlation inequalities for general Heisenberg models are described. Finally, a simplified derivation of a general result on power-law decay of correlations in 2D quantum lattice systems with continuous symmetries is given, extending results of McBryan and Spencer for the 2D classical XY model.
Active earth pressure model tests versus finite element analysis
NASA Astrophysics Data System (ADS)
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction 'from the soil' (the active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
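A classical Rankine/Coulomb-Mohr sketch relevant to the active state studied above: the active earth pressure coefficient Ka = (1 - sin φ)/(1 + sin φ) for a cohesionless soil, and the resulting horizontal pressure at depth. This textbook relation is background to the paper, not its model, and the parameter values are illustrative.

```python
# Rankine active earth pressure for a cohesionless soil with friction
# angle phi: Ka = (1 - sin(phi)) / (1 + sin(phi)), p(z) = Ka * gamma * z.
import math

def active_pressure_coefficient(phi_degrees):
    """Rankine active coefficient Ka for friction angle phi (degrees)."""
    s = math.sin(math.radians(phi_degrees))
    return (1.0 - s) / (1.0 + s)

def active_pressure(gamma, z, phi_degrees):
    """Horizontal active pressure at depth z for soil unit weight gamma."""
    return active_pressure_coefficient(phi_degrees) * gamma * z

ka_30 = active_pressure_coefficient(30.0)  # about 1/3 for phi = 30 degrees
```

Because Ka < 1, the wall translating away from the soil mobilizes less than the at-rest horizontal stress, which is what "active state" means in the tests above.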
Shanableh, A
2005-01-01
The main objective of this study was to develop generalized first-order kinetic models to represent hydrothermal decomposition and oxidation of biosolids within a wide range of temperatures (200-450 °C). A lumping approach was used in which oxidation of the various organic ingredients was characterized by the chemical oxygen demand (COD), and decomposition was characterized by the particulate (i.e., nonfilterable) chemical oxygen demand (PCOD). Using the Arrhenius equation (k = k0*e^(-Ea/RT)), activation energy (Ea) levels were derived from 42 continuous-flow hydrothermal treatment experiments conducted at temperatures in the range of 200-450 °C. Using predetermined values for k0 in the Arrhenius equation, the activation energies of the various organic ingredients were separated into 42 values for oxidation and a similar number for decomposition. The activation energy values were then classified into levels representing the relative ease with which the organic ingredients of the biosolids were oxidized or decomposed. The resulting simple first-order kinetic models adequately represented, within the experimental data range, hydrothermal decomposition of the organic particles as measured by PCOD and oxidation of the organic content as measured by COD. The modeling approach presented in the paper provides a simple and general framework suitable for assessing the relative reaction rates of the various organic ingredients of biosolids.
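A sketch of the lumped first-order scheme the study uses: a rate constant from the Arrhenius equation k = k0*e^(-Ea/RT) drives first-order decay of a lumped concentration such as COD. The k0 and Ea values below are illustrative, not the fitted values from the 42 experiments.

```python
# Arrhenius rate constant and first-order decay of a lumped concentration:
# k(T) = k0 * exp(-Ea / (R*T)),  COD(t) = COD0 * exp(-k*t).
import math

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def arrhenius_k(k0, ea, temp_k):
    """First-order rate constant at absolute temperature temp_k (K)."""
    return k0 * math.exp(-ea / (R_GAS * temp_k))

def cod_remaining(cod0, k0, ea, temp_k, t):
    """Lumped COD after time t under first-order kinetics."""
    return cod0 * math.exp(-arrhenius_k(k0, ea, temp_k) * t)

k_hot = arrhenius_k(k0=1e6, ea=8.0e4, temp_k=723.15)   # ~450 C
k_cool = arrhenius_k(k0=1e6, ea=8.0e4, temp_k=473.15)  # ~200 C
```

The temperature sensitivity between the two ends of the studied range follows directly from the exponential Ea/RT dependence.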
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for later use in breeding programs. The number of ticks per animal is characterized as a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros that are not zero-inflated, the Poisson model or a data transformation approach, such as a square-root or Box-Cox transformation, is applicable. PMID:22215960
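A sketch of the zero-inflated Poisson (ZIP) distribution compared in the study: with probability pi the count is a "structural" zero (e.g. an uninfested animal), and otherwise it is Poisson(lam). The parameter values are illustrative, not estimates from the cattle data.

```python
# Zero-inflated Poisson pmf:
#   P(K = 0) = pi + (1 - pi) * exp(-lam)
#   P(K = k) = (1 - pi) * exp(-lam) * lam^k / k!   for k >= 1
import math

def zip_pmf(k, pi, lam):
    """P(K = k) for the zero-inflated Poisson distribution."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson

total = sum(zip_pmf(k, pi=0.3, lam=2.5) for k in range(60))
p_zero = zip_pmf(0, pi=0.3, lam=2.5)  # inflated relative to plain Poisson
```

The excess mass at zero relative to a plain Poisson with the same lam is exactly the feature that motivates ZIP models for tick counts.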
Liquid-liquid critical point in a simple analytical model of water.
Urbic, Tomaz
2016-10-01
A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is presented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between the low-density and high-density fluid. The coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: in one part we have a gas region, in the second a high-density liquid, and in the third a low-density liquid.
Liquid-liquid critical point in a simple analytical model of water
NASA Astrophysics Data System (ADS)
Urbic, Tomaz
2016-10-01
A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: one contains the gas region, the second a high-density liquid, and the third a low-density liquid.
Mathematical modeling of high-pH chemical flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhuyan, D.; Lake, L.W.; Pope, G.A.
1990-05-01
This paper describes a generalized compositional reservoir simulator for high-pH chemical flooding processes. This simulator combines the reaction chemistry associated with these processes with the extensive physical- and flow-property modeling schemes of an existing micellar/polymer flood simulator, UTCHEM. Application of the model is illustrated for cases from a simple alkaline preflush to surfactant-enhanced alkaline-polymer flooding.
Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model
NASA Technical Reports Server (NTRS)
Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.
2009-01-01
Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with strong dependence on the magnetic turbulence but a rather weak one on the coupled particles attributes can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.
A powerful and flexible approach to the analysis of RNA sequence count data
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.
2011-01-01
Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qing; Shi, Chaowei; Yu, Lu
Internal backbone dynamic motions are essential for different protein functions and occur on a wide range of time scales, from femtoseconds to seconds. Molecular dynamics (MD) simulations and nuclear magnetic resonance (NMR) spin relaxation measurements are valuable tools to gain access to fast (nanosecond) internal motions. However, there exist few reports on correlation analysis between MD and NMR relaxation data. Here, backbone relaxation measurements of ¹⁵N-labeled SH3 (Src homology 3) domain proteins in aqueous buffer were used to generate general order parameters (S²) using a model-free approach. Simultaneously, 80 ns MD simulations of SH3 domain proteins in a defined hydrated box at neutral pH were conducted and the general order parameters (S²) were derived from the MD trajectory. Correlation analysis using the Gromos force field indicated that S² values from NMR relaxation measurements and MD simulations were significantly different. MD simulations were performed on models with different charge states for three histidine residues, and with different water models, namely the SPC (simple point charge) and SPC/E (extended simple point charge) water models. S² parameters from MD simulations with charges for all three histidines and with the SPC/E water model correlated well with S² calculated from the experimental NMR relaxation measurements, in a site-specific manner. - Highlights: • Correlation analysis between NMR relaxation measurements and MD simulations. • General order parameter (S²) as common reference between the two methods. • Different protein dynamics with different histidine charge states at neutral pH. • Different protein dynamics with different water models.
NASA Astrophysics Data System (ADS)
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W
2017-02-01
Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (Tc) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.
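The Givoni-Goldman-type prediction of core temperature rise has the general shape of an exponential approach to an equilibrium value after a delay. A minimal sketch of that shape follows; the functional form is a common simplification and every coefficient is illustrative, not an actual HSDA input or parameter.

```python
import math

def core_temp(t_min, tc0=37.0, tc_eq=38.5, k=0.05, delay=10.0):
    """Core temperature (deg C) at time t_min (minutes): flat at the
    resting value tc0 until `delay`, then an exponential approach to
    the equilibrium value tc_eq with rate constant k.
    All parameter values here are illustrative."""
    if t_min <= delay:
        return tc0
    return tc0 + (tc_eq - tc0) * (1.0 - math.exp(-k * (t_min - delay)))
```

In a full model the equilibrium value tc_eq and rate k would themselves be functions of the four input elements (individual, activity, clothing, environment); here they are fixed constants.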
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Adding Temporal Characteristics to Geographical Schemata and Instances: A General Framework
NASA Astrophysics Data System (ADS)
Ota, Morishige
2018-05-01
This paper proposes the temporal general feature model (TGFM) as a meta-model for application schemata representing changes of real-world phenomena. It is not very easy to determine history directly from the current application schemata, even if the revision notes are attached to the specification. To solve this problem, the rules for description of the succession between previous and posterior components are added to the general feature model, thus resulting in TGFM. After discussing the concepts associated with the new model, simple examples of application schemata are presented as instances of TGFM. Descriptors for changing properties, the succession of changing properties in moving features, and the succession of features and associations are introduced. The modeling methods proposed in this paper will contribute to the acquisition of consistent and reliable temporal geospatial data.
Dynamics of Social Group Competition: Modeling the Decline of Religious Affiliation
NASA Astrophysics Data System (ADS)
Abrams, Daniel M.; Yaple, Haley A.; Wiener, Richard J.
2011-08-01
When social groups compete for members, the resulting dynamics may be understandable with mathematical models. We demonstrate that a simple ordinary differential equation (ODE) model is a good fit for religious shift by comparing it to a new international data set tracking religious nonaffiliation. We then generalize the model to include the possibility of nontrivial social interaction networks and examine the limiting case of a continuous system. Analytical and numerical predictions of this generalized system, which is robust to polarizing perturbations, match those of the original ODE model and justify its agreement with real-world data. The resulting predictions highlight possible causes of social shift and suggest future lines of research in both physics and sociology.
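A sketch of the kind of ODE described above: a hypothetical two-group competition equation for the fraction x of the population in one group, integrated with Euler steps. The functional form and all parameters below are assumptions for illustration, not necessarily the authors' exact model.

```python
def simulate_shift(x0, u, a=1.0, c=1.0, dt=0.01, steps=20000):
    """Euler integration of an assumed two-group competition ODE:
        dx/dt = c*(1-x)*x**a*u - c*x*(1-x)**a*(1-u)
    where x is the fraction in group X, u in (0,1) is the perceived
    utility of belonging to X, and a weights the attraction of group
    size. For a = 1 this reduces to logistic-style dynamics in which
    the group with higher utility absorbs the population."""
    x = x0
    for _ in range(steps):
        dx = c * (1 - x) * x**a * u - c * x * (1 - x)**a * (1 - u)
        x += dt * dx
    return x

# A group with utility u > 0.5 grows toward the whole population;
# with u < 0.5 it declines toward extinction.
grow = simulate_shift(0.2, 0.7)
decline = simulate_shift(0.8, 0.3)
```

This captures the paper's qualitative claim, that one group approaches fixation, without the network or continuum generalizations.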
Pragmatic model of patient satisfaction in general practice: progress towards a theory.
Baker, R
1997-01-01
A major problem in the measurement of patient satisfaction is the lack of an adequate theory to explain the meaning of satisfaction, and hence how it should be measured and how the findings should be interpreted. Because of the lack of a fully developed theory, when developing patient satisfaction questionnaires for use in general practice, a simple model was used. This model was pragmatic in that it linked together empirical evidence about patient satisfaction without recourse to more general social or psychological theory of behaviour, other than to define satisfaction as an attitude. Several studies with the questionnaires generally confirm the components of the model. However, the importance of personal care had not been sufficiently emphasised, and therefore the model has been revised. It can now serve as a basis for future research into patient satisfaction, in particular as a stimulus for investigating the links between components of the model and underlying psychological or other behavioural theories. PMID:10177036
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Technical Reports Server (NTRS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-01-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Astrophysics Data System (ADS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-05-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon
2017-04-05
Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model. However, in several cases, they could be explained through the addition of a second model parameter, a simple scaling term that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
Oral History of Coastal Engineering Activities in Southern California, 1930-1981,
1986-01-01
COASTAL POWERPLANT PROJECTS: Diablo Canyon Powerplant Project; Edison Mandalay Steam Generating Station; San Onofre Nuclear Powerplant; Agua Hedionda Powerplant. ... retired as a Major General. The Santa Barbara model was small. It was built to determine whether or not very small, inexpensive, and simple models could
Spatial structures in a simple model of population dynamics for parasite-host interactions
NASA Astrophysics Data System (ADS)
Dong, J. J.; Skinner, B.; Breecher, N.; Schmittmann, B.; Zia, R. K. P.
2015-08-01
Spatial patterning can be crucially important for understanding the behavior of interacting populations. Here we investigate a simple model of parasite and host populations in which parasites are random walkers that must come into contact with a host in order to reproduce. We focus on the spatial arrangement of parasites around a single host, and we derive, using analytics and numerical simulations, the necessary conditions placed on the parasite fecundity and lifetime for the population's long-term survival. We also show that the parasite population can be pushed to extinction by a large drift velocity, but, counterintuitively, a small drift velocity generally increases the parasite population.
Algebraic perturbation theory for dense liquids with discrete potentials
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2007-06-01
A simple theory for the leading-order correction g1(r) to the structure of a hard-sphere liquid with discrete (e.g., square-well) potential perturbations is proposed. The theory makes use of a general approximation that effectively eliminates four-particle correlations from g1(r) with good accuracy at high densities. For the particular case of discrete perturbations, the remaining three-particle correlations can be modeled with a simple volume-exclusion argument, resulting in an algebraic and surprisingly accurate expression for g1(r) . The structure of a discrete “core-softened” model for liquids with anomalous thermodynamic properties is reproduced as an application.
NASA Technical Reports Server (NTRS)
Miller, R. D.; Rogers, J. T.
1975-01-01
General requirements for dynamic loads analyses are described. The unsteady subsonic aerodynamic representation based on the indicial lift growth function is reviewed, and the FLEXSTAB CPS is evaluated with respect to these general requirements. The effects of residual flexibility techniques on dynamic loads analyses are also evaluated using a simple dynamic model.
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
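The standard result quoted above is easy to check by simulation: averaging many sample variances (computed with the n-1 divisor) recovers σ². A minimal stdlib-only sketch, with arbitrary choices μ = 10 and σ = 2:

```python
import random

def sample_variance(xs):
    """Unbiased sample variance with the n-1 divisor."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

random.seed(0)
# Average of 20000 sample variances of n=5 draws from N(10, 2^2):
# should be close to sigma^2 = 4.
est = sum(sample_variance([random.gauss(10, 2) for _ in range(5)])
          for _ in range(20000)) / 20000
```

The unbiasedness holds for any distribution with finite variance, not just the Gaussian used here, which is the point of the uncorrelated-variables result.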
Structure of the conservation laws in quantum integrable spin chains with short range interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabowski, M.P.; Mathieu, P.
1995-11-01
The authors present a detailed analysis of the structure of the conservation laws in quantum integrable chains of the XYZ-type and in the Hubbard model. The essential tool for the former class of models is the boost operator, which provides a recursive way of calculating the integrals of motion. With its help, they establish the general form of the XYZ conserved charges in terms of simple polynomials in spin variables and derive recursion relations for the relative coefficients of these polynomials. Although these relations are difficult to solve in general, a subset of the coefficients can be determined. Moreover, for two submodels of the XYZ chain, namely the XXX and XY cases, all the charges can be calculated in closed form. Using this approach, the authors rederive the known expressions for the XY charges in a novel way. For the XXX case, a simple description of conserved charges is found in terms of a Catalan tree. This construction is generalized for the su(M) invariant integrable chain. They also investigate the circumstances permitting the existence of a recursive (ladder) operator in general quantum integrable systems. They indicate that a quantum ladder operator can be traced back to the presence of a Hamiltonian mastersymmetry of degree one in the classical continuous version of the model. In this way, quantum chains endowed with a recursive structure can be identified from the properties of their classical relatives. The authors also show that in the quantum continuous limits of the XYZ model, the ladder property of the boost operator disappears. For the Hubbard model they demonstrate the nonexistence of a ladder operator. Nevertheless, the general structure of the conserved charges is indicated, and the expression for the terms linear in the model's free parameter for all charges is derived in closed form. 62 refs., 4 figs.
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
This paper presents a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently represent extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
Modeling the radiation pattern of LEDs.
Moreno, Ivan; Sun, Ching-Cherng
2008-02-04
Light-emitting diodes (LEDs) come in many varieties and with a wide range of radiation patterns. We propose a general, simple but accurate analytic representation for the radiation pattern of the light emitted from an LED. To accurately render both the angular intensity distribution and the irradiance spatial pattern, a simple phenomenological model takes into account the emitting surfaces (chip, chip array, or phosphor surface), and the light redirected by both the reflecting cup and the encapsulating lens. Mathematically, the pattern is described as the sum of a maximum of two or three Gaussian or cosine-power functions. The resulting equation is widely applicable for any kind of LED of practical interest. We accurately model a wide variety of radiation patterns from several world-class manufacturers.
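The "sum of Gaussian or cosine-power functions" form described above can be sketched directly. The term parameters below are invented for illustration, not fitted to any real LED:

```python
import math

def led_intensity(theta_deg, terms):
    """Relative radiant intensity at off-axis angle theta (degrees),
    modeled as a sum of lobes. Each term is either
      ('cos', weight, m)                 -> weight * cos(theta)**m
      ('gauss', weight, center, width)   -> weight * exp(-((theta-center)/width)**2)
    following the sum-of-functions form described above; the specific
    terms used are illustrative assumptions."""
    t = math.radians(theta_deg)
    total = 0.0
    for term in terms:
        if term[0] == 'cos':
            _, w, m = term
            total += w * max(math.cos(t), 0.0) ** m
        else:
            _, w, c, s = term
            total += w * math.exp(-((theta_deg - c) / s) ** 2)
    return total

# Hypothetical LED: a Lambertian-like main lobe plus a weak side lobe at 40 deg.
pattern = [('cos', 1.0, 1), ('gauss', 0.3, 40.0, 15.0)]
```

In practice the two or three term weights, exponents, and widths would be fitted to a manufacturer's measured intensity curve.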
Time-independent models of asset returns revisited
NASA Astrophysics Data System (ADS)
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student's t-distribution, Lévy, truncated Lévy, general stable, mixed diffusion-jump, and compound normal distributions. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-like than it is. This phenomenon is especially evident in the artificial data, which have been generated by an inflated random walk process.
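One diagnostic behind such model comparisons is excess kurtosis: fat-tailed alternatives such as the compound normal (a mixture of normals) show kurtosis well above the Gaussian value of zero. A small illustration with synthetic data (not the index data used in the study; mixture weights and volatilities are arbitrary):

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3,
    so that a normal sample gives a value near zero."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    k = sum((x - m) ** 4 for x in xs) / n
    return k / v ** 2 - 3.0

rng = random.Random(3)
normal = [rng.gauss(0, 1) for _ in range(50000)]
# Compound-normal returns: 10% "high-volatility days" with triple
# the standard deviation, a crude stand-in for a mixture model.
mixture = [rng.gauss(0, 3) if rng.random() < 0.1 else rng.gauss(0, 1)
           for _ in range(50000)]
```

For this mixture the population excess kurtosis is about 5.3, while the pure normal sample stays near zero, which is why the normal model fails where the heavier-tailed models succeed.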
General model and control of an n rotor helicopter
NASA Astrophysics Data System (ADS)
Sidea, A. G.; Yding Brogaard, R.; Andersen, N. A.; Ravn, O.
2014-12-01
The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers were implemented for attitude control. Both model and controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control is possible, by replacing the physical values for the individual systems.
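A SISO attitude controller of the kind mentioned above can be sketched as a PD loop on a single axis, modeling that axis as a double integrator. This is a generic illustration with unit inertia and arbitrary gains, not the paper's multirotor model or controllers:

```python
def simulate_attitude(kp, kd, target, dt=0.001, steps=5000):
    """One attitude axis as a unit-inertia double integrator under a
    PD law (torque = kp*error - kd*rate), integrated with Euler steps.
    Gains and the unit inertia are illustrative assumptions."""
    angle, rate = 0.0, 0.0
    for _ in range(steps):
        torque = kp * (target - angle) - kd * rate
        rate += torque * dt      # angular acceleration = torque / inertia (=1)
        angle += rate * dt
    return angle

# Step command of 0.5 rad; with these gains the loop is well damped
# (damping ratio ~0.9) and settles within the simulated 5 s.
final = simulate_attitude(kp=20.0, kd=8.0, target=0.5)
```

In an n-rotor vehicle one such loop would run per controlled axis, with a mixer mapping the commanded torques to individual rotor thrusts.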
NASA Technical Reports Server (NTRS)
Poole, L. R.; Huckins, E. K., III
1972-01-01
A general theory on mathematical modeling of elastic parachute suspension lines during the unfurling process was developed. Massless-spring modeling of suspension-line elasticity was evaluated in detail. For this simple model, equations which govern the motion were developed and numerically integrated. The results were compared with flight test data. In most regions, agreement was satisfactory. However, poor agreement was obtained during periods of rapid fluctuations in line tension.
Modeling the CAPTEX Vertical Tracer Concentration Profiles.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.; Stunder, Barbara J. B.
1988-05-01
Perfluorocarbon tracer concentration profiles measured by aircraft 600-900 km downwind of the release locations during CAPTEX are discussed and compared with some model results. In general, the concentrations decreased with height in the upper half of the boundary layer where the aircraft measurements were made. The results of a model sensitivity study suggested that the shape of the profile was primarily due to winds increasing with height and relative position of the sampling with respect to the upwind and downwind edge of the plume. Further modeling studies showed that relatively simple vertical mixing parameterizations could account for the complex vertical plume structure when the model had sufficient vertical resolution. In general, the model performed better with slower winds and corresponding longer transport times.
Agent Model Development for Assessing Climate-Induced Geopolitical Instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boslough, Mark B.; Backus, George A.
2005-12-01
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
A systems approach to theoretical fluid mechanics: Fundamentals
NASA Technical Reports Server (NTRS)
Anyiwo, J. C.
1978-01-01
A preliminary application of the underlying principles of the investigator's general system theory to the description and analyses of the fluid flow system is presented. An attempt is made to establish practical models, or elements of the general fluid flow system from the point of view of the general system theory fundamental principles. Results obtained are applied to a simple experimental fluid flow system, as test case, with particular emphasis on the understanding of fluid flow instability, transition and turbulence.
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2011-02-23
INTRODUCTION; 2.2 GENERAL MODEL SETUP; 2.2.1 Co-Simulation Principles; 2.2.2 Double pendulum: a simple example; 2.2.3 Description of numerical... pendulum sample problem; 2.3 DISCUSSION OF APPROACH WITH RESPECT TO PROPOSED SUBTASKS; 2.4 RESULTS DISCUSSION AND FUTURE WORK; TASK 3... [Kim and Praehofer 2000]. 2.2.2 Double pendulum: a simple example. In order to be able to evaluate co-simulation principles, specifically an
Extended Poisson process modelling and analysis of grouped binary data.
Faddy, Malcolm J; Smith, David M
2012-05-01
A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
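The binomially distributed special case can be checked numerically. If the event rate after n occurrences is (N − n)λ (each of N independent sources fires once at rate λ), the count in [0, t] is Binomial(N, 1 − e^(−λt)), hence under-dispersed (variance below the mean). A sketch with illustrative parameters:

```python
import math
import random

def count_events(N, lam, t):
    """Events in [0, t] of a Poisson-process extension whose rate is
    (N - n)*lam once n events have occurred (memoryless waiting times)."""
    n, now = 0, 0.0
    while n < N:
        now += random.expovariate((N - n) * lam)
        if now > t:
            break
        n += 1
    return n

random.seed(1)
N, lam, t, reps = 10, 0.2, 1.0, 20000
counts = [count_events(N, lam, t) for _ in range(reps)]
mean = sum(counts) / reps
var = sum((c - mean) ** 2 for c in counts) / reps
p = 1 - math.exp(-lam * t)   # theory: Binomial(N, p), so var = N*p*(1-p) < mean
```

The simulated mean and variance match N·p and N·p·(1 − p), illustrating how a state-dependent rate turns Poisson counts into under-dispersed binomial counts.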
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
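The search-curve sampling that classic FAST uses can be sketched for first-order indices only. The driver frequencies and harmonic cutoff below are illustrative choices (chosen to avoid interference), not the paper's algorithm:

```python
import numpy as np

def fast_first_order(model, freqs, n=10001, harmonics=4):
    """Search-curve FAST sketch: each parameter is driven by an integer
    frequency along one periodic curve through the unit hypercube; the
    Fourier power at that frequency and its harmonics estimates the
    parameter's first-order partial variance."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    x = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi  # (k, n) samples
    y = model(x)
    amp = np.fft.rfft(y) / n
    power = 2 * np.abs(amp[1:]) ** 2      # power spectrum without the mean term
    total = power.sum()                   # approximates Var(y)
    return np.array([sum(power[h * w - 1] for h in range(1, harmonics + 1))
                     for w in freqs]) / total

# Toy additive model y = x1 + 2*x2 with uniform inputs:
# true first-order indices are (1/12)/(5/12) = 0.2 and (4/12)/(5/12) = 0.8.
S = fast_first_order(lambda x: x[0] + 2 * x[1], [11, 35])
```

Because the curve samples each parameter as a triangle wave at its own frequency, the spectral power near each frequency isolates that parameter's main-effect contribution to the output variance, which is the core idea the paper extends to interaction effects.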
Modeling fibrous biological tissues with a general invariant that excludes compressed fibers
NASA Astrophysics Data System (ADS)
Li, Kewei; Ogden, Ray W.; Holzapfel, Gerhard A.
2018-01-01
Dispersed collagen fibers in fibrous soft biological tissues have a significant effect on the overall mechanical behavior of the tissues. Constitutive modeling of the detailed structure obtained by using advanced imaging modalities has been investigated extensively in the last decade. In particular, our group has previously proposed a fiber dispersion model based on a generalized structure tensor. However, the fiber tension-compression switch described in that study is unable to exclude compressed fibers within a dispersion and the model requires modification so as to avoid some unphysical effects. In a recent paper we have proposed a method which avoids such problems, but in this present study we introduce an alternative approach by using a new general invariant that only depends on the fibers under tension so that compressed fibers within a dispersion do not contribute to the strain-energy function. We then provide expressions for the associated Cauchy stress and elasticity tensors in a decoupled form. We have also implemented the proposed model in a finite element analysis program and illustrated the implementation with three representative examples: simple tension and compression, simple shear, and unconfined compression on articular cartilage. We have obtained very good agreement with the analytical solutions that are available for the first two examples. The third example shows the efficacy of the fibrous tissue model in a larger scale simulation. For comparison we also provide results for the three examples with the compressed fibers included, and the results are completely different. If the distribution of collagen fibers is such that it is appropriate to exclude compressed fibers then such a model should be adopted.
Meesters, Johannes A J; Koelmans, Albert A; Quik, Joris T K; Hendriks, A Jan; van de Meent, Dik
2014-05-20
Screening level models for environmental assessment of engineered nanoparticles (ENP) are not generally available. Here, we present SimpleBox4Nano (SB4N) as the first model of this type, assess its validity, and evaluate it by comparisons with a known material flow model. SB4N expresses ENP transport and concentrations in and across air, rain, surface waters, soil, and sediment, accounting for nanospecific processes such as aggregation, attachment, and dissolution. The model solves simultaneous mass balance equations (MBE) using simple matrix algebra. The MBEs link all concentrations and transfer processes using first-order rate constants for all processes known to be relevant for ENPs. The first-order rate constants are obtained from the literature. The output of SB4N is mass concentrations of ENPs as free dispersive species, heteroaggregates with natural colloids, and larger natural particles in each compartment in time and at steady state. Known scenario studies for Switzerland were used to demonstrate the impact of the transport processes included in SB4N on the prediction of environmental concentrations. We argue that SB4N-predicted environmental concentrations are useful as background concentrations in environmental risk assessment.
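The mass-balance machinery described is simple matrix algebra: first-order loss processes sit on the diagonal of a rate matrix, inter-compartment transfers sit off the diagonal, and the steady state solves one linear system. A hypothetical three-compartment sketch (the rate constants are invented for illustration; SB4N's actual processes and values differ):

```python
import numpy as np

# Invented three-compartment system: 0 = air, 1 = water, 2 = sediment.
k_dep = 1e-2                            # air -> water deposition
k_set = 5e-3                            # water -> sediment attachment/settling
k_loss = np.array([1e-3, 1e-4, 1e-5])   # dissolution/degradation per compartment

# dC/dt = e - K @ C, so the steady state solves the linear system K @ C = e.
K = np.diag(k_loss)
K[0, 0] += k_dep
K[1, 0] -= k_dep    # what air loses, water gains
K[1, 1] += k_set
K[2, 1] -= k_set    # what water loses, sediment gains

e = np.array([1.0, 0.0, 0.0])   # emission into air only
C = np.linalg.solve(K, e)       # steady-state amounts per compartment
```

Adding a compartment or a nanospecific process (aggregation, attachment, dissolution) only adds rows, columns, or rate terms to K, which is what makes the first-order formulation attractive for screening-level models.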
Des Roches, Carrie A; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David; Kiran, Swathi
2016-12-01
The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type.
Acoustical and Other Physical Properties of Marine Sediments
1991-01-01
[Report excerpt; extracted fragments] Topics: granular structure of rocks; anisotropic poroelasticity and Biot's parameters. Part 1: a simple analytical model has been developed to describe the aforementioned properties. Part 4: prediction of wave propagation in a submarine environment requires modeling the acoustic response of the ocean bottom; Biot's theory is a promising approach for modelling acoustic wave propagation in ocean sediments, which generally consist of elastic or viscoelastic …
NASA Technical Reports Server (NTRS)
Sato, N.; Sellers, P. J.; Randall, D. A.; Schneider, E. K.; Shukla, J.; Kinter, J. L., III; Hou, Y.-T.; Albertazzi, E.
1989-01-01
The Simple Biosphere Model (SiB) of Sellers et al. (1986) was designed to simulate the interactions between the Earth's land surface and the atmosphere by treating the vegetation explicitly and realistically, thereby incorporating biophysical controls on the exchanges of radiation, momentum, sensible and latent heat between the two systems. The steps taken to implement SiB in a modified version of the National Meteorological Center's spectral GCM are described. The coupled model (SiB-GCM) was used to produce summer and winter simulations. The same GCM was used with a conventional hydrological model (Ctl-GCM) to produce comparable 'control' summer and winter simulations. It was found that SiB-GCM produced a more realistic partitioning of energy at the land surface than Ctl-GCM. Generally, SiB-GCM produced more sensible heat flux and less latent heat flux over vegetated land than did Ctl-GCM, and this resulted in the development of a much deeper daytime planetary boundary layer and reduced precipitation rates over the continents in SiB-GCM. In the summer simulation, the 200 mb jet stream and the wind speed at 850 mb were slightly weakened in the SiB-GCM relative to the Ctl-GCM results and equivalent analyses from observations.
Modeling and experimental characterization of electromigration in interconnect trees
NASA Astrophysics Data System (ADS)
Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.
1999-11-01
Most modeling and experimental characterization of interconnect reliability is focussed on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as `trees.' An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of `immortality' demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and models for operation. To complete a circuit-level-reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations and early experimental results on the reliability of interconnect trees are shown to be consistent.
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
Clarifying the Dynamics of the General Circulation: Phillips's 1956 Experiment.
NASA Astrophysics Data System (ADS)
Lewis, John M.
1998-01-01
In the mid-1950s, amid heated debate over the physical mechanisms that controlled the known features of the atmosphere's general circulation, Norman Phillips simulated hemispheric motion on the high-speed computer at the Institute for Advanced Study. A simple energetically consistent model was integrated for a simulated time of approximately 1 month. Analysis of the model results clarified the respective roles of the synoptic-scale eddies (cyclones-anticyclones) and mean meridional circulation in the maintenance of the upper-level westerlies and the surface wind regimes. Furthermore, the modeled cyclones clearly linked surface frontogenesis with the upper-level Charney-Eady wave. In addition to discussing the model results in light of the controversy and ferment that surrounded general circulation theory in the 1940s-1950s, an effort is made to follow Phillips's scientific path to the experiment.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root- n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
The practical use of simplicity in developing ground water models
Hill, M.C.
2006-01-01
The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
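The Poisson special case lends itself to a simulation check: make the main-terms working model deliberately misspecified (the true log rate is quadratic in the covariate) and compare the fitted treatment coefficient with the model-free marginal log rate ratio. All parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
A = rng.integers(0, 2, n)                  # randomized binary treatment
W = rng.normal(size=n)                     # baseline covariate
mu = np.exp(0.1 + 0.6 * A + 0.2 * W**2)    # truth is NOT linear in W
Y = rng.poisson(mu)                        # true marginal log rate ratio = 0.6

# Fit the misspecified main-terms working model log mu = b0 + b1*A + b2*W
# by Newton-Raphson (the Poisson maximum likelihood estimate).
X = np.column_stack([np.ones(n), A, W])
b = np.array([np.log(Y.mean()), 0.0, 0.0])
for _ in range(25):
    m = np.exp(X @ b)
    b = b + np.linalg.solve(X.T @ (X * m[:, None]), X.T @ (Y - m))

# Model-free target: marginal log rate ratio from the randomized arms.
marginal = np.log(Y[A == 1].mean() / Y[A == 0].mean())
```

Despite the wrong working model, the treatment coefficient b[1] lands on the marginal log rate ratio, which is the robustness property the abstract describes.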
Mathematical modeling of spinning elastic bodies for modal analysis.
NASA Technical Reports Server (NTRS)
Likins, P. W.; Barbera, F. J.; Baddeley, V.
1973-01-01
The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.
Basinwide response of the Atlantic Meridional Overturning Circulation to interannual wind forcing
NASA Astrophysics Data System (ADS)
Zhao, Jian
2017-12-01
An eddy-resolving Ocean General Circulation Model for the Earth Simulator (OFES) and a simple wind-driven two-layer model are used to investigate the role of momentum fluxes in driving the Atlantic Meridional Overturning Circulation (AMOC) variability throughout the Atlantic basin from 1950 to 2010. Diagnostic analysis using the OFES results suggests that interior baroclinic Rossby waves and coastal topographic waves play essential roles in modulating the AMOC interannual variability. The proposed mechanisms are verified in the context of a simple two-layer model with realistic topography and only forced by surface wind. The topographic waves communicate high-latitude anomalies into lower latitudes and account for about 50% of the AMOC interannual variability in the subtropics. In addition, the large scale Rossby waves excited by wind forcing together with topographic waves set up coherent AMOC interannual variability patterns across the tropics and subtropics. The comparisons between the simple model and OFES results suggest that a large fraction of the AMOC interannual variability in the Atlantic basin can be explained by wind-driven dynamics.
Principles of protein folding--a perspective from simple exact models.
Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.
1995-01-01
General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
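A toy version of such a simple exact lattice (HP) model is easy to reproduce: enumerate every short self-avoiding walk on the square lattice and score hydrophobic (H-H) contacts between residues not adjacent in sequence. The 9-residue sequence below is an arbitrary example, not one from the paper:

```python
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def conformations(n):
    """All n-site self-avoiding walks on the square lattice, first step
    fixed to the right (symmetry reduction)."""
    def extend(path):
        if len(path) == n:
            yield path
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:
                yield from extend(path + [nxt])
    yield from extend([(0, 0), (1, 0)])

def energy(path, seq):
    """-1 for every H-H contact between residues not adjacent in sequence."""
    pos = {p: i for i, p in enumerate(path)}
    e = 0
    for i, p in enumerate(path):
        if seq[i] == 'H':
            for dx, dy in MOVES:
                j = pos.get((p[0] + dx, p[1] + dy))
                if j is not None and j > i + 1 and seq[j] == 'H':
                    e -= 1
    return e

seq = "HPHPPHHPH"   # arbitrary example sequence of H (hydrophobic) / P (polar)
best = min(conformations(len(seq)), key=lambda p: energy(p, seq))
```

Exhaustive enumeration is exactly what makes these models "exact": for short chains every conformation can be visited, so ground states and folding funnels can be characterized without sampling bias.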
Preliminary model for high-power-waveguide arcing and arc protection
NASA Technical Reports Server (NTRS)
Yen, H. C.
1978-01-01
The arc protection subsystems that are implemented in the DSN high power transmitters are discussed. The status of present knowledge about waveguide arcs is reviewed in terms of a simple engineering model. A fairly general arc detection scheme is also discussed. Areas where further studies are needed are pointed out along with proposed approaches to the solutions of these problems.
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
NMR signals within the generalized Langevin model for fractional Brownian motion
NASA Astrophysics Data System (ADS)
Lisý, Vladimír; Tóthová, Jana
2018-03-01
The methods of Nuclear Magnetic Resonance belong to the best developed and often used tools for studying random motion of particles in different systems, including soft biological tissues. In the long-time limit the current mathematical description of the experiments allows proper interpretation of measurements of normal and anomalous diffusion. The shorter-time dynamics is however correctly considered only in a few works that do not go beyond the standard memoryless Langevin description of the Brownian motion (BM). In the present work, the attenuation function S (t) for an ensemble of spin-bearing particles in a magnetic-field gradient, expressed in a form applicable for any kind of stationary stochastic dynamics of spins with or without a memory, is calculated in the frame of the model of fractional BM. The solution of the model for particles trapped in a harmonic potential is obtained in an exceedingly simple way and used for the calculation of S (t). In the limit of free particles coupled to a fractal heat bath, the results compare favorably with experiments acquired in human neuronal tissues. The effect of the trap is demonstrated by introducing a simple model for the generalized diffusion coefficient of the particle.
Models for forecasting hospital bed requirements in the acute sector.
Farmer, R D; Emami, J
1990-01-01
STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series for mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and its subsequent use in the formation of hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. Time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
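The contrast between simple trend fitting and a time-series treatment can be sketched on synthetic data. The series below is invented (a declining mean duration of stay with AR(1) disturbances), and the AR(1) fit is a minimal stand-in for a full Box-Jenkins analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1969, 1983)          # 14 annual values, as in the study period
t = (years - years[0]).astype(float)
ar = np.zeros(t.size)                  # AR(1) disturbances with phi = 0.7
for i in range(1, t.size):
    ar[i] = 0.7 * ar[i - 1] + rng.normal(scale=0.2)
stay = 8.0 - 0.25 * t + ar             # invented mean-duration-of-stay series

# Simple trend fitting: a straight line extrapolated one year ahead.
slope, intercept = np.polyfit(t, stay, 1)
trend_forecast = intercept + slope * (t[-1] + 1)

# Time-series alternative: also model the residual autocorrelation.
resid = stay - (intercept + slope * t)
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
ar_forecast = trend_forecast + phi * resid[-1]
```

The trend line ignores the serial correlation in the residuals, while the AR term carries the most recent departure from trend into the forecast; that difference is the essence of the abstract's argument against pure trend fitting.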
Swanson, Jon; Audie, Joseph
2018-01-01
A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, and hydrophobicity and hydrophilicity, and (4) high quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
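The two simplest combiners can be sketched directly. The inverse-MSE weighting below is an illustrative WAM-style scheme fitted on a calibration period; the DMIP study's exact weighting may differ:

```python
import numpy as np

def simple_multimodel_average(preds):
    """SMA: unweighted mean over member models (axis 0 indexes the model)."""
    return preds.mean(axis=0)

def weighted_average(preds, calib_preds, calib_obs):
    """WAM-style combination: weights inversely proportional to each
    member's mean squared error over a calibration period."""
    inv_mse = 1.0 / ((calib_preds - calib_obs) ** 2).mean(axis=1)
    w = inv_mse / inv_mse.sum()
    return w @ preds
```

With two members, one accurate and one biased, the weighted combination leans heavily toward the accurate member while the simple average splits the difference, which is why skill-weighted schemes tend to beat the SMA when member quality varies.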
Zhang, Guoqiang; Yan, Zhenya; Wen, Xiao-Yong
2017-07-01
The integrable coupled nonlinear Schrödinger equations with four-wave mixing are investigated. We first explore the conditions for modulational instability of continuous waves of this system. Secondly, based on the generalized N-fold Darboux transformation (DT), beak-shaped higher-order rogue waves (RWs) and beak-shaped higher-order rogue wave pairs are derived for the coupled model with attractive interaction in terms of simple determinants. Moreover, we derive the simple multi-dark-dark and kink-shaped multi-dark-dark solitons for the coupled model with repulsive interaction through the generalized DT. We explore their dynamics and classifications by different kinds of spatial-temporal distribution structures including triangular, pentagonal, 'claw-like' and heptagonal patterns. Finally, we perform numerical simulations to predict that some dark solitons and RWs are stable enough to develop within a short time. The results would enrich our understanding on nonlinear excitations in many coupled nonlinear wave systems with transition coupling effects.
NASA Astrophysics Data System (ADS)
Fendley, Paul; Hagendorf, Christian
2010-10-01
We conjecture exact and simple formulas for some physical quantities in two quantum chains. A classic result of this type is Onsager, Kaufman and Yang's formula for the spontaneous magnetization in the Ising model, subsequently generalized to the chiral Potts models. We conjecture that analogous results occur in the XYZ chain when the couplings obey JxJy + JyJz + JxJz = 0, and in a related fermion chain with strong interactions and supersymmetry. We find exact formulas for the magnetization and gap in the former, and the staggered density in the latter, by exploiting the fact that certain quantities are independent of finite-size effects.
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
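The simple random-walk baseline that these schemes generalize can be illustrated on a plain square lattice, where the random-walk picture is exact rather than approximate. This is a hedged sketch: the lattice spacing a, hop time tau, and walker counts are arbitrary choices, and no billiard geometry is modeled.

```python
import numpy as np

# For an unbiased walk on a 2D square lattice with spacing a and one hop
# per time tau, the simple random-walk (Machta-Zwanzig-style) estimate is
# D = a**2 / (4 * tau).  We check it against D measured from the mean
# squared displacement of many independent walkers.
rng = np.random.default_rng(1)
a, tau, steps, walkers = 1.0, 1.0, 500, 2000
moves = np.array([(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)])
hops = moves[rng.integers(0, 4, size=(walkers, steps))]
positions = hops.sum(axis=1)                 # net displacement of each walker
msd = np.mean(np.sum(positions**2, axis=1))  # mean squared displacement
d_measured = msd / (4 * steps * tau)         # <r^2> = 4 D t in two dimensions
d_theory = a**2 / (4 * tau)
```

For the flower billiard the analogous trap-to-trap hopping picture is only approximate, which is why the paper's higher-order corrections and memory effects matter.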
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
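The two regression steps can be sketched with ordinary least squares on synthetic calibration samples. Units, coefficients, and data below are hypothetical; the actual USGS procedure additionally involves MSPE criteria, transformations, and bias corrections not reproduced here.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: coefficients for the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic calibration samples: suspended-sediment concentration (SSC)
# depends on turbidity and, more weakly, on streamflow.
rng = np.random.default_rng(2)
n = 120
turb = rng.uniform(5, 400, n)                          # turbidity (FNU)
flow = rng.uniform(1, 50, n)                           # streamflow (m^3/s)
ssc = 2.1 * turb + 3.0 * flow + rng.normal(0, 20, n)   # SSC (mg/L)

# Step 1: simple linear model, SSC ~ turbidity.
X_simple = np.column_stack([np.ones(n), turb])
# Step 2: multiple linear model, SSC ~ turbidity + streamflow.
X_multi = np.column_stack([np.ones(n), turb, flow])

rmse = lambda X, b: float(np.sqrt(np.mean((ssc - X @ b) ** 2)))
b_simple, b_multi = ols(X_simple, ssc), ols(X_multi, ssc)
```

On data generated this way, adding the streamflow column reduces the unexplained error, which is the situation in which the report recommends adopting the multiple regression model.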
Three dimensional hair model by means particles using Blender
NASA Astrophysics Data System (ADS)
Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos
2010-09-01
The simulation and modeling of human hair is a process of very large computational complexity, owing to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on particle handling graphics. In this paper we present a simple approximation of how to model human hair using particles in Blender.
Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology
Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.
2012-01-01
We present geometry based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON – a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built on a simple model of undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess how the newly introduced and previously included variables improve the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or significance testing. The simulation and classic time-series data experiment results show that the proposed method is simple, reliable and applicable to practical engineering.
NASA Astrophysics Data System (ADS)
Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan
2017-10-01
Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation should be the main reason that the land model ensembles do not improve precipitation simulation. However, if there are big biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.
The muon g - 2 for low-mass pseudoscalar Higgs in the general 2HDM
NASA Astrophysics Data System (ADS)
Cherchiglia, Adriano; Stöckinger, Dominik; Stöckinger-Kim, Hyejung
2018-05-01
The two-Higgs doublet model is a simple and attractive extension of the Standard Model. It provides a possibility to explain the large deviation between theory and experiment in the muon g - 2 in an interesting parameter region: light pseudoscalar Higgs A, large Yukawa coupling to τ-leptons, and general, non-type II Yukawa couplings are preferred. This parameter region is explored, experimental limits on the relevant Yukawa couplings are obtained, and the maximum possible contributions to the muon g - 2 are discussed. Presented at Workshop on Flavour Changing and Conserving Processes (FCCP2017), September 2017
On nonlocally interacting metrics, and a simple proposal for cosmic acceleration
NASA Astrophysics Data System (ADS)
Vardanyan, Valeri; Akrami, Yashar; Amendola, Luca; Silvestri, Alessandra
2018-03-01
We propose a simple, nonlocal modification to general relativity (GR) on large scales, which provides a model of late-time cosmic acceleration in the absence of the cosmological constant and with the same number of free parameters as in standard cosmology. The model is motivated by adding to the gravity sector an extra spin-2 field interacting nonlocally with the physical metric coupled to matter. The form of the nonlocal interaction is inspired by the simplest form of the Deser-Woodard (DW) model, αR(1/□)R, with one of the Ricci scalars being replaced by a constant m², and gravity is therefore modified in the infrared by adding a simple term of the form m²(1/□)R to the Einstein-Hilbert term. We study cosmic expansion histories, and demonstrate that the new model can provide background expansions consistent with observations if m is of the order of the Hubble expansion rate today, in contrast to the simple DW model with no viable cosmology. The model is best fit by w0 ≈ -1.075 and wa ≈ 0.045. We also compare the cosmology of the model to that of Maggiore and Mancarella (MM), m²R(1/□²)R, and demonstrate that the viable cosmic histories follow the standard-model evolution more closely compared to the MM model. We further demonstrate that the proposed model possesses the same number of physical degrees of freedom as in GR. Finally, we discuss the appearance of ghosts in the local formulation of the model, and argue that they are unphysical and harmless to the theory, keeping the physical degrees of freedom healthy.
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
Game-Theoretic Models of Information Overload in Social Networks
NASA Astrophysics Data System (ADS)
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield accurate estimation of the marginal likelihood. To resolve this problem, a thermodynamic method is used to run multiple MCMC simulations with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with analytical form of the marginal likelihood, the thermodynamic method yields more accurate estimate than the method of using geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualization of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
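The thermodynamic (power-posterior) idea can be sketched on a conjugate Gaussian model, where the tempered posterior at each heating coefficient is Gaussian and can be sampled exactly, so no actual MCMC sampler is needed. This is a hedged illustration, not the paper's groundwater setup: model, priors, and grid sizes below are arbitrary assumptions.

```python
import numpy as np

# Model: y_i ~ N(theta, sigma2), prior theta ~ N(0, tau2).
rng = np.random.default_rng(3)
n, sigma2, tau2 = 20, 1.0, 1.0
y = rng.normal(1.0, np.sqrt(sigma2), n)

# Exact log marginal likelihood: y ~ N(0, sigma2*I + tau2*11').
cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
_, logdet = np.linalg.slogdet(cov)
logz_exact = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))

# Thermodynamic integration: log Z = integral over beta in [0, 1] of
# E_{p_beta}[log L], where p_beta is proportional to prior * likelihood**beta.
betas = np.linspace(0.0, 1.0, 41)
means = []
for b in betas:
    prec = 1.0 / tau2 + b * n / sigma2      # tempered-posterior precision
    mu = (b * y.sum() / sigma2) / prec      # tempered-posterior mean
    th = rng.normal(mu, 1.0 / np.sqrt(prec), 20000)
    loglik = (-0.5 * n * np.log(2 * np.pi * sigma2)
              - ((y[None, :] - th[:, None]) ** 2).sum(axis=1) / (2 * sigma2))
    means.append(loglik.mean())
means = np.array(means)
# Trapezoid rule over the heating coefficients.
logz_ti = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas)))
```

At beta = 0 the samples come from the prior (the "random walk MC" limit in the abstract), and at beta = 1 from the ordinary posterior; the integral over the ladder recovers the marginal likelihood.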
Investigating the Effect of Damage Progression Model Choice on Prognostics Performance
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2011-01-01
The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Complex Autocatalysis in Simple Chemistries.
Virgo, Nathaniel; Ikegami, Takashi; McGregor, Simon
2016-01-01
Life on Earth must originally have arisen from abiotic chemistry. Since the details of this chemistry are unknown, we wish to understand, in general, which types of chemistry can lead to complex, lifelike behavior. Here we show that even very simple chemistries in the thermodynamically reversible regime can self-organize to form complex autocatalytic cycles, with the catalytic effects emerging from the network structure. We demonstrate this with a very simple but thermodynamically reasonable artificial chemistry model. By suppressing the direct reaction from reactants to products, we obtain the simplest kind of autocatalytic cycle, resulting in exponential growth. When these simple first-order cycles are prevented from forming, the system achieves superexponential growth through more complex, higher-order autocatalytic cycles. This leads to nonlinear phenomena such as oscillations and bistability, the latter of which is of particular interest regarding the origins of life.
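The growth laws named in the abstract can be caricatured with two rate equations (this is not the paper's artificial chemistry, just the kinematics): first-order autocatalysis X + F -> 2X gives exponential growth, while a higher-order cycle such as 2X + F -> 3X gives superexponential growth with finite-time blowup while food F is abundant. The rate constant, initial conditions, and step size below are arbitrary.

```python
import numpy as np

def grow(order, x0=1e-3, k=1.0, dt=1e-3, t_end=2.0):
    """Forward-Euler integration of dx/dt = k * x**order (food assumed abundant)."""
    x, xs = x0, []
    for _ in range(round(t_end / dt)):
        x += dt * k * x**order
        xs.append(x)
    return np.array(xs)

# Order 1: exponential growth, x(t) = x0 * exp(k t).
exp_growth = grow(order=1)
# Order 2: superexponential, x(t) = x0 / (1 - k x0 t), blowing up at t = 1/(k x0).
super_growth = grow(order=2, x0=1.0, t_end=0.9)
```

The per-capita growth rate is constant for the first-order cycle but increases with concentration for the higher-order one, which is the signature of superexponential growth described in the abstract.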
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
Improved modeling of turbulent forced convection heat transfer in straight ducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rokni, M.; Sunden, B.
1999-08-01
This investigation concerns numerical calculation of turbulent forced convective heat transfer and fluid flow in their fully developed state at low Reynolds number. The authors have developed a low Reynolds number version of the nonlinear κ-ε model combined with the heat flux models of simple eddy diffusivity (SED), a low Reynolds number version of the generalized gradient diffusion hypothesis (GGDH), and wealth ∝ earning × time (WET) in general three-dimensional geometries. The numerical approach is based on the finite volume technique with a nonstaggered grid arrangement and the SIMPLEC algorithm. Results have been obtained with the nonlinear κ-ε model, combined with the Lam-Bremhorst and the Abe-Kondoh-Nagano damping functions for low Reynolds numbers.
Generalized gauge U(1) family symmetry for quarks and leptons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kownacki, Corey; Ma, Ernest; Pollard, Nicholas
2017-01-11
If the standard model of quarks and leptons is extended to include three singlet right-handed neutrinos, then the resulting fermion structure admits an infinite number of anomaly-free solutions with just one simple constraint. Well-known examples satisfying this constraint are B−L, Lμ−Lτ, B−3Lτ, etc. Here, we derive this simple constraint, and discuss two new examples which offer some insights into the structure of mixing among quark and lepton families, together with their possible verification at the Large Hadron Collider.
Hadron-nucleus interactions at high energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, C.B.; He, Z.; Tow, D.M.
1982-06-01
A simple space-time description of high-energy hadron-nucleus interactions is presented. The model is based on the DTU (dual topological unitarization)-parton-model description of soft multiparticle production in hadron-hadron interactions. The essentially parameter-free model agrees well with the general features of high-energy data for hadron-nucleus interactions; in particular, this DTU-parton model has a natural explanation for an approximate nu-bar universality. The extension to high-energy nucleus-nucleus interactions is presented. We also compare and contrast this model with several previously proposed models.
Spectral flow as a map between N = (2,0) models
NASA Astrophysics Data System (ADS)
Athanasopoulos, P.; Faraggi, A. E.; Gepner, D.
2014-07-01
The space of (2,0) models is of particular interest among all heterotic-string models because it includes the models with the minimal SO(10) unification structure, which is well motivated by the Standard Model of particle physics data. The fermionic Z2 × Z2 heterotic-string models revealed the existence of a new symmetry in the space of string configurations under the exchange of spinors and vectors of the SO(10) GUT group, dubbed spinor-vector duality. In this paper we generalize this idea to arbitrary internal rational conformal field theories (RCFTs). We explain how the spectral flow operator normally acting within a general (2,2) theory can be used as a map between (2,0) models. We describe the details, give an example and propose more simple currents that can be used in a similar way.
A discrete Markov metapopulation model for persistence and extinction of species.
Thompson, Colin J; Shtilerman, Elad; Stone, Lewi
2016-09-07
A simple discrete generation Markov metapopulation model is formulated for studying the persistence and extinction dynamics of a species in a given region which is divided into a large number of sites or patches. Assuming a linear site occupancy probability from one generation to the next, we obtain exact expressions for the time evolution of the expected number of occupied sites and the mean time to extinction (MTE). Under quite general conditions we show that the MTE, to leading order, is proportional to the logarithm of the initial number of occupied sites and in precise agreement with similar expressions for continuous time-dependent stochastic models. Our key contribution is a novel application of generating function techniques and simple asymptotic methods to obtain a second order asymptotic expression for the MTE which is extremely accurate over the entire range of model parameter values. Copyright © 2016 Elsevier Ltd. All rights reserved.
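The leading-order logarithmic scaling of the MTE can be checked with a stripped-down simulation. This is a hedged caricature, not the paper's full model: each occupied patch is assumed to persist independently with probability p each generation, with no recolonization, so extinction time is the maximum of independent geometric lifetimes and the MTE grows like log of the initial occupancy.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_time_to_extinction(n0, p=0.5, trials=3000):
    """Monte Carlo MTE for n0 patches, each persisting w.p. p per generation."""
    times = np.empty(trials)
    for i in range(trials):
        n, t = n0, 0
        while n > 0:
            n = rng.binomial(n, p)   # surviving occupied sites this generation
            t += 1
        times[i] = t
    return float(times.mean())

mte_small = mean_time_to_extinction(8)
mte_large = mean_time_to_extinction(64)
# With p = 1/2, the MTE should increase by about log2(64/8) = 3 generations.
```

Multiplying the initial occupancy by 8 adds roughly three generations to the MTE, consistent with the log(n0) leading-order behaviour quoted in the abstract.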
Des Roches, Carrie A.; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David
2016-01-01
Purpose: The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia.
Method: Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli.
Results: Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement.
Conclusions: Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type. PMID:27997950
1990-11-01
(Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1}/(1 + a'Q^{-1}a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
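The rank-one case of Woodbury's formula mentioned here (also known as the Sherman-Morrison identity) is easy to verify numerically; the matrix sizes and random inputs below are arbitrary.

```python
import numpy as np

# Check: (Q + a a')^{-1} = Q^{-1} - Q^{-1} a a' Q^{-1} / (1 + a' Q^{-1} a)
rng = np.random.default_rng(5)
n = 5
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)          # symmetric positive definite, well conditioned
a = rng.normal(size=n)

Qinv = np.linalg.inv(Q)
lhs = np.linalg.inv(Q + np.outer(a, a))
rhs = Qinv - (Qinv @ np.outer(a, a) @ Qinv) / (1.0 + a @ Qinv @ a)
```

In the time-series context the identity lets one update an inverse covariance after a rank-one change without refactorizing, which is why it appears alongside the Cholesky-based likelihood evaluations discussed in the text.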
Generative models for discovering sparse distributed representations.
Hinton, G E; Ghahramani, Z
1997-01-01
We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
Interaction Analysis in MANOVA.
ERIC Educational Resources Information Center
Betz, M. Austin
Simultaneous test procedures (STPs for short) in the context of the unrestricted full rank general linear multivariate model for population cell means are introduced and utilized to analyze interactions in factorial designs. By appropriate choice of an implying hypothesis, it is shown how to test overall main effects, interactions, simple main,…
Adverse health risks from environmental agents are generally related to average (long term) exposures. We used results from a series of controlled human exposure tests and classical first order rate kinetics calculations to estimate how well spot measurements of methyl tertiary ...
Teaching Mendelian Genetics with the Computer.
ERIC Educational Resources Information Center
Small, James W., Jr.
Students in general undergraduate courses in both biology and genetics seem to have great difficulty mastering the basic concepts of Mendelian Genetics and solving even simple problems. In an attempt to correct this situation, students in both courses at Rollins College were introduced to three simulation models of the genetics of the fruit…
A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers
USDA-ARS?s Scientific Manuscript database
Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern about the accuracy of the technique arose due to selective ...
Multiporosity flow in fractured low-permeability rocks: Extension to shale hydrocarbon reservoirs
Kuhlman, Kristopher L.; Malama, Bwalya; Heath, Jason E.
2015-02-05
We present a multiporosity extension of classical double- and triple-porosity fractured rock flow models for slightly compressible fluids. The multiporosity model is an adaptation of the multirate solute transport model of Haggerty and Gorelick (1995) to viscous flow in fractured rock reservoirs. It is a generalization of both pseudo steady state and transient interporosity flow double-porosity models. The model includes a fracture continuum and an overlapping distribution of multiple rock matrix continua, whose fracture-matrix exchange coefficients are specified through a discrete probability mass function. Semianalytical cylindrically symmetric solutions to the multiporosity mathematical model are developed using the Laplace transform to illustrate its behavior. Furthermore, the multiporosity model presented here is conceptually simple, yet flexible enough to simulate common conceptualizations of double- and triple-porosity flow. This combination of generality and simplicity makes the multiporosity model a good choice for flow modeling in low-permeability fractured rocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasham, M.J.R.; Sarmiento, J.L.; Slater, R.D.
1993-06-01
One important theme of modern biological oceanography has been the attempt to develop models of how the marine ecosystem responds to variations in the physical forcing functions such as solar radiation and the wind field. The authors have addressed the problem by embedding simple ecosystem models into a seasonally forced three-dimensional general circulation model of the North Atlantic ocean. In this paper first, some of the underlying biological assumptions of the ecosystem model are presented, followed by an analysis of how well the model predicts the seasonal cycle of the biological variables at Bermuda Station 'S' and Ocean Weather Station India. The model gives a good overall fit to the observations but does not faithfully model the whole seasonal ecosystem cycle. 57 refs., 25 figs., 5 tabs.
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
Nonequilibrium Green's functions and atom-surface dynamics: Simple views from a simple model system
NASA Astrophysics Data System (ADS)
Boström, E.; Hopjan, M.; Kartsev, A.; Verdozzi, C.; Almbladh, C.-O.
2016-03-01
We employ nonequilibrium Green's functions (NEGF) to describe the real-time dynamics of an adsorbate-surface model system exposed to ultrafast laser pulses. For a finite number of electronic orbitals, the system is solved exactly and within different levels of approximation. Specifically, i) the full exact quantum mechanical solution for electron and nuclear degrees of freedom is used to benchmark ii) the Ehrenfest approximation (EA) for the nuclei, with the electron dynamics still treated exactly. Then, using the EA, electronic correlations are treated with NEGF within iii) 2nd Born and with iv) a recently introduced hybrid scheme, which mixes 2nd Born self-energies with non-perturbative, local exchange-correlation potentials of Density Functional Theory (DFT). Finally, the effect of a semi-infinite substrate is considered: we observe that a macroscopic number of de-excitation channels can hinder desorption. While very preliminary in character and based on a simple and rather specific model system, our results clearly illustrate the large potential of NEGF to investigate atomic desorption and, more generally, the nonequilibrium dynamics of material surfaces subject to ultrafast laser fields.
A simple model of hysteresis behavior using spreadsheet analysis
NASA Astrophysics Data System (ADS)
Ehrmann, A.; Blachowicz, T.
2015-01-01
Hysteresis loops occur in many scientific and technical problems, especially as field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests, in thermal effects, in liquid-solid phase transitions, in cell biology, or in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reactions of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
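The spreadsheet-style update rule described above can be sketched in a few lines of ordinary code. The following is a minimal illustration, not the authors' macro: the branch (ascending or descending) is chosen from the sign of the field change, a saturating response is shifted by a coercive field, and the state relaxes partway toward it. All parameter names and values are assumptions made for illustration.

```python
import math

def magnetization(h, h_prev, m_prev, coercivity=1.0, saturation=1.0, sharpness=2.0):
    """One update step of a simple hysteretic system.

    The branch is chosen from the sign of the field change, shifting a
    tanh response by +/- the coercive field. Parameter names and values
    are illustrative, not taken from the paper.
    """
    shift = -coercivity if h >= h_prev else coercivity
    target = saturation * math.tanh(sharpness * (h + shift))
    # relax partway toward the branch target, as a cell formula might
    return m_prev + 0.5 * (target - m_prev)

# sweep the external field up and then down to trace out a loop
fields = [i / 10.0 for i in range(-30, 31)] + [i / 10.0 for i in range(30, -31, -1)]
m, h_prev = 0.0, fields[0]
loop = []
for h in fields:
    m = magnetization(h, h_prev, m)
    loop.append((h, m))
    h_prev = h
```

Plotting `loop` (e.g. column-by-column in a spreadsheet) shows the two branches: at zero field the ascending magnetization is still negative while the descending one is still positive, which is the opening of the loop.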
Coupling Climate Models and Forward-Looking Economic Models
NASA Astrophysics Data System (ADS)
Judd, K.; Brock, W. A.
2010-12-01
Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCM's) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCM's) help us understand the dynamics of GCM's. The same is true in economics with Computable General Equilibrium Models (CGE's), where some models are infinite-dimensional systems of differential equations but some are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCM's do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005), but in only a "one dimensional" EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two-dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models.
However, this will be more complex than existing models with forward-looking economic modules, and the initial models will help guide the construction of more refined models that can effectively use more powerful computational environments to analyze economic policies related to climate change. REFERENCES Brock, W., Xepapadeas, A., 2010, "An Integration of Simple Dynamic Energy Balance Climate Models and Ramsey Growth Models," Department of Economics, University of Wisconsin, Madison, and University of Athens. Golub, A., Hertel, T., et al., 2009, "The opportunity cost of land use and the global potential for greenhouse gas mitigation in agriculture and forestry," RESOURCE AND ENERGY ECONOMICS, 31, 299-319. Judd, K., 1992, "Projection methods for solving aggregate growth models," JOURNAL OF ECONOMIC THEORY, 58: 410-52. Judd, K., 1998, NUMERICAL METHODS IN ECONOMICS, MIT Press, Cambridge, Mass. Nordhaus, W., 2007, A QUESTION OF BALANCE: ECONOMIC MODELS OF CLIMATE CHANGE, Yale University Press, New Haven, CT. North, G. R., Cahalan, R., Coakley, J., 1981, "Energy balance climate models," REVIEWS OF GEOPHYSICS AND SPACE PHYSICS, Vol. 19, No. 1, 91-121, February. Wu, W., North, G. R., 2007, "Thermal decay modes of a 2-D energy balance climate model," TELLUS, 59A, 618-626.
The Structure of Working Memory Abilities across the Adult Life Span
Hale, Sandra; Rose, Nathan S.; Myerson, Joel; Strube, Michael J; Sommers, Mitchell; Tye-Murray, Nancy; Spehar, Brent
2010-01-01
The present study addresses three questions regarding age differences in working memory: (1) whether performance on complex span tasks decreases as a function of age at a faster rate than performance on simple span tasks; (2) whether spatial working memory decreases at a faster rate than verbal working memory; and (3) whether the structure of working memory abilities is different for different age groups. Adults, ages 20–89 (n=388), performed three simple and three complex verbal span tasks and three simple and three complex spatial memory tasks. Performance on the spatial tasks decreased at faster rates as a function of age than performance on the verbal tasks, but within each domain, performance on complex and simple span tasks decreased at the same rates. Confirmatory factor analyses revealed that domain-differentiated models yielded better fits than models involving domain-general constructs, providing further evidence of the need to distinguish verbal and spatial working memory abilities. Regardless of which domain-differentiated model was examined, and despite the faster rates of decrease in the spatial domain, age group comparisons revealed that the factor structure of working memory abilities was highly similar in younger and older adults and showed no evidence of age-related dedifferentiation. PMID:21299306
Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model
NASA Astrophysics Data System (ADS)
Fang, Xianghui; Zheng, Fei
2018-06-01
Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.
Modelling the firing pattern of bullfrog vestibular neurons responding to naturalistic stimuli
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
1999-01-01
We have developed a neural system identification method for fitting models to stimulus-response data, where the response is a spike train. The method involves using a general nonlinear optimisation procedure to fit models in the time domain. We have applied the method to model bullfrog semicircular canal afferent neuron responses during naturalistic, broad-band head rotations. These neurons respond in diverse ways, but a simple four-parameter class of models elegantly accounts for the various types of responses observed. © 1999 Elsevier Science B.V. All rights reserved.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
The dark side of cosmology: dark matter and dark energy.
Spergel, David N
2015-03-06
A simple model with only six parameters (the age of the universe, the density of atoms, the density of matter, the amplitude of the initial fluctuations, the scale dependence of this amplitude, and the epoch of first star formation) fits all of our cosmological data. Although simple, this standard model is strange. The model implies that most of the matter in our Galaxy is in the form of "dark matter," a new type of particle not yet detected in the laboratory, and most of the energy in the universe is in the form of "dark energy," energy associated with empty space. Both dark matter and dark energy require extensions to our current understanding of particle physics or point toward a breakdown of general relativity on cosmological scales. Copyright © 2015, American Association for the Advancement of Science.
SynBioSS-aided design of synthetic biological constructs.
Kaznessis, Yiannis N
2011-01-01
We present walkthrough examples of using SynBioSS to design, model, and simulate synthetic gene regulatory networks. SynBioSS stands for Synthetic Biology Software Suite, a platform that is publicly available with Open Licenses at www.synbioss.org. An important aim of computational synthetic biology is the development of a mathematical modeling formalism that is applicable to a wide variety of simple synthetic biological constructs. SynBioSS-based modeling of biomolecular ensembles that interact away from the thermodynamic limit, and not necessarily at steady state, affords a theoretical framework that is generally applicable to known synthetic biological systems, such as bistable switches, AND gates, and oscillators. Here, we discuss how SynBioSS creates links between DNA sequences and targeted dynamic phenotypes of these simple systems. Copyright © 2011 Elsevier Inc. All rights reserved.
Kinematic analysis of asymmetric folds in competent layers using mathematical modelling
NASA Astrophysics Data System (ADS)
Aller, J.; Bobillo-Ares, N. C.; Bastida, F.; Lisle, R. J.; Menéndez, C. O.
2010-08-01
Mathematical 2D modelling of asymmetric folds is carried out by applying a combination of different kinematic folding mechanisms: tangential longitudinal strain, flexural flow and homogeneous deformation. The main source of fold asymmetry is found to be the superimposition of a general homogeneous deformation on buckle folds, which typically produces a migration of the hinge point. Forward modelling is performed mathematically using the software 'FoldModeler', by the superimposition of simple shear or a combination of simple shear and irrotational strain on initial buckle folds. The resulting folds are Ramsay class 1C folds, comparable to those formed by symmetric flattening, but with different length of limbs and layer thickness asymmetry. Inverse modelling is made by fitting the natural fold to a computer-simulated fold. A problem of this modelling is the search for the most appropriate homogeneous deformation to be superimposed on the initial fold. A comparative analysis of the irrotational and rotational deformations is made in order to find the deformation which best simulates the shapes and attitudes of natural folds. Modelling of recumbent folds suggests that optimal conditions for their development are: a) buckling in a simple shear regime with a sub-horizontal shear direction and layering gently dipping towards this direction; b) kinematic amplification due to superimposition of a combination of simple shear and irrotational strain with a sub-vertical maximum shortening direction for the latter component. The modelling shows that the amount of homogeneous strain necessary for the development of recumbent folds is much less when an irrotational strain component is superimposed at this stage than when the superimposed strain is only simple shear.
In nature, the amount of the irrotational strain component probably increases during the development of the fold as a consequence of the increasing influence of the gravity due to the tectonic superimposition of rocks.
Précis of Simple heuristics that make us smart.
Todd, P M; Gigerenzer, G
2000-10-01
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
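One-reason decision making of the kind described can be made concrete with a small sketch of a take-the-best-style heuristic: check cues in order of validity and decide on the first cue that discriminates between the options. The cue list and city example below are illustrative, not data from the book.

```python
def take_the_best(cues, obj_a, obj_b):
    """One-reason decision making: scan cues in validity order and
    decide on the first cue that discriminates; default otherwise.

    `cues` is a list of (name, lookup) pairs ordered by validity;
    the binary cue values here are purely illustrative.
    """
    for name, lookup in cues:
        va, vb = lookup.get(obj_a), lookup.get(obj_b)
        if va != vb:
            return (obj_a if va > vb else obj_b), name
    return obj_a, None  # no cue discriminates: fall back to a guess

# toy question: which city is larger?
cues = [
    ("capital", {"Berlin": 1, "Hamburg": 0}),
    ("exposition site", {"Berlin": 1, "Hamburg": 1}),
]
winner, reason = take_the_best(cues, "Berlin", "Hamburg")
# the search stops at the first discriminating cue; later cues are never read
```

The point of the heuristic is in the early stopping: only as much information is looked up as is needed to reach a decision.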
Controlling the light shift of the CPT resonance by modulation technique
NASA Astrophysics Data System (ADS)
Tsygankov, E. A.; Petropavlovsky, S. V.; Vaskovskaya, M. I.; Zibrov, S. A.; Velichansky, V. L.; Yakovlev, V. P.
2017-12-01
Motivated by recent developments in atomic frequency standards employing the effect of coherent population trapping (CPT), we propose a theoretical framework for the frequency modulation spectroscopy of the CPT resonances. Under realistic assumptions we provide simple yet non-trivial analytical formulae for the major spectroscopic signals such as the CPT resonance line and the in-phase/quadrature responses. We discuss the influence of the light shift and, in particular, derive a simple expression for the displacement of the resonance as a function of modulation index. The performance of the model is checked against numerical simulations; the agreement is good to perfect. The obtained results can be used in more general models accounting for light absorption in the thick optical medium.
Occupation probabilities and fluctuations in the asymmetric simple inclusion process
NASA Astrophysics Data System (ADS)
Reuveni, Shlomi; Hirschberg, Ori; Eliazar, Iddo; Yechiali, Uri
2014-04-01
The asymmetric simple inclusion process (ASIP), a lattice-gas model of unidirectional transport and aggregation, was recently proposed as an "inclusion" counterpart of the asymmetric simple exclusion process. In this paper we present an exact closed-form expression for the probability that a given number of particles occupies a given set of consecutive lattice sites. Our results are expressed in terms of the entries of Catalan's trapezoids—number arrays which generalize Catalan's numbers and Catalan's triangle. We further prove that the ASIP is asymptotically governed by the following: (i) an inverse square-root law of occupation, (ii) a square-root law of fluctuation, and (iii) a Rayleigh law for the distribution of interexit times. The universality of these results is discussed.
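Catalan's triangle, the number array in terms of which the occupation probabilities above are expressed, is easy to generate from its additive recurrence; the sketch below computes the triangle directly (the ASIP occupation formulas themselves are not reproduced here).

```python
def catalan_triangle(rows):
    """Catalan's triangle: T[n][k] counts strings of n X's and k Y's
    (k <= n) in which every prefix has at least as many X's as Y's.

    Recurrence: T[n][k] = T[n][k-1] + T[n-1][k], with T[n][0] = 1
    and T[n][n] = T[n][n-1].
    """
    T = []
    for n in range(rows):
        row = []
        for k in range(n + 1):
            if k == 0:
                row.append(1)
            elif k == n:
                row.append(row[k - 1])  # T[n][n] = T[n][n-1]
            else:
                row.append(row[k - 1] + T[n - 1][k])
        T.append(row)
    return T

T = catalan_triangle(6)
# the diagonal T[n][n] reproduces the Catalan numbers: 1, 1, 2, 5, 14, 42
```

Catalan's trapezoids generalize this array by widening the first column; the same recurrence applies away from the boundaries.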
Configurational coupled cluster approach with applications to magnetic model systems
NASA Astrophysics Data System (ADS)
Wu, Siyuan; Nooijen, Marcel
2018-05-01
A general exponential, coupled cluster like, approach is discussed to extract an effective Hamiltonian in configurational space, as a sum of 1-body, 2-body up to n-body operators. The simplest two-body approach is illustrated by calculations on simple magnetic model systems. A key feature of the approach is that equations up to a certain rank do not depend on higher body cluster operators.
NASA Technical Reports Server (NTRS)
Smith, David E.; Jonsson, Ari K.; Clancy, Daniel (Technical Monitor)
2001-01-01
In recent years, Graphplan style reachability analysis and mutual exclusion reasoning have been used in many high performance planning systems. While numerous refinements and extensions have been developed, the basic plan graph structure and reasoning mechanisms used in these systems are tied to the very simple STRIPS model of action. In 1999, Smith and Weld generalized the Graphplan methods for reachability and mutex reasoning to allow actions to have differing durations. However, the representation of actions still has some severe limitations that prevent the use of these techniques for many real-world planning systems. In this paper, we 1) separate the logic of reachability from the particular representation and inference methods used in Graphplan, and 2) extend the notions of reachability and mutual exclusion to more general notions of time and action. As it turns out, the general rules for mutual exclusion reasoning take on a remarkably clean and simple form. However, practical instantiations of them turn out to be messy, and require that we make representation and reasoning choices.
NASA Astrophysics Data System (ADS)
Singh, Gaurav; Krishnan, Girish
2017-06-01
Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirement.
NASA Astrophysics Data System (ADS)
Schmidt, Peter; Lund, Björn; Hieronymus, Christoph
2012-03-01
When general-purpose finite element analysis software is used to model glacial isostatic adjustment (GIA), the first-order effect of prestress advection has to be accounted for by the user. We show here that the common use of elastic foundations at boundaries between materials of different densities will produce incorrect displacements, unless the boundary is perpendicular to the direction of gravity. This is due to the foundations always acting perpendicular to the surface to which they are attached, while the body force they represent always acts in the direction of gravity. If prestress advection is instead accounted for by the use of elastic spring elements in the direction of gravity, the representation will be correct. The use of springs adds a computation of the spring constants to the analysis. The spring constant for a particular node is defined by the product of the density contrast at the boundary, the gravitational acceleration, and the area supported by the node. To be consistent with the finite element formulation, the area is evaluated by integration of the nodal shape functions. We outline an algorithm for the calculation and include a Python script that integrates the shape functions over a bilinear quadrilateral element. For linear rectangular and triangular elements, the area supported by each node is equal to the element area divided by the number of defining nodes, thereby simplifying the computation. This is, however, not true in the general nonrectangular case, and we demonstrate this with a simple 1-element model. The spring constant calculation is simple and performed in the preprocessing stage of the analysis. The time spent on the calculation is more than compensated for by a shorter analysis time, compared to that for a model with foundations.
We illustrate the effects of using springs versus foundations with a simple two-dimensional GIA model of glacial loading, where the Earth model has an inclined boundary between the overlying elastic layer and the lower viscoelastic layer. Our example shows that the error introduced by the use of foundations is large enough to affect an analysis based on high-accuracy geodetic data.
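The spring-constant recipe described above (density contrast × gravitational acceleration × nodal area, with the nodal area obtained by integrating the shape functions) can be sketched for a single bilinear quadrilateral element. This is an illustrative reimplementation under standard isoparametric conventions, not the authors' script.

```python
def nodal_areas_bilinear_quad(xy):
    """Integrate the four bilinear shape functions over one quadrilateral
    element (2x2 Gauss quadrature), giving the area supported by each node.
    `xy` is a list of four (x, y) corner coordinates in CCW order.
    """
    g = 1.0 / 3.0 ** 0.5  # Gauss point coordinate, unit weights
    areas = [0.0] * 4
    for xi in (-g, g):
        for eta in (-g, g):
            N = [(1 - xi) * (1 - eta) / 4, (1 + xi) * (1 - eta) / 4,
                 (1 + xi) * (1 + eta) / 4, (1 - xi) * (1 + eta) / 4]
            dN_dxi = [-(1 - eta) / 4, (1 - eta) / 4, (1 + eta) / 4, -(1 + eta) / 4]
            dN_deta = [-(1 - xi) / 4, -(1 + xi) / 4, (1 + xi) / 4, (1 - xi) / 4]
            # Jacobian of the isoparametric map from the reference square
            j11 = sum(d * p[0] for d, p in zip(dN_dxi, xy))
            j12 = sum(d * p[1] for d, p in zip(dN_dxi, xy))
            j21 = sum(d * p[0] for d, p in zip(dN_deta, xy))
            j22 = sum(d * p[1] for d, p in zip(dN_deta, xy))
            detJ = j11 * j22 - j12 * j21
            for a in range(4):
                areas[a] += N[a] * detJ
    return areas

def spring_constants(xy, delta_rho, gravity=9.81):
    """k_node = density contrast * g * nodal area, per the abstract."""
    return [delta_rho * gravity * A for A in nodal_areas_bilinear_quad(xy)]

# a unit square element: by symmetry each node supports a quarter of the area
ks = spring_constants([(0, 0), (1, 0), (1, 1), (0, 1)], delta_rho=600.0)
```

For a distorted (nonrectangular) element the four nodal areas differ, which is exactly why the shape-function integration matters in the general case.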
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
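In the simplest Gaussian case, the cue-integration logic behind such a Bayesian account reduces to a reliability-weighted average of the prior (the typical object colour) and the likelihood (the grey sensory input). The sketch below shows that standard computation; the numbers are illustrative, not the paper's fitted quantities.

```python
def posterior_gaussian(mu_prior, var_prior, mu_like, var_like):
    """Combine a Gaussian prior with a Gaussian likelihood.

    The posterior mean is a reliability-weighted average of the two
    means, the standard cue-integration result; no free parameters
    are fitted, mirroring the paper's parameter-free approach.
    """
    w = var_like / (var_prior + var_like)  # weight on the prior mean
    mu_post = w * mu_prior + (1 - w) * mu_like
    var_post = var_prior * var_like / (var_prior + var_like)
    return mu_post, var_post

# a colourimetrically grey stimulus (chromaticity 0) seen with a broad
# "typical colour" prior centred at +1 (illustrative units)
mu, var = posterior_gaussian(mu_prior=1.0, var_prior=4.0, mu_like=0.0, var_like=1.0)
# the percept is pulled slightly toward the typical colour: mu == 0.2
```

The pull toward the prior mean is the memory colour effect in miniature: a broad prior shifts the grey percept only slightly, a sharp prior shifts it strongly.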
Gauge Theories of Vector Particles
DOE R&D Accomplishments Database
Glashow, S. L.; Gell-Mann, M.
1961-04-24
The possibility of generalizing the Yang-Mills trick is examined. Thus we seek theories of vector bosons invariant under continuous groups of coordinate-dependent linear transformations. All such theories may be expressed as superpositions of certain "simple" theories; we show that each "simple" theory is associated with a simple Lie algebra. We may introduce mass terms for the vector bosons at the price of destroying the gauge-invariance for coordinate-dependent gauge functions. The theories corresponding to three particular simple Lie algebras - those which admit precisely two commuting quantum numbers - are examined in some detail as examples. One of them might play a role in the physics of the strong interactions if there is an underlying super-symmetry, transcending charge independence, that is badly broken. The intermediate vector boson theory of weak interactions is discussed also. The so-called "schizon" model cannot be made to conform to the requirements of partial gauge-invariance.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
Γ-Convergence Analysis of a Generalized XY Model: Fractional Vortices and String Defects
NASA Astrophysics Data System (ADS)
Badal, Rufat; Cicalese, Marco; De Luca, Lucia; Ponsiglione, Marcello
2018-03-01
We propose and analyze a generalized two dimensional XY model, whose interaction potential has n weighted wells, describing corresponding symmetries of the system. As the lattice spacing vanishes, we derive by Γ-convergence the discrete-to-continuum limit of this model. In the energy regime we deal with, the asymptotic ground states exhibit fractional vortices, connected by string defects. The Γ-limit takes into account both contributions, through a renormalized energy, depending on the configuration of fractional vortices, and a surface energy, proportional to the length of the strings. Our model describes in a simple way several topological singularities arising in Physics and Materials Science. Among them, disclinations and string defects in liquid crystals, fractional vortices and domain walls in micromagnetics, partial dislocations and stacking faults in crystal plasticity.
A hierarchy of granular continuum models: Why flowing grains are both simple and complex
NASA Astrophysics Data System (ADS)
Kamrin, Ken
2017-06-01
Granular materials have a strange propensity to behave as either a complex media or a simple media depending on the precise question being asked. This review paper offers a summary of granular flow rheologies for well-developed or steady-state motion, and seeks to explain this dichotomy through the vast range of complexity intrinsic to these models. A key observation is that to achieve accuracy in predicting flow fields in general geometries, one requires a model that accounts for a number of subtleties, most notably a nonlocal effect to account for cooperativity in the flow as induced by the finite size of grains. On the other hand, forces and tractions that develop on macro-scale, submerged boundaries appear to be minimally affected by grain size and, barring very rapid motions, are well represented by simple rate-independent frictional plasticity models. A major simplification observed in experiments of granular intrusion, which we refer to as the 'resistive force hypothesis' of granular Resistive Force Theory, can be shown to arise directly from rate-independent plasticity. Because such plasticity models have so few parameters, and the major rheological parameter is a dimensionless internal friction coefficient, some of these simplifications can be seen as consequences of scaling.
Model validation of simple-graph representations of metabolism
Holme, Petter
2009-01-01
The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
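The substrate-product representation singled out above is straightforward to construct from a reaction list: add an edge from each substrate of a reaction to each of its products. The two "reactions" in the sketch below are toy examples, not entries from a metabolic database.

```python
def substrate_product_network(reactions):
    """Build the substrate-product graph: an edge from every substrate
    to every product of each reaction. This is one of the simple-graph
    reductions the paper finds best preserves modular structure.

    `reactions` is a list of (substrates, products) pairs of names.
    """
    edges = set()
    for substrates, products in reactions:
        for s in substrates:
            for p in products:
                if s != p:  # no self-loops
                    edges.add((s, p))
    return edges

# two illustrative glycolysis-like steps
reactions = [
    (["glucose", "ATP"], ["G6P", "ADP"]),
    (["G6P"], ["F6P"]),
]
net = substrate_product_network(reactions)
```

The substance-network variant mentioned in the abstract would instead link all substances participating in the same reaction, which produces a denser graph from the same list.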
Dosimetry in x-ray-based breast imaging
Dance, David R; Sechopoulos, Ioannis
2016-01-01
The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable. PMID:27617767
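The conversion-factor method described above reduces, in the formalism used by the UK and European mammography protocols, to a product of the measured incident air kerma with Monte Carlo derived factors (MGD = K g c s); a minimal sketch, with placeholder factor values rather than tabulated protocol data:

```python
# Hedged sketch of MGD estimation from incident air kerma.
# g: air-kerma-to-MGD conversion factor for a 50% glandular breast;
# c: correction for actual breast glandularity; s: spectrum correction.
# The numeric values used below are placeholders, not protocol values.
def mean_glandular_dose(incident_air_kerma, g, c=1.0, s=1.0):
    """Return MGD in the same dose units as the incident air kerma."""
    return incident_air_kerma * g * c * s

mgd = mean_glandular_dose(7.0, g=0.2, c=1.05, s=1.04)  # illustrative
```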
Lorber, Matthew; Toms, Leisa-Maree L
2017-10-01
Several studies have examined the role of breast milk consumption in the buildup of environmental chemicals in infants, and have concluded that this pathway elevates infant body burdens above what would occur in a formula-only diet. Unique data from Australia provide an opportunity to study this finding using simple pharmacokinetic (PK) models. Pooled serum samples from infants in the general population provided data on PCB 153, BDE 47, and DDE at 6-month increments from birth until 4 years of age. General population breast-feeding scenarios for Australian conditions were crafted and input into a simple PK model which predicted infant serum concentrations over time. Comparison scenarios of background exposures to characterize formula-feeding were also crafted. It was found that the models were able to replicate the rise in measured infant body burdens for PCB 153 and DDE in the breast-feeding scenarios, while the background scenarios resulted in infant body burdens substantially below the measurements. The same was not true for BDE 47, however. Both the breast-feeding and background scenarios substantially underpredicted body burden measurements. Two possible explanations were offered: that exposure to higher BDE congeners would debrominate and form BDE 47 in the body, and/or, a second overlooked exposure pathway for PBDEs might be the cause of high infant and toddler body burdens. This pathway was inhalation due to the use of PBDEs as flame retardants in bedding materials. More research to better understand and quantify this pathway, or other unknown pathways, to describe infant and toddler exposures to PBDEs is needed. Published by Elsevier Ltd.
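A one-compartment, first-order-elimination model of the kind the study calls a simple PK model can be sketched as follows; the intake and half-life numbers are invented for illustration, not the Australian exposure scenarios:

```python
# Minimal one-compartment pharmacokinetic sketch: each day the body
# burden decays first-order, then the day's intake from breast milk
# (or formula/background) is added. All parameters are illustrative.
import math

def body_burden(days, intake_per_day, half_life_days, burden0=0.0):
    k = math.log(2) / half_life_days          # elimination rate [1/day]
    burden = burden0
    for _ in range(days):
        burden = burden * math.exp(-k) + intake_per_day
    return burden

breastfed = body_burden(365, intake_per_day=10.0, half_life_days=180.0)
background = body_burden(365, intake_per_day=1.0, half_life_days=180.0)
```

With a tenfold higher daily intake, the breast-feeding scenario ends the first year with a proportionally higher burden, mirroring the qualitative PCB 153/DDE result above.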
Falling head ponded infiltration in the nonlinear limit
NASA Astrophysics Data System (ADS)
Triadis, D.
2014-12-01
The Green and Ampt infiltration solution represents only an extreme example of behavior within a larger class of very nonlinear, delta function diffusivity soils. The mathematical analysis of these soils is greatly simplified by the existence of a sharp wetting front below the soil surface. Solutions for more realistic delta function soil models have recently been presented for infiltration under surface saturation without ponding. After general formulation of the problem, solutions for a full suite of delta function soils are derived for ponded surface water depleted by infiltration. Exact expressions for the cumulative infiltration as a function of time, or the drainage time as a function of the initial ponded depth may take implicit or parametric forms, and are supplemented by simple asymptotic expressions valid for small times, and small and large initial ponded depths. As with surface saturation without ponding, the Green-Ampt model overestimates the effect of the soil hydraulic conductivity. At the opposing extreme, a low-conductivity model is identified that also takes a very simple mathematical form and appears to be more accurate than the Green-Ampt model for larger ponded depths. Between these two, the nonlinear limit of Gardner's soil is recommended as a physically valid first approximation. Relative discrepancies between different soil models are observed to reach a maximum for intermediate values of the dimensionless initial ponded depth, and in general are smaller than for surface saturation without ponding.
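For reference, the Green-Ampt limit discussed above takes a simple numerical form under constant surface saturation; this forward-Euler sketch (arbitrary parameter values, not the paper's falling-head solutions) shows the characteristic decay of infiltration capacity with cumulative infiltration:

```python
# Forward-Euler integration of the Green-Ampt infiltration capacity
# f = Ks * (1 + psi_dtheta / I), where I is cumulative infiltration,
# Ks the hydraulic conductivity, and psi_dtheta the wetting-front
# suction times the moisture deficit. Values here are illustrative.
def green_ampt_cumulative(Ks, psi_dtheta, t_end, dt=1e-4, I0=0.05):
    I, t = I0, 0.0
    while t < t_end:
        f = Ks * (1.0 + psi_dtheta / I)   # infiltration capacity
        I += f * dt
        t += dt
    return I
```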
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models run at larger scales that neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Cloud fluid models of gas dynamics and star formation in galaxies
NASA Technical Reports Server (NTRS)
Struck-Marcell, Curtis; Scalo, John M.; Appleton, P. N.
1987-01-01
The large dynamic range of star formation in galaxies, and the apparently complex environmental influences involved in triggering or suppressing star formation, challenge our understanding. The key to this understanding may be the detailed study of simple physical models for the dominant nonlinear interactions in interstellar cloud systems. One such model is described, a generalized Oort model cloud fluid, and two simple applications of it are explored. The first of these is the relaxation of an isolated volume of cloud fluid following a disturbance. Though very idealized, this closed box study suggests a physical mechanism for starbursts, which is based on the approximate commensurability of massive cloud lifetimes and cloud collisional growth times. The second application is to the modeling of colliding ring galaxies. In this case, the driving processes operating on a dynamical timescale interact with the local cloud processes operating on the above timescale. The result is a variety of interesting nonequilibrium behaviors, including spatial variations of star formation that do not depend monotonically on gas density.
NASA Astrophysics Data System (ADS)
Muthukrishnan, S.; Harbor, J.
2001-12-01
Hydrological studies are a significant part of every engineering and developmental project, as well as of geological studies done to assess and understand the interactions between hydrology and the environment. Such studies are generally conducted before the beginning of the project as well as after the project is completed, so that a comprehensive analysis can be done of the impact of such projects on the local and regional hydrology of the area. A good understanding of the chain of relationships that form the hydro-eco-biological and environmental cycle can be of immense help in maintaining the natural balance as we work towards exploration and exploitation of natural resources as well as urbanization of undeveloped land. Rainfall-runoff modeling techniques have been of great use here for decades, since they provide fast and efficient means of analyzing the vast amount of data that is gathered. Though process-based, detailed models are better than simple models, the latter are used more often due to their simplicity, ease of use, and the easy availability of the data needed to run them. The Curve Number (CN) method developed by the United States Department of Agriculture (USDA) is one of the most widely used hydrologic modeling tools in the US, and has earned worldwide acceptance as a practical method for evaluating the effects of land use changes on the hydrology of an area. The Long-Term Hydrological Impact Assessment (L-THIA) model is a basic, CN-based, user-oriented model that has gained popularity amongst watershed planners because of its reliance on readily available data, and because the model is easy to use (http://www.ecn.purdue.edu/runoff) and produces results geared to the general information needs of planners. The L-THIA model was initially developed to study the relative long-term hydrologic impacts of different land use (past/current/future) scenarios, and it has been successful in meeting this goal.
However, one weakness of L-THIA, as of other models that focus strictly on surface runoff, is that many users are interested in predictions of runoff that match observations of flow in streams and rivers. To make L-THIA more useful to planners and engineers alike, a simple long-term calibration method based on linear regression of L-THIA-predicted and observed surface runoff has been developed and tested here. The results from Little Eagle Creek (LEC) in Indiana show that such calibration is successful and valuable. This method can also be used to calibrate other simple rainfall-runoff models.
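The calibration step amounts to ordinary least squares between predicted and observed runoff; a minimal sketch with invented numbers, not the LEC data:

```python
# Least-squares line through (predicted, observed) runoff pairs;
# the fitted intercept a and slope b then correct new predictions.
# The data below are synthetic: observations sit exactly 20% above
# the model predictions, so the fit should recover slope 1.2.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

predicted = [10.0, 20.0, 30.0, 40.0]   # uncalibrated model output
observed  = [12.0, 24.0, 36.0, 48.0]   # gauge "observations"
a, b = fit_line(predicted, observed)
calibrated = a + b * 25.0              # corrected prediction for 25.0
```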
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) x 10 to the 9th/sq cm sec, corresponding to the dayside net production of N atoms needed for transport.
Didactic discussion of stochastic resonance effects and weak signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adair, R.K.
1996-12-01
A simple, paradigmatic model is used to illustrate some general properties of effects subsumed under the label stochastic resonance. In particular, analyses of the transparent model show that (1) a small amount of noise added to a much larger signal can greatly increase the response to the signal, but (2) a weak signal added to much larger noise will not generate a substantial added response. The conclusions drawn from the model illustrate the general result that stochastic resonance effects do not provide an avenue for signals that are much smaller than noise to affect biology. A further analysis demonstrates the effects of small signals in the shifting of biologically important chemical equilibria under conditions where stochastic resonance effects are significant.
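Point (1) above can be illustrated with a toy threshold detector: a subthreshold periodic signal alone never fires, but adding modest noise yields signal-correlated firings. All parameters are illustrative, and this is only a caricature of the paper's model:

```python
# Toy stochastic-resonance illustration: count threshold crossings of
# a subthreshold sinusoid plus Gaussian noise. With zero noise the
# signal (amplitude 0.8) never reaches the threshold (1.0); with
# modest noise, crossings occur, clustered near the signal peaks.
import math, random

def crossings(noise_amp, seed=1, n=2000, signal_amp=0.8, threshold=1.0):
    rng = random.Random(seed)
    count = 0
    for i in range(n):
        s = signal_amp * math.sin(2 * math.pi * i / 100)
        if s + rng.gauss(0.0, noise_amp) > threshold:
            count += 1
    return count
```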
Learning through Geography. Pathways in Geography Series, Title No. 7.
ERIC Educational Resources Information Center
Slater, Frances
This teacher's guide is intended to enable teachers to promote thinking through the use of geography. The book lays out the rationale in learning theory for an issues-based, question-driven inquiry method and proceeds through a simple model of progression from identifying key questions to developing generalizations. Students study issues of geographic…
A Simple Model of Entrepreneurship for Principles of Economics Courses
ERIC Educational Resources Information Center
Gunter, Frank R.
2012-01-01
The critical roles of entrepreneurs in creating, operating, and destroying markets, as well as their importance in driving long-term economic growth, are still generally either absent from principles of economics texts or relegated to later chapters. The primary difficulties in explaining entrepreneurship at the principles level are the lack of a…
ERIC Educational Resources Information Center
Schumaker, Jean B.; Hazel, J. Stephen
1984-01-01
The authors review research on techniques to change social behavior, ranging from relatively simple manipulations of antecedent and consequent conditions to complex instructional "packages" involving didactic, modeling, rehearsal, and feedback procedures and examine issues involved in generalization of social skills training as well as ethical…
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
While it is generally accepted that dense stands of plants exacerbate epidemics caused by foliar pathogens, there is little experimental evidence to support this view. We grew model plant communities consisting of wheat and wild oats at different densities and proportions and exp...
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear effects in aerodynamics and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Coherent states, quantum gravity, and the Born-Oppenheimer approximation. I. General considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stottmeister, Alexander, E-mail: alexander.stottmeister@gravity.fau.de; Thiemann, Thomas, E-mail: thomas.thiemann@gravity.fau.de
2016-06-15
This article, as the first of three, aims at establishing the (time-dependent) Born-Oppenheimer approximation, in the sense of space adiabatic perturbation theory, for quantum systems constructed by techniques of the loop quantum gravity framework, especially the canonical formulation of the latter. The analysis presented here fits into a rather general framework and offers a solution to the problem of applying the usual Born-Oppenheimer ansatz for molecular (or structurally analogous) systems to more general quantum systems (e.g., spin-orbit models) by means of space adiabatic perturbation theory. The proposed solution is applied to a simple, finite dimensional model of interacting spin systems, which serves as a non-trivial, minimal model of the aforesaid problem. Furthermore, it is explained how the content of this article and its companion affect the possible extraction of quantum field theory on curved spacetime from loop quantum gravity (including matter fields).
The architecture of Newton, a general-purpose dynamics simulator
NASA Technical Reports Server (NTRS)
Cremer, James F.; Stewart, A. James
1989-01-01
The architecture for Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and automatically modifies the system equations when necessitated by changes in kinematic relationships between objects. Impact and temporary contact are handled, although only with simple models. User-directed influence of simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.
Review of Software Platforms for Agent Based Models
2008-04-01
EINSTein (4.3.2): Battlefield, Python (optional, for batch runs); MANA (4.3.3): Battlefield, N/A; MASON (4.3.4): General, Java; NetLogo (4.3.5): General, Logo-variant. ...through the use of relatively simple Python scripts. It also has built-in functions for parameter sweeps, and can plot the resulting fitness landscape ac... Nonetheless its ease of use, and support for automatic drawing of agents in 2D or 3D, makes this a suitable platform for beginner programmers.
Random Boolean networks for autoassociative memory: Optimization and sequential learning
NASA Astrophysics Data System (ADS)
Sherrington, D.; Wong, K. Y. M.
Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised; (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
Ion Thermal Conductivity and Ion Distribution Function in the Banana Regime
1988-04-01
An approximate collision operator which is more general than the model operator derived by Hirshman and Sigmar is presented. By use of this collision... by Hirshman and Sigmar (1976). The finite aspect ratio correction is shown to increase the ion thermal conductivity by a factor of two in the... operator (12) is more general than that of Hirshman and Sigmar, which can be derived by approximating Ct (l = 0, 1, 2) in (12) by simpler forms. Let us
Gradient structure and transport coefficients for strong particles
NASA Astrophysics Data System (ADS)
Gabrielli, Davide; Krapivsky, P. L.
2018-04-01
We introduce and study a simple and natural class of solvable stochastic lattice gases. This is the class of strong particles. The name is due to the fact that when they try to jump to an occupied site they succeed in pushing away a pile of particles. For this class of models we explicitly compute the transport coefficients. We also discuss some generalizations and the relations with other classes of solvable models.
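A minimal reading of the "strong particle" move in one dimension: a particle hopping onto an occupied site pushes the whole contiguous pile ahead of it by one site, which is equivalent to filling the first vacancy beyond the pile. This is our sketch of the rule, not the authors' exact dynamics:

```python
# One-dimensional "strong particle" jump sketch. sites is a list of
# 0/1 occupations; the particle at index i attempts to hop right.
def push_right(sites, i):
    """Particle at i hops right; any contiguous pile ahead is pushed."""
    assert sites[i] == 1
    j = i + 1
    while j < len(sites) and sites[j] == 1:   # scan past the pile
        j += 1
    if j == len(sites):
        return sites[:]        # blocked at the boundary: no move
    new = sites[:]
    new[i] = 0                 # shifting the pile by one site is
    new[j] = 1                 # equivalent to moving its far end
    return new
```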
Entropic Lattice Boltzmann Methods
2001-12-10
...model of fluid dynamics in one dimension, first considered by Renda et al. in 1997 [14]. Here the geometric picture involves a four-dimensional polytope... convention of including constant terms in an extra column of the matrix, using the device of appending 1 to the column vector of unknowns. In general, there... we apply the entropic lattice Boltzmann method to a simple five-velocity model of fluid dynamics in one dimension, first considered by Renda et al.
A hybrid multigroup neutron-pattern model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
In this paper, we use the general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economic and simple iterative method of solving them. The algorithm can be used to calculate the pattern and the functionals as well as to correct the constants from the experimental data and to adapt the support over the constants to the engineering programs by reference to precision ones.
2011-11-01
...elastic range, and with some simple forms of progressing damage. However, a general physics-based methodology to assess the initial and lifetime... damage evolution in the RVE for all possible load histories. Microstructural data on initial configuration and damage progression in CMCs were... the damaged elements will have changed, hence, a progressive damage model. The crack opening for each crack type in each element is stored as a
Lee, Cameron C; Sheridan, Scott C
2018-07-01
Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
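The persistence baseline and median-absolute-percentage-error score reported above are simple to state; this sketch uses invented mortality counts, not the study's data:

```python
# Persistence baseline (today's value forecasts tomorrow's) and the
# median absolute percentage error (MdAPE) used to score models
# against it. The mortality series below is invented.
def persistence_forecast(series):
    return series[:-1]          # forecast for days 2..n

def mdape(obs, pred):
    errs = sorted(abs((o - p) / o) * 100 for o, p in zip(obs, pred))
    n = len(errs)
    return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])

deaths = [100, 110, 105, 120, 115]
pred = persistence_forecast(deaths)   # forecasts for days 2..5
obs = deaths[1:]                      # what actually happened
score = mdape(obs, pred)
```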
A (very) Simple Model for the Aspect Ratio of High-Order River Basins
NASA Astrophysics Data System (ADS)
Shelef, E.
2017-12-01
The structure of river networks dictates the distribution of elevation, water, and sediments across Earth's surface. Despite its intricate shape, the structure of high-order river networks displays some surprising regularities such as the consistent aspect ratio (i.e., basin's width over length) of river basins along linear mountain fronts. This ratio controls the spacing between high-order channels as well as the spacing between the depositional bodies they form. It is generally independent of tectonic and climatic conditions and is often attributed to the initial topography over which the network was formed. This study shows that a simple, cross-like channel model explains this ratio via a requirement for equal elevation gain between the outlets and drainage-divides of adjacent channels at topographic steady state. This model also explains the dependence of aspect ratio on channel concavity and the location of the widest point on a drainage divide.
Fractality à la carte: a general particle aggregation model.
Nicolás-Carlock, J R; Carrillo-Estrada, J L; Dossetti, V
2016-01-19
In nature, fractal structures emerge in a wide variety of systems as a local optimization of entropic and energetic distributions. The fractality of these systems determines many of their physical, chemical and/or biological properties. Thus, to comprehend the mechanisms that originate and control the fractality is highly relevant in many areas of science and technology. In studying clusters grown by aggregation phenomena, simple models have contributed to unveil some of the basic elements that give origin to fractality; however, the specific contribution from each of these elements to fractality has remained hidden in the complex dynamics. Here, we propose a simple and versatile model of particle aggregation that is, on the one hand, able to reveal the specific entropic and energetic contributions to the clusters' fractality and morphology and, on the other, capable of generating an ample assortment of rich natural-looking aggregates with any prescribed fractal dimension.
A Selected Library of Transport Coefficients for Combustion and Plasma Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L.D.
2000-08-01
COYOTE and similar combustion programs based on the multicomponent Navier-Stokes equations require the mixture viscosity, thermal conductivity, and species transport coefficients as input. This report documents a model of these molecular transport coefficients that is simpler than the general theory, but which provides adequate accuracy for many purposes. This model leads to a computationally convenient, self-contained, and easy-to-use source of such data in a format suitable for use by such programs. We present the data for various neutral species in two forms. The first form is a simple functional fit to the transport coefficients. The second form is the use of tabulated Lennard-Jones parameters in simple theoretical expressions for the gas-phase transport coefficients. The model then is extended to the case of a two-temperature plasma. Lennard-Jones parameters are given for a number of chemical species of interest in combustion research.
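As an example of the "simple functional fit" form mentioned above (not the report's own coefficients), Sutherland's law gives a gas viscosity from two constants; the values below are the standard ones for air:

```python
# Sutherland's law for the dynamic viscosity of a dilute gas:
# mu(T) = A * T**1.5 / (T + S). The constants below are the standard
# air values; the report's fits and species tables are separate data.
def sutherland_viscosity(T, A=1.458e-6, S=110.4):
    """Dynamic viscosity [Pa s] of air at temperature T [K]."""
    return A * T ** 1.5 / (T + S)

mu_300K = sutherland_viscosity(300.0)   # roughly 1.85e-5 Pa s
```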
NASA Technical Reports Server (NTRS)
Clancy, Edward A.; Smith, Joseph M.; Cohen, Richard J.
1991-01-01
Recent evidence has shown that a subtle alternation in the surface ECG (electrical alternans) may be correlated with the susceptibility to ventricular fibrillation. In the present work, the authors present evidence that a mechanical alternation in the heartbeat (mechanical alternans) generally accompanies electrical alternans. A simple finite-element computer model which emulates both the electrical and the mechanical activity of the heart is presented. A pilot animal study is also reported. The computer model and the animal study both found that (1) there exists a regime of combined electrical-mechanical alternans during the transition from a normal rhythm towards a fibrillatory rhythm, (2) the detected degree of alternation is correlated with the relative instability of the rhythm, and (3) the electrical and mechanical alternans may result from a dispersion in local electrical properties leading to a spatial-temporal alternation in the electrical conduction process.
Remote tropical and sub-tropical responses to Amazon deforestation
NASA Astrophysics Data System (ADS)
Badger, Andrew M.; Dirmeyer, Paul A.
2016-05-01
Replacing natural vegetation with realistic tropical crops over the Amazon region in a global Earth system model impacts vertical transport of heat and moisture, modifying the interaction between the atmospheric boundary layer and the free atmosphere. Vertical velocity is decreased over a majority of the Amazon region, shifting the ascending branch and modifying the seasonality of the Hadley circulation over the Atlantic and eastern Pacific oceans. Using a simple model that relates circulation changes to heating anomalies and generalizing the upper-atmosphere temperature response to deforestation, agreement is found between the response in the fully-coupled model and the simple solution. These changes to the large-scale dynamics significantly impact precipitation in several remote regions, namely sub-Saharan Africa, Mexico, the southwestern United States and extratropical South America, suggesting non-local climate repercussions for large-scale land use changes in the tropics are possible.
Inhomogeneity and velocity fields effects on scattering polarization in solar prominences
NASA Astrophysics Data System (ADS)
Milić, I.; Faurobert, M.
2015-10-01
One of the methods for diagnosing vector magnetic fields in solar prominences is the so-called "inversion" of observed polarized spectral lines. This inversion usually assumes a fairly simple generative model, and in this contribution we aim to study the possible systematic errors introduced by this assumption. On a two-dimensional toy model of a prominence, we first demonstrate the importance of multidimensional radiative transfer and horizontal inhomogeneities. These are able to induce a significant level of polarization in Stokes U without the need for a magnetic field. We then compute the emergent Stokes spectrum from a prominence that is pervaded by a vector magnetic field and use a simple, one-dimensional model to interpret these synthetic observations. We find that the inferred values of the magnetic field vector generally differ from the original ones. Most importantly, the magnetic field might seem more inclined than it really is.
Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling
NASA Technical Reports Server (NTRS)
Bridges, James
2016-01-01
When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.
A study of hyperelastic models for predicting the mechanical behavior of extensor apparatus.
Elyasi, Nahid; Taheri, Kimia Karimi; Narooei, Keivan; Taheri, Ali Karimi
2017-06-01
In this research, the nonlinear elastic behavior of the human extensor apparatus was investigated. To this end, the best material parameters of hyperelastic strain energy density functions, consisting of the Mooney-Rivlin, Ogden, invariants, and general exponential models, were first derived from the simple tension experimental data. Because the stress response of nonlinear models in other deformation modes is significant, the calculated parameters were used to study the pure shear and balanced biaxial tension behavior of the extensor apparatus. The results indicated that the Mooney-Rivlin model predicts an unstable behavior in the balanced biaxial deformation of the extensor apparatus, while the Ogden order 1 model represents a stable behavior, although the fit between the experimental data and the theoretical model was not satisfactory. However, the Ogden order 6 model was unstable in the simple tension mode, while the Ogden order 5 and general exponential models presented accurate and stable results. In order to reduce the number of material parameters, the invariants model with four material parameters was investigated, and this model presented the minimum error and stable behavior in all deformation modes. The ABAQUS Explicit solver was coupled with the VUMAT subroutine code of the invariants model to simulate the mechanical behavior of the central and terminal slips of the extensor apparatus during passive finger flexion, which is important in the prediction of boutonniere deformity and chronic mallet finger injuries, respectively. Also, to evaluate the adequacy of constitutive models in simulations, the results of the Ogden order 5 model were presented. The difference between the predictions was attributed to the better fitting of the invariants model compared with the Ogden model.
Understanding the complex dynamics of stock markets through cellular automata
NASA Astrophysics Data System (ADS)
Qiu, G.; Kandhai, D.; Sloot, P. M. A.
2007-04-01
We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
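As an illustration of the kind of lattice model the abstract describes, the following minimal sketch puts fundamentalists and imitators on a two-dimensional grid with a simple excess-demand price update. All rules and parameters here are hypothetical stand-ins, not those of the paper's CA model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                   # lattice side: the market has N*N traders
is_imitator = rng.random((N, N)) < 0.7   # assume 70% imitators, 30% fundamentalists
fundamental_value = 100.0
price = 100.0
prices = [price]

for t in range(200):
    orders = np.zeros((N, N))            # each trader orders -1 (sell), 0, or +1 (buy)
    # Fundamentalists trade against mispricing.
    orders[~is_imitator] = np.sign(fundamental_value - price)
    # Imitators follow the majority of their four lattice neighbours, plus noise.
    neighbour_sum = (np.roll(orders, 1, 0) + np.roll(orders, -1, 0) +
                     np.roll(orders, 1, 1) + np.roll(orders, -1, 1))
    noise = rng.choice([-1, 0, 1], size=(N, N))
    orders[is_imitator] = np.sign(neighbour_sum + noise)[is_imitator]
    # Simple price-update rule: aggregate excess demand moves the log price.
    excess_demand = orders.sum() / (N * N)
    price *= np.exp(0.01 * excess_demand)
    prices.append(price)

returns = np.diff(np.log(prices))        # log-returns of the simulated price series
```

Even this caricature produces local imitation feeding back into a global price, which is the mechanism the abstract credits for heavy-tailed returns.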
Glassy Behavior due to Kinetic Constraints: from Topological Foam to Backgammon
NASA Astrophysics Data System (ADS)
Sherrington, David
A study is reported of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained kinetics of a type initially suggested by topological considerations of foams and two-dimensional covalent glasses. It is demonstrated that macroscopic dynamical features characteristic of real glasses, such as two-time decays in energy and auto-correlation functions, arise and may be understood in terms of annihilation-diffusion concepts and theory. This recognition leads to a sequence of further models which (i) encapsulate the essence but are more readily simulated and open to easier analytic study, and (ii) allow generalization and extension to higher dimension. Fluctuation-dissipation relations are also considered and show novel aspects. The comparison is with strong glasses.
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
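As a minimal illustration of the Markov chain machinery such a chapter reviews, the sketch below builds a two-state transition matrix, computes its stationary distribution as the leading left eigenvector, and checks it against simulated state frequencies. The chain and its numbers are invented for illustration:

```python
import numpy as np

# Toy two-state Markov chain (e.g., over two nucleotide classes).
# Rows are the current state, columns the next state; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
stat = np.real(evecs[:, np.argmax(np.real(evals))])
stat = stat / stat.sum()                 # normalize to a probability vector

# Simulate a long path and compare empirical frequencies with `stat`.
rng = np.random.default_rng(1)
state, counts = 0, np.zeros(2)
for _ in range(50_000):
    state = rng.choice(2, p=P[state])
    counts[state] += 1
freq = counts / counts.sum()
```

For this matrix the stationary distribution is (2/3, 1/3), and the simulated frequencies converge to it; the same eigenvector computation generalizes to any finite state space.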
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
A gentle introduction to Rasch measurement models for metrologists
NASA Astrophysics Data System (ADS)
Mari, Luca; Wilson, Mark
2013-09-01
The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, as both can be understood as the measuring instruments of Rasch models and of psychometrics in general. From the talk, natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of current physical measurement models.
Raymer, James; Abel, Guy J.; Rogers, Andrei
2012-01-01
Population projection models that introduce uncertainty are a growing subset of projection models in general. In this paper, we focus on the importance of decisions made with regard to the model specifications adopted. We compare the forecasts and prediction intervals associated with four simple regional population projection models: an overall growth rate model, a component model with net migration, a component model with in-migration and out-migration rates, and a multiregional model with destination-specific out-migration rates. Vector autoregressive models are used to forecast future rates of growth, birth, death, net migration, in-migration and out-migration, and destination-specific out-migration for the North, Midlands and South regions in England. They are also used to forecast different international migration measures. The base data represent a time series of annual data provided by the Office for National Statistics from 1976 to 2008. The results illustrate how both the forecasted subpopulation totals and the corresponding prediction intervals differ for the multiregional model in comparison to other simpler models, as well as for different assumptions about international migration. The paper ends with a discussion of our results and possible directions for future research. PMID:23236221
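The vector autoregressive forecasts the abstract mentions can be illustrated, in a heavily simplified univariate form, by fitting an AR(1) to a synthetic series of annual growth rates and simulating forward paths to obtain a prediction interval. All data and coefficients here are invented, not the ONS series:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic annual growth-rate series (33 "years" of slowly varying rates).
rates = 0.005 + 0.004 * np.sin(np.arange(33) / 4.0) + rng.normal(0, 0.002, 33)

# Least-squares AR(1) fit: r_t = c + phi * r_{t-1} + eps_t
X = np.column_stack([np.ones(32), rates[:-1]])
c, phi = np.linalg.lstsq(X, rates[1:], rcond=None)[0]
resid_sd = np.std(rates[1:] - X @ np.array([c, phi]))

# Simulate many 10-step-ahead paths to get an 80% prediction interval.
n_sims, horizon = 2000, 10
paths = np.empty((n_sims, horizon))
for i in range(n_sims):
    r = rates[-1]
    for h in range(horizon):
        r = c + phi * r + rng.normal(0, resid_sd)
        paths[i, h] = r
lower, upper = np.percentile(paths[:, -1], [10, 90])
```

The paper's point carries over even to this toy: the width of (lower, upper) depends strongly on which model specification generated the simulated paths.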
A generalized gamma mixture model for ultrasonic tissue characterization.
Vegas-Sanchez-Ferrero, Gonzalo; Aja-Fernandez, Santiago; Palencia, Cesar; Martin-Fernandez, Marcos
2012-01-01
Several statistical models have been proposed in the literature to describe the behavior of speckles. Among them, the Nakagami distribution has proven to very accurately characterize the speckle behavior in tissues. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal is characterized by tissues of a different nature. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
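A numerical ML fit of a generalized gamma distribution, of the kind whose lack of closed form motivates the paper, can be sketched with SciPy's `gengamma`. This uses SciPy's generic numerical optimizer, not the authors' proposed methodology, and the synthetic data merely stand in for speckle amplitudes:

```python
import numpy as np
from scipy import stats

# Synthetic "speckle amplitude" samples from a known generalized gamma law.
a_true, c_true = 2.0, 1.5                       # gengamma shape parameters
data = stats.gengamma.rvs(a_true, c_true, size=5000,
                          random_state=np.random.default_rng(2))

# Numerical maximum-likelihood fit; the location parameter is pinned at zero
# because amplitudes are non-negative.
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(data, floc=0)
fitted = stats.gengamma(a_hat, c_hat, loc=loc_hat, scale=scale_hat)
```

The fitted distribution's low-order moments should track the sample's; the difficulty the paper addresses is that this generic optimization can be slow and fragile, which is what a dedicated estimation scheme improves on.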
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspot datasets analyzed.
The algebra of the general Markov model on phylogenetic trees and networks.
Sumner, J G; Holland, B R; Jarvis, P D
2012-04-01
It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.
Simulations of induced-charge electro-osmosis in microfluidic devices
NASA Astrophysics Data System (ADS)
Ben, Yuxing
2005-03-01
Theories of nonlinear electrokinetic phenomena generally assume a uniform, neutral bulk electrolyte in contact with a polarizable thin double layer near a metal or dielectric surface, which acts as a "capacitor skin". Induced-charge electro-osmosis (ICEO) is the general effect of nonlinear electro-osmotic slip, when an applied electric field acts on its own induced (diffuse) double-layer charge. In most theoretical and experimental work, ICEO has been studied in very simple geometries, such as colloidal spheres and planar, periodic micro-electrode arrays. Here we use finite-element simulations to predict how more complicated geometries of polarizable surfaces and/or electrodes yield flow profiles with subtle dependence on the amplitude and frequency of the applied voltage. We also consider how the simple model equations break down, due to surface conduction, bulk diffusion, and concentration polarization, for large applied voltages (as in most experiments).
Color generalization across hue and saturation in chicks described by a simple (Bayesian) model.
Scholtyssek, Christine; Osorio, Daniel C; Baddeley, Roland J
2016-08-01
Color conveys important information for birds in tasks such as foraging and mate choice, but in the natural world color signals can vary substantially, so birds may benefit from generalizing responses to perceptually discriminable colors. Studying color generalization is therefore a way to understand how birds take account of suprathreshold stimulus variations in decision making. Former studies on color generalization have focused on hue variation, but natural colors often vary in saturation, which could be an additional, independent source of information. We combine behavioral experiments and statistical modeling to investigate whether color generalization by poultry chicks depends on the chromatic dimension in which colors vary. Chicks were trained to discriminate colors separated by equal distances on a hue or a saturation dimension, in a receptor-based color space. Generalization tests then compared the birds' responses to familiar and novel colors lying on the same chromatic dimension. To characterize generalization we introduce a Bayesian model that extracts a threshold color distance beyond which chicks treat novel colors as significantly different from the rewarded training color. These thresholds were the same for generalization along the hue and saturation dimensions, demonstrating that responses to novel colors depend on similarity and expected variation of color signals but are independent of the chromatic dimension.
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
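The Monte Carlo idea can be caricatured without any of the paper's ray-tracing geometry: if each wall interaction absorbs a photon with probability equal to the wall emissivity, and each reflection gives it some fixed chance of escaping back through the aperture, the effective emissivity is the fraction of photons eventually absorbed. Both probabilities below are hypothetical stand-ins for the geometry the paper actually traces:

```python
import random

def effective_emissivity(eps_wall, p_escape, n_photons=200_000, seed=3):
    """Crude Monte Carlo: a photon entering the cavity is absorbed at a wall
    with probability eps_wall; otherwise it reflects, and after each
    reflection it escapes through the aperture with probability p_escape."""
    random.seed(seed)
    absorbed = 0
    for _ in range(n_photons):
        while True:
            if random.random() < eps_wall:      # absorbed by the cavity wall
                absorbed += 1
                break
            if random.random() < p_escape:      # lost back through the aperture
                break
    return absorbed / n_photons

# A deep cavity (small escape probability) looks nearly black even for
# moderately emissive walls.
e_eff = effective_emissivity(eps_wall=0.7, p_escape=0.05)
```

For these numbers the analytic answer is eps/(eps + (1-eps)*p_escape) ≈ 0.979, and the Monte Carlo estimate converges to it as the photon count grows, which mirrors the convergence analysis used for validation in the paper.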
Complexity in language learning and treatment.
Thompson, Cynthia K
2007-02-01
To introduce a Clinical Forum focused on the Complexity Account of Treatment Efficacy (C. K. Thompson, L. P. Shapiro, S. Kiran, & J. Sobecks, 2003), a counterintuitive but effective approach for treating language disorders. This approach espouses training complex structures to promote generalized improvement of simpler, linguistically related structures. Three articles are included, addressing complexity in treatment of phonology, lexical-semantics, and syntax. Complexity hierarchies based on models of normal language representation and processing are discussed in each language domain. In addition, each article presents single-subject controlled experimental studies examining the complexity effect. By counterbalancing treatment of complex and simple structures across participants, acquisition and generalization patterns are examined as they emerge. In all language domains, cascading generalization occurs from more to less complex structures; however, the opposite pattern is rarely seen. The results are robust, with replication within and across participants. The construct of complexity appears to be a general principle that is relevant to treating a range of language disorders in both children and adults. While challenging the long-standing clinical notion that treatment should begin with simple structures, mounting evidence points toward the facilitative effects of using more complex structures as a starting point for treatment.
A Simple Illustrative Model of a Charge-Coupled Device (CCD)
ERIC Educational Resources Information Center
Santillo, Michael F.
2009-01-01
Many students (as well as the general public) use modern technology without an understanding of how these devices actually work. They are what scientists refer to in the laboratory as "black boxes." Students often wonder how physics relates to the technology used in the real world and are interested in such applications. An example of one such…
Working Memory and Intelligence Are Highly Related Constructs, but Why?
ERIC Educational Resources Information Center
Colom, Roberto; Abad, Francisco J.; Quiroga, M. Angeles; Shih, Pei Chun; Flores-Mendoza, Carmen
2008-01-01
Working memory and the general factor of intelligence (g) are highly related constructs. However, we still don't know why. Some models support the central role of simple short-term storage, whereas others appeal to executive functions like the control of attention. Nevertheless, the available empirical evidence does not suffice to get an answer,…
USDA-ARS?s Scientific Manuscript database
Cations, such as Ca and Mg, are generally thought to alleviate toxicities of trace metals through site-specific competition (as incorporated in the biotic ligand model, BLM). Short term (48 h) experiments were conducted using cowpea (Vigna unguiculata L. Walp.) seedlings in simple nutrient solution...
The Relationship between Looks and Personality: Strong and General or Content Specific?
ERIC Educational Resources Information Center
Longo, Laura C.; Ashmore, Richard D.
Most researchers explain the attractiveness-personality link in terms of a simple self-fulfilling prophecy model: a person's good looks evoke a "Beauty is Good" stereotype that causes positive treatment by others, which, in turn, causes the target to develop a "good personality." An alternative conceptual framework expands on the current implicit…
Marbles: A Means of Introducing Students to Scattering Concepts
ERIC Educational Resources Information Center
Bender, K. M.; Westphal, P. S.; Ramsier, R. D.
2008-01-01
The purpose of this activity is to introduce students to concepts of short-range and long-range scattering, and engage them in using indirect measurements and probabilistic models. The activity uses simple and readily available apparatus, and can be adapted for use with secondary level students as well as those in general physics courses or…
Structural stocking guides: a new look at an old friend
Jeffrey H. Gove
2004-01-01
A parameter recovery-based model is developed that allows the incorporation of diameter distribution information directly into stocking guides. The method is completely general in applicability across different guides and forest types and could be adapted to other systems such as density management diagrams. It relies on a simple measure of diameter distribution shape...
NASA Astrophysics Data System (ADS)
Federici, Stefania; Oliviero, Giulio; Hamad-Schifferli, Kimberly; Bergese, Paolo
2010-12-01
We report the first example of microcantilever beams that are reversibly driven by protein thin-film machines fuelled by cycling the salt concentration of the surrounding solution. We also show that, upon the same salinity stimulus, the drive can be completely reversed in its direction by introducing a surface coating ligand. Experimental results are discussed throughout within a general yet simple thermodynamic model.
Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models
NASA Technical Reports Server (NTRS)
Xu, L.
1994-01-01
A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil were used to validate the model-simulated surface energy fluxes and surface temperature.
NASA Technical Reports Server (NTRS)
Montoya, L. C.; Flechner, S. G.; Jacobs, P. F.
1977-01-01
Pressure and spanwise load distributions on a first-generation jet transport semispan model at high subsonic speeds are presented for the basic wing and for configurations with an upper winglet only, upper and lower winglets, and a simple wing-tip extension. Selected data are discussed to show the general trends and effects of the various configurations.
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-01-01
This paper mainly studies the globally fixed-time synchronization of a class of coupled neutral-type neural networks with mixed time-varying delays via discontinuous feedback controllers. Compared with the traditional neutral-type neural network model, the model in this paper is more general. A class of general discontinuous feedback controllers is designed. With the help of the definition of fixed-time synchronization, the upper right-hand derivative, and a simple Lyapunov function, some easily verifiable and extensible synchronization criteria are derived to guarantee the fixed-time synchronization between the drive and response systems. Finally, two numerical simulations are given to verify the correctness of the results.
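The notion of fixed-time convergence behind such synchronization criteria can be demonstrated on a scalar error system: with both a sub-linear and a super-linear damping term, the settling time is bounded by a constant that does not depend on the initial condition. The gains and exponents below are arbitrary choices for illustration, not values from the paper:

```python
import math

def settle_time(x0, k1=2.0, k2=2.0, alpha=0.5, beta=1.5, dt=1e-4, tol=1e-6):
    """Euler simulation of the scalar error dynamics
        e' = -k1*sign(e)*|e|**alpha - k2*sign(e)*|e|**beta,  0 < alpha < 1 < beta,
    whose origin is fixed-time stable: the settling time admits the bound
    1/(k1*(1-alpha)) + 1/(k2*(beta-1)) independent of x0."""
    e, t = float(x0), 0.0
    while abs(e) > tol and t < 10.0:
        s = math.copysign(1.0, e)
        e += dt * (-k1 * s * abs(e) ** alpha - k2 * s * abs(e) ** beta)
        t += dt
    return t

t_small = settle_time(1.0)      # modest initial error
t_large = settle_time(1e6)      # error six orders of magnitude larger
```

For these gains the bound evaluates to 2, and the simulated settling times for initial errors of 1 and 10^6 differ only modestly; that uniform bound is precisely what separates fixed-time from merely finite-time convergence.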
Simple models for the simulation of submarine melt for a Greenland glacial system model
NASA Astrophysics Data System (ADS)
Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey
2018-01-01
Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding-line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity of using models with extremely high resolution, of the order of a few hundred meters. That requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data.
Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.
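The flavor of a turbulent line-plume melt parameterization can be sketched with the commonly cited cube-root dependence of melt on subglacial discharge and a roughly linear dependence on ocean thermal forcing. The functional form is a textbook plume-theory scaling, and the prefactor and units below are hypothetical, standing in for the paper's order-1 scaling factor:

```python
def line_plume_melt(q, thermal_forcing, k=1.0):
    """Hypothetical line-plume melt scaling: melt ~ k * q**(1/3) * TF,
    with q the subglacial discharge per unit grounding-line width and TF
    the ocean thermal forcing. k stands in for the tuned order-1 scaling
    factor; units are deliberately left arbitrary."""
    return k * q ** (1.0 / 3.0) * thermal_forcing

melt_weak = line_plume_melt(0.01, 2.0)
melt_strong = line_plume_melt(0.10, 2.0)    # ten times the discharge
ratio = melt_strong / melt_weak             # cube-root growth: about 2.15
```

The cube root is the practically important feature: a tenfold increase in discharge raises melt by only about a factor of two, whereas warming the fjord water acts almost linearly.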
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
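The core move, replacing a degenerate weighted ensemble by draws from a maximum-entropy parametric model matched to its moments, can be sketched in one dimension; with only the first two moments constrained, the maximum-entropy model is a Gaussian. The ensemble, weights, and sizes below are invented, and this single-Gaussian version omits the paper's mixture construction:

```python
import numpy as np

rng = np.random.default_rng(4)

def moment_matched_resample(particles, weights, n_out):
    """Parametric resampling sketch: replace the weighted ensemble by draws
    from the maximum-entropy distribution consistent with its first two
    moments, i.e. a Gaussian with the ensemble's weighted mean and variance."""
    mean = np.sum(weights * particles)
    var = np.sum(weights * (particles - mean) ** 2)
    return rng.normal(mean, np.sqrt(var), size=n_out)

# Small, badly weighted ensemble (most weight on a few particles) ...
particles = rng.normal(0.0, 1.0, size=20)
weights = np.exp(-0.5 * (particles - 1.0) ** 2)
weights /= weights.sum()
# ... replaced by a fresh, evenly weighted ensemble with matched moments.
new_particles = moment_matched_resample(particles, weights, n_out=5000)
```

Unlike multinomial resampling, the new ensemble contains no duplicated particles, at the cost of discarding all information beyond the fitted moments; using Gaussian mixtures instead of one Gaussian is what recovers non-Gaussian structure.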
No way out? The double-bind in seeking global prosperity alongside mitigated climate change
NASA Astrophysics Data System (ADS)
Garrett, T. J.
2012-01-01
In a prior study (Garrett, 2011), I introduced a simple economic growth model designed to be consistent with general thermodynamic laws. Unlike traditional economic models, civilization is viewed only as a well-mixed global whole with no distinction made between individual nations, economic sectors, labor, or capital investments. At the model core is a hypothesis that the global economy's current rate of primary energy consumption is tied through a constant to a very general representation of its historically accumulated wealth. Observations support this hypothesis, and indicate that the constant's value is λ = 9.7 ± 0.3 milliwatts per 1990 US dollar. It is this link that allows for treatment of seemingly complex economic systems as simple physical systems. Here, this growth model is coupled to a linear formulation for the evolution of globally well-mixed atmospheric CO2 concentrations. While very simple, the coupled model provides faithful multi-decadal hindcasts of trajectories in gross world product (GWP) and CO2. Extended into the future, the model suggests that the well-known IPCC SRES scenarios substantially underestimate how much CO2 levels will rise for a given level of future economic prosperity. For one, global CO2 emission rates cannot be decoupled from wealth through efficiency gains. For another, like a long-term natural disaster, future greenhouse warming can be expected to act as an inflationary drag on the real growth of global wealth. For atmospheric CO2 concentrations to remain below a "dangerous" level of 450 ppmv (Hansen et al., 2007), model forecasts suggest that there will have to be some combination of an unrealistically rapid rate of energy decarbonization and nearly immediate reductions in global civilization wealth. Effectively, it appears that civilization may be in a double-bind.
If civilization does not collapse quickly this century, then CO2 levels will likely end up exceeding 1000 ppmv; but, if CO2 levels rise by this much, then the risk is that civilization will gradually tend towards collapse.
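The core feedback loop of the coupled model is easy to sketch numerically: wealth compounds, energy demand is tied to wealth through λ, and CO2 accumulates in proportion to energy use. The toy integration below assumes a constant growth rate and a constant emission factor; other than λ (taken from the abstract), all values and units are illustrative assumptions, not the paper's calibrated ones:

```python
def garrett_toy(years, C0=1000.0, co2_0=390.0, eta=0.022,
                lam=9.7e-3, kappa=0.2, dt=1.0):
    """Forward-Euler sketch of the coupled wealth-CO2 model: wealth C
    grows at constant rate eta (1/yr), energy demand is a = lam*C, and
    CO2 (ppmv) rises in proportion to a. All units are toy units and
    kappa is an assumed emission factor."""
    C, co2 = C0, co2_0
    for _ in range(int(years / dt)):
        a = lam * C               # energy consumption tied to wealth
        co2 += kappa * a * dt     # emissions accumulate (no decarbonization)
        C *= 1.0 + eta * dt       # accumulated wealth compounds
    return C, co2

print(garrett_toy(50))
```

Even this toy version reproduces the qualitative double-bind: with no decarbonization term, the only way to flatten the CO2 trajectory is to shrink C itself.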
Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred
2015-11-01
We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
A convenient basis for the Izergin-Korepin model
NASA Astrophysics Data System (ADS)
Qiao, Yi; Zhang, Xin; Hao, Kun; Cao, Junpeng; Li, Guang-Liang; Yang, Wen-Li; Shi, Kangjie
2018-05-01
We propose a convenient orthogonal basis of the Hilbert space for the quantum spin chain associated with the A2(2) algebra (or the Izergin-Korepin model). It is shown that, compared with the original basis, the monodromy-matrix elements acting on this basis take relatively simple forms, quite similar to that for the quantum spin chain associated with the An algebra in the so-called F-basis. As an application of our general results, we present the explicit recursive expressions of the Bethe states in this basis for the Izergin-Korepin model.
A comprehensive surface-groundwater flow model
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert
1993-02-01
In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km² watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
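The bridge between a computed electrostatic energy and a pKa shift is a one-line thermodynamic conversion, dpKa = ddG / (ln(10)·R·T), which is what semi-macroscopic models like MEAD ultimately feed their parameters into. A hedged sketch (the energy value below is illustrative, not a MEAD output or a value from this study):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def pka_shift(delta_g_kj_per_mol, temp_k=298.15):
    """pKa shift produced by an electrostatic free-energy change (kJ/mol)
    of the deprotonated state: dpKa = ddG / (ln(10) * R * T)."""
    return (delta_g_kj_per_mol * 1000.0) / (math.log(10) * R * temp_k)

# A stabilizing interaction of about -5.7 kJ/mol with the thiolate lowers
# the cysteine pKa by roughly one unit (hypothetical number).
print(pka_shift(-5.7))
```

At room temperature one pKa unit corresponds to roughly 5.7 kJ/mol, which is why even modest electrostatic perturbations can shift a cysteine pKa measurably.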
A robust data scaling algorithm to improve classification accuracies in biomedical data.
Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran
2016-09-09
Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types, covering a wide range of applications. The resultant performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform those using data scaled by the Min-max and Z-score algorithms, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show models learned from data scaled by the GL algorithm have higher accuracy compared to the commonly used data scaling algorithms.
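The idea of scaling through a sigmoid fitted to the empirical CDF can be sketched with the symmetric logistic special case, setting its location and scale robustly from the median and interquartile range (the published GL algorithm fits a full generalized logistic; this sketch only illustrates why the approach is outlier-robust):

```python
import math
import statistics

def gl_scale(values):
    """Scale data into (0, 1] with a logistic curve whose location and
    scale come from the median and IQR. A sketch of ECDF fitting: the GL
    paper fits a generalized logistic, this is the symmetric special case."""
    xs = sorted(values)
    med = statistics.median(xs)
    q1, q3 = xs[len(xs) // 4], xs[(3 * len(xs)) // 4]
    iqr = max(q3 - q1, 1e-12)
    # For a logistic CDF the interquartile range equals 2*ln(3)*s, so:
    s = iqr / (2.0 * math.log(3.0))
    return [1.0 / (1.0 + math.exp(-(x - med) / s)) for x in values]

data = [1, 2, 3, 4, 100]          # note the extreme outlier
print(gl_scale(data))
```

Unlike min-max scaling, the outlier saturates near 1 without compressing the spread of the remaining points, because the location and scale depend only on quantiles.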
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. 
On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
Dynamics of non-Markovian exclusion processes
NASA Astrophysics Data System (ADS)
Khoromskaia, Diana; Harris, Rosemary J.; Grosskinsky, Stefan
2014-12-01
Driven diffusive systems are often used as simple discrete models of collective transport phenomena in physics, biology or social sciences. Restricting attention to one-dimensional geometries, the asymmetric simple exclusion process (ASEP) plays a paradigmatic role to describe noise-activated driven motion of entities subject to an excluded volume interaction and many variants have been studied in recent years. While in the standard ASEP the noise is Poissonian and the process is therefore Markovian, in many applications the statistics of the activating noise has a non-standard distribution with possible memory effects resulting from internal degrees of freedom or external sources. This leads to temporal correlations and can significantly affect the shape of the current-density relation as has been studied recently for a number of scenarios. In this paper we report a general framework to derive the fundamental diagram of ASEPs driven by non-Poissonian noise by using effectively only two simple quantities, viz., the mean residual lifetime of the jump distribution and a suitably defined temporal correlation length. We corroborate our results by detailed numerical studies for various noise statistics under periodic boundary conditions and discuss how our approach can be applied to more general driven diffusive systems.
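For the Markovian (Poissonian) baseline, the fundamental diagram is easy to reproduce numerically. The sketch below (simulation sizes are arbitrary choices, not from the paper) runs a random-sequential TASEP on a ring and compares the measured current with the exact finite-ring value N(L-N)/(L(L-1)), which approaches ρ(1-ρ) for large L:

```python
import random

def tasep_current(L=100, n_particles=50, sweeps=5000, seed=1):
    """Random-sequential TASEP on a ring: pick a site uniformly at random;
    if it is occupied and the next site is empty, the particle hops.
    Returns the time-averaged current (hops per attempted move)."""
    rng = random.Random(seed)
    occ = [1] * n_particles + [0] * (L - n_particles)
    rng.shuffle(occ)              # uniform initial state (stationary on a ring)
    hops, attempts = 0, sweeps * L
    for _ in range(attempts):
        i = rng.randrange(L)
        j = (i + 1) % L
        if occ[i] and not occ[j]:
            occ[i], occ[j] = 0, 1
            hops += 1
    return hops / attempts

rho = 0.5
print(tasep_current(), rho * (1 - rho))  # exact ring value: N(L-N)/(L(L-1))
```

Replacing the exponential waiting times with a non-Poissonian distribution is exactly where the paper's two quantities, the mean residual lifetime and the temporal correlation length, enter and reshape this current-density relation.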
Measuring Household Vulnerability: A Fuzzy Approach
NASA Astrophysics Data System (ADS)
Sethi, G.; Pierce, S. A.
2016-12-01
This research develops an index of vulnerability for Ugandan households using a variety of economic, social and environmental variables, with two objectives. First, there is only a small body of research that measures household vulnerability. Given the stresses faced by households susceptible to water, environment, food, livelihood, energy, and health security concerns, it is critical that they be identified in order to make effective policy. We draw on the socio-ecological systems (SES) framework described by Ostrom (2009) and adapt the model developed by Giupponi, Giove, and Giannini (2013) to develop a composite measure. Second, most indices in the literature are linear in nature, relying on simple weighted averages. In this research, we contrast the results obtained by a simple weighted average with those obtained by using the Choquet integral. The Choquet integral is defined with respect to a fuzzy measure and generalizes the Lebesgue integral. Due to its non-additive nature, the Choquet integral offers a more general approach. Our results reveal that all households included in this study are highly vulnerable, and that vulnerability scores obtained by the fuzzy approach are significantly different from those obtained by using the simple weighted average (p = 9.46e-160).
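The discrete Choquet integral itself is a short computation: sort the criterion scores in ascending order, then weight each increment by the fuzzy measure of the set of criteria still at or above that level. A sketch with hypothetical criteria and measure values (not the study's data; the measure here is mildly super-additive, modeling complementary securities):

```python
def choquet(values, mu):
    """Discrete Choquet integral of a score vector w.r.t. a fuzzy measure.
    `values` maps criterion -> score; `mu` maps frozensets of criteria ->
    measure, with mu(empty set) = 0 and mu(all criteria) = 1."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, score in items:
        total += (score - prev) * mu[frozenset(remaining)]
        prev = score
        remaining.discard(crit)
    return total

# Hypothetical two-criterion example: the pair counts for more than the
# sum of the singleton measures (1.0 > 0.5 + 0.4), so the criteria interact.
mu = {frozenset(): 0.0, frozenset({"water"}): 0.5,
      frozenset({"food"}): 0.4, frozenset({"water", "food"}): 1.0}
scores = {"water": 0.6, "food": 0.3}
print(choquet(scores, mu))        # 0.3*1.0 + (0.6-0.3)*0.5 = 0.45
```

With an additive measure the Choquet integral collapses to the simple weighted average, which is exactly the comparison the abstract draws.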
Multiconfigurational quantum propagation with trajectory-guided generalized coherent states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigolo, Adriano, E-mail: agrigolo@ifi.unicamp.br; Aguiar, Marcus A. M. de, E-mail: aguiar@ifi.unicamp.br; Viscondi, Thiago F., E-mail: viscondi@if.usp.br
2016-03-07
A generalized version of the coupled coherent states method for coherent states of arbitrary Lie groups is developed. In contrast to the original formulation, which is restricted to frozen-Gaussian basis sets, the extended method is suitable for propagating quantum states of systems featuring diversified physical properties, such as spin degrees of freedom or particle indistinguishability. The approach is illustrated with simple models for interacting bosons trapped in double- and triple-well potentials, most adequately described in terms of SU(2) and SU(3) bosonic coherent states, respectively.
GIS data models for coal geology
DOE Office of Scientific and Technical Information (OSTI.GOV)
McColloch, G.H. Jr.; Timberlake, K.J.; Oldham, A.V.
A variety of spatial data models can be applied to different aspects of coal geology. The simple vector data models found in various Computer Aided Drafting (CAD) programs are sometimes used for routine mapping and some simple analyses. However, more sophisticated applications that maintain the topological relationships between cartographic elements enhance analytical potential. Also, vector data models are best for producing various types of high quality, conventional maps. The raster data model is generally considered best for representing data that varies continuously over a geographic area, such as the thickness of a coal bed. Information is lost when contour lines are threaded through raster grids for display, so volumes and tonnages are more accurately determined by working directly with raster data. Raster models are especially well suited to computationally simple surface-to-surface analysis, or overlay functions. Another data model, the triangulated irregular network (TIN), is superior at portraying visible surfaces because many TIN programs support break lines. Break lines locate sharp breaks in slope such as those generated by bodies of water or ridge crests. TINs also "honor" data points, so that a surface generated from a set of points will be forced to pass through those points. TINs, or grids generated from TINs, are particularly good at determining the intersections of surfaces such as coal seam outcrops and geologic unit boundaries. No single technique works best for all coal-related applications. The ability to use a variety of data models, and to transform from one model to another, is essential for obtaining optimum results in a timely manner.
Load partitioning in Al{sub 2}O{sub 3}-Al composites with three-dimensional periodic architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, M. L.; Rao, R.; Almer, J. D.
2009-05-01
Interpenetrating composites are created by infiltration of liquid aluminum into three-dimensional (3-D) periodic Al{sub 2}O{sub 3} preforms with simple tetragonal symmetry produced by direct-write assembly. Volume-averaged lattice strains in the Al{sub 2}O{sub 3} phase of the composite are measured by synchrotron X-ray diffraction for various uniaxial compression stresses up to -350 MPa. Load transfer, found by diffraction to occur from the metal phase to the ceramic phase, is in general agreement with simple rule-of-mixture models and in better agreement with more complex, 3-D finite-element models that account for metal plasticity and details of the geometry of both phases. Spatially resolved diffraction measurements show variations in load transfer at two different positions within the composite.
Characterization of biofilms with a fiber optic spectrometer
NASA Astrophysics Data System (ADS)
Krautwald, S.; Tonyali, A.; Fellerhoff, B.; Franke, Hilmar; Tamachkiarov, A.; Griebe, T.; Flemming, H. C.
2000-12-01
Optical sensing is one promising approach to monitor biofilms at an early stage. Generally, natural biofilms are quite inhomogeneous; we therefore start the investigation with suspensions of dead bacteria in water as a simple model for a biofilm. An experimental arrangement based on a white-light fiber optic spectrometer is used for measuring the density of a thin film with a local resolution on the order of several µm. The method is applied to model biofilms. In a computer-controlled procedure, reflectance spectra may be recorded at different positions in the x-y plane. Scanning through thin suspension regions of bacteria between glass plates allows an estimation of the refractive index of bacteria. Taking advantage of the light-collecting property of the glass substrate, a simple measurement of the fluorescence with local resolution is demonstrated as well.
On a simple molecular–statistical model of a liquid-crystal suspension of anisometric particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakhlevnykh, A. N., E-mail: anz@psu.ru; Lubnin, M. S.; Petrov, D. A.
2016-11-15
A molecular–statistical mean-field theory is constructed for suspensions of anisometric particles in nematic liquid crystals (NLCs). The spherical approximation, well known in the physics of ferromagnetic materials, is considered; it allows one to obtain an analytic expression for the free energy and simple equations for the orientational state of a suspension that describe the temperature dependence of the order parameters of the suspension components. The transition temperature from ordered to isotropic state and the jumps in the order parameters at the phase-transition point are studied as a function of the anchoring energy of dispersed particles to the matrix, the concentration of the impurity phase, and the size of particles. The proposed approach allows one to generalize the model to the case of biaxial ordering.
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously the Exploration Concepts Branch of NASA Langley Research Center has developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of being able to handle a large variety of structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA based JAVA processing procedures and associated process control classes coupled with the general utility of Loft and HSLoad make it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, wings, through full air and space vehicle general arrangements.
Dynamical spreading of small bodies in 1:1 resonance with planets by the diurnal Yarkovsky effect
NASA Astrophysics Data System (ADS)
Wang, Xuefeng; Hou, Xiyun
2017-10-01
A simple model is introduced to describe the inherent dynamics of Trojans in the presence of the diurnal Yarkovsky effect. For different spin statuses, the orbital elements of the Trojans (mainly semimajor axis, eccentricity and inclination) undergo different variations. The variation rate is generally very small, but the total variation of the semimajor axis or the orbit eccentricity over the age of the Solar system may be large enough to send small Trojans out of the regular region (or, vice versa, to capture small bodies in the regular region). To verify the analytical results, we first carry out numerical simulations in a simple model, and then generalize these to two 'real' systems, namely the Sun-Jupiter system and the Sun-Earth system. In the Sun-Jupiter system, where the motion of Trojans is regular, the Yarkovsky effect gradually alters the libration width or the orbit eccentricity, forcing the Trojan to move from regular regions to chaotic regions, where chaos may eventually cause it to escape. In the Sun-Earth system, where the motion of Trojans is generally chaotic, our limited numerical simulations indicate that the Yarkovsky effect is negligible for Trojans of 100 m in size, and even for larger ones. The Yarkovsky effect on small bodies captured in other 1:1 resonance orbits is also briefly discussed.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
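The Markov-model machinery behind such QALY estimates can be sketched as a discounted cohort simulation. The three-state structure, transition probabilities, and utilities below are hypothetical placeholders for illustration, not the PINCER study's models or inputs:

```python
def markov_qalys(trans, utilities, horizon_years, discount=0.035):
    """Discounted QALYs for a cohort starting in state 0 of a simple
    Markov model. `trans` is a row-stochastic transition matrix applied
    once per yearly cycle; `utilities` gives the QALY weight per state."""
    n = len(utilities)
    state = [0.0] * n
    state[0] = 1.0
    qalys = 0.0
    for year in range(horizon_years):
        qalys += sum(p * u for p, u in zip(state, utilities)) / (1 + discount) ** year
        state = [sum(state[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return qalys

# Hypothetical three-state model: well / post-event / dead (absorbing).
trans = [[0.95, 0.04, 0.01],
         [0.00, 0.93, 0.07],
         [0.00, 0.00, 1.00]]
utilities = [1.0, 0.75, 0.0]
print(markov_qalys(trans, utilities, horizon_years=5))
```

Running such a model with and without an intervention's effect on the event probability, and attaching costs to each state, yields the incremental cost per QALY reported above.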
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration are also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
NASA Technical Reports Server (NTRS)
Schmidt, H.; Tango, G. J.; Werby, M. F.
1985-01-01
A new matrix method for rapid wave propagation modeling in generalized stratified media, which has recently been applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken by the NATO SACLANT and NORDA Numerical Modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The present general-applications SAFARI program uses a Direct Global Matrix Approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology and ultrasonic scattering.
Pilots Rate Augmented Generalized Predictive Control for Reconfiguration
NASA Technical Reports Server (NTRS)
Soloway, Don; Haley, Pam
2004-01-01
The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce zero steady-state error led to the neural network predictor model becoming redundant for the task. Instead of using a neural network predictor model, a nominal single-point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with less rate requirements from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.
Kuan, Hui-Shun; Betterton, Meredith D.
2016-01-01
Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions.
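The ingredients of such a model can be sketched with a minimal kinetic Monte Carlo simulation: two antiparallel lanes with exclusion hopping, Langmuir binding/unbinding, boundary injection/extraction, and lane switching. The update rule, boundary conventions, and all rate values below are illustrative assumptions, not the paper's parameterization.

```python
import random

def two_lane_tasep(L=60, sweeps=4000, alpha=0.4, beta=0.4,
                   k_on=0.05, k_off=0.05, k_switch=0.3, seed=1):
    """Two antiparallel exclusion lanes with binding kinetics and switching.
    Lane 0 hops right, lane 1 hops left (random-sequential updates)."""
    rng = random.Random(seed)
    lanes = [[0] * L, [0] * L]
    dens, measured = [0.0, 0.0], 0
    for sweep in range(sweeps):
        for _ in range(2 * L):
            lane = rng.randrange(2)
            i = rng.randrange(L)
            fwd = 1 if lane == 0 else -1
            entry, exit_ = (0, L - 1) if lane == 0 else (L - 1, 0)
            site = lanes[lane]
            if site[i] == 0:
                # boundary injection at the entry site, Langmuir binding in bulk
                if rng.random() < (alpha if i == entry else k_on):
                    site[i] = 1
                continue
            r = rng.random()
            if r < k_off and i != entry and i != exit_:
                site[i] = 0                      # unbind into solution
            elif r < k_off + k_switch:
                if lanes[1 - lane][i] == 0:      # switch to the opposite lane
                    site[i], lanes[1 - lane][i] = 0, 1
            else:
                if i == exit_:
                    if rng.random() < beta:      # extraction at the lane's end
                        site[i] = 0
                elif site[i + fwd] == 0:         # forward hop with exclusion
                    site[i], site[i + fwd] = 0, 1
        if sweep >= sweeps // 2:                 # measure after burn-in
            measured += 1
            for k in (0, 1):
                dens[k] += sum(lanes[k]) / L
    return [d / measured for d in dens]

rho = two_lane_tasep()
print(rho)  # time-averaged steady-state densities on the two lanes
```

Sweeping `k_switch` against the boundary rates in a sketch like this is the numerical counterpart of the phase-plane analysis described in the abstract.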
Cranking Calculation in the sdg Interacting Boson Model
NASA Astrophysics Data System (ADS)
Wang, Baolin
1998-10-01
A self-consistent cranking calculation of the intrinsic states of the sdg interacting boson model is performed. The formulae of the moment of inertia are given for a general sdg IBM multipole Hamiltonian with one- and two-body terms. For the quadrupole interaction, the intrinsic states, the quadrupole and hexadecapole deformations, and the moment of inertia are investigated in the large-N limit. Using a simple Hamiltonian, the results of numerical calculations for 152,154Sm and 154-160Gd satisfactorily reproduce the experimental data.
The ARC/INFO geographic information system
NASA Astrophysics Data System (ADS)
Morehouse, Scott
1992-05-01
ARC/INFO is a general-purpose system for processing geographic information. It is based on a relatively simple model of geographic space—the coverage—and contains an extensive set of geoprocessing tools which operate on coverages. ARC/INFO is used in a wide variety of applications areas, including: natural-resource inventory and planning, cadastral database development and mapping, urban and regional planning, and cartography. This paper is an overview of ARC/INFO and discusses the ARC/INFO conceptual architecture, data model, operators, and user interface.
Relative entropy as a universal metric for multiscale errors
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2010-06-01
We show that the relative entropy, Srel , suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.
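Concretely, for discrete configurations the relative entropy is S_rel = Σ_i p_FP(i) ln[p_FP(i)/p_CG(i)], which is non-negative and vanishes only when the coarse-grained ensemble reproduces the first-principles one. A toy sketch (the four-state energies are invented purely for illustration):

```python
import math

def relative_entropy(p_fp, p_cg):
    """S_rel = sum_i p_FP(i) * ln(p_FP(i) / p_CG(i)) over discrete states."""
    return sum(p * math.log(p / q) for p, q in zip(p_fp, p_cg) if p > 0)

def boltzmann(energies, beta=1.0):
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

# hypothetical "first-principles" and coarse-grained energies for four states
e_fp = [0.0, 1.0, 2.0, 3.0]
e_cg = [0.0, 1.2, 1.8, 3.5]          # imperfect coarse-grained landscape

s = relative_entropy(boltzmann(e_fp), boltzmann(e_cg))
print(s)                              # positive: the CG model misses something
print(relative_entropy(boltzmann(e_fp), boltzmann(e_fp)))  # 0.0: perfect match
```

In this discrete picture, S_rel grows with the fluctuations of the energy difference between the two landscapes, which is the property the abstract exploits as an error metric.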
Impact resistance of fiber composites - Energy-absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1985-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanistic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
Impact resistance of fiber composites: Energy absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1983-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanistic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
A scheme for parameterizing ice cloud water content in general circulation models
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
ERIC Educational Resources Information Center
Bagdadi, Andrea; Orona, Nadia; Fernandez, Eugenio; Altamirano, Anibal; Amorena, Carlos
2010-01-01
We have realized that our Biology undergraduate students learn biological concepts as established truths without awareness of the body of experimental evidence supporting the emerging models as usually presented in handbooks and texts in general. Therefore, we have implemented a laboratory practice in our course of Physiology and Biophysics, aimed…
The Motion of a Leaking Oscillator: A Study for the Physics Class
ERIC Educational Resources Information Center
Rodrigues, Hilário; Panza, Nelson; Portes, Dirceu; Soares, Alexandre
2014-01-01
This paper is essentially about the general form of Newton's second law for variable mass problems. We develop a model for describing the motion of the one-dimensional oscillator with a variable mass within the framework of classroom physics. We present a simple numerical procedure for the solution of the equation of motion of the system to…
Simple Models to Explore Deterrence and More General Influence in the War with al-Qaeda
2010-01-01
Academy of Sciences, 1996).9 Regrettably, few of these enrichments come to most people’s minds when the term “deterrence” is used. Instead, they...may flourish. Efforts to reduce the root-cause factors using “influence” will, in most cases, be uphill and long-term in nature. Moreover, it is
ERIC Educational Resources Information Center
Shen, Linjun
As part of a longitudinal study of the growth of general medical knowledge among osteopathic medical students, a simple, convenient, and accurate vertical equating method was developed for constructing a scale for medical achievement. It was believed that Parts 1, 2, and 3 of the National Board of Osteopathic Medical Examiners' (NBOME) examination…
Nuclear Stability and Nucleon-Nucleon Interactions in Introductory and General Chemistry Textbooks
ERIC Educational Resources Information Center
Millevolte, Anthony
2010-01-01
The nucleus is a highly dense and highly charged substructure of atoms. In the nuclei of all atoms beyond hydrogen, multiple protons are in close proximity to each other in spite of strong electrostatic repulsions between them. The attractive internucleon strong force is described and its origin explained by using a simple quark model for the…
ERIC Educational Resources Information Center
Monroe County School District, Key West, FL.
Intended for use in Florida training programs for caregivers of infants and toddlers with disabilities, this booklet describes some of the more common physical and health impairments that can affect young children. For each disability, the description generally stresses typical characteristics and special requirements. Addresses and telephone…
A Simple Experiment for Determining the Elastic Constant of a Fine Wire
ERIC Educational Resources Information Center
Freeman, W. Larry; Freda, Ronald F.
2007-01-01
Many general physics laboratories involve the use of springs to demonstrate Hooke's law, and much ado is made about how this can be used as a model for describing the elastic characteristics of materials at the molecular or atomic level. In recent years, the proliferation of computers and appropriate sensors has made it possible to demonstrate…
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of information, and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large- and small-message limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
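The reduction idea can be sketched for the simplest case, CBs traversed in series. The particular hyperbola below (latency asymptote for small messages, rate asymptote for large ones) is an assumed stand-in, not necessarily the paper's exact functional form, and the reduction rule shown matches the two asymptotic limits only:

```python
import math

def cb_time(m, t0, r):
    """Assumed service time of one communication block (CB): a hyperbola
    whose asymptotes are the latency t0 (message size m -> 0) and the
    transfer time m / r (m -> infinity)."""
    return math.sqrt(t0 ** 2 + (m / r) ** 2)

def reduce_series(blocks):
    """Equivalent (t0, r) for CBs in series, exact in both message-size
    limits: latencies add, and transfer times m/r add."""
    t0_eq = sum(t0 for t0, _ in blocks)            # small-message limit
    r_eq = 1.0 / sum(1.0 / r for _, r in blocks)   # large-message limit
    return t0_eq, r_eq

blocks = [(1e-4, 5e6), (5e-5, 1e7)]                # (latency s, rate B/s)
t0_eq, r_eq = reduce_series(blocks)

for m in (1.0, 1e8):                               # tiny and huge messages
    exact = sum(cb_time(m, t0, r) for t0, r in blocks)
    approx = cb_time(m, t0_eq, r_eq)
    print(m, abs(exact - approx) / exact)          # small relative error
```

In between the two limits the reduced form is only an approximation, which is exactly the trade-off the abstract's reduction rules are designed to control.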
NASA Astrophysics Data System (ADS)
Nikurashin, Maxim; Gunn, Andrew
2017-04-01
The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind and heat and freshwater fluxes at the surface and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of the simple conceptual theories to describe the MOC simulated with higher complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally-symmetric box are generally in a good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.
Pei, Jiquan; Han, Steve; Liao, Haijun; Li, Tao
2014-01-22
A highly efficient and simple-to-implement Monte Carlo algorithm is proposed for the evaluation of the Rényi entanglement entropy (REE) of the quantum dimer model (QDM) at the Rokhsar-Kivelson (RK) point. It makes possible the evaluation of REE at the RK point to the thermodynamic limit for a general QDM. We apply the algorithm to a QDM defined on the triangular and the square lattice in two dimensions and the simple and the face centered cubic (fcc) lattice in three dimensions. We find the REE on all these lattices follows perfect linear scaling in the thermodynamic limit, apart from an even-odd oscillation in the case of the square lattice. We also evaluate the topological entanglement entropy (TEE) with both a subtraction and an extrapolation procedure. We find the QDMs on both the triangular and the fcc lattice exhibit robust Z2 topological order. The expected TEE of ln2 is clearly demonstrated in both cases. Our large scale simulation also proves the recently proposed extrapolation procedure in cylindrical geometry to be a highly reliable way to extract the TEE of a topologically ordered system.
Complexity analysis based on generalized deviation for financial markets
NASA Astrophysics Data System (ADS)
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method, known as complexity analysis based on generalized deviation, is proposed to investigate the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function provides an exhaustive way of quantifying the rules of the financial market. Robustness of the method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing data from the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.
2018-01-09
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Saddles and dynamics in a solvable mean-field model
NASA Astrophysics Data System (ADS)
Angelani, L.; Ruocco, G.; Zamponi, F.
2003-05-01
We use the saddle approach, recently introduced in the numerical investigation of simple model liquids, in the analysis of a mean-field solvable system. The investigated system is the k-trigonometric model, a k-body interaction mean-field system that generalizes the trigonometric model of Madan and Keyes [J. Chem. Phys. 98, 3342 (1993)] and that was recently introduced to investigate the relationship between thermodynamics and topology of the configuration space. We find a close relationship between the properties of saddles (stationary points of the potential energy surface) visited by the system and the dynamics. In particular, the temperature dependence of saddle order follows that of the diffusivity, both having an Arrhenius behavior at low temperature and a similar shape in the whole temperature range. Our results confirm the general usefulness of the saddle approach in the interpretation of dynamical processes taking place in interacting systems.
General mechanism of two-state protein folding kinetics.
Rollins, Geoffrey C; Dill, Ken A
2014-08-13
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while at the same time being consistent with the near independence of the folding equilibrium constant on size. This model gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s.
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N
2018-02-13
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Social inheritance can explain the structure of animal social networks
Ilany, Amiyaal; Akçay, Erol
2016-01-01
The social network structure of animal populations has major implications for survival, reproductive success, sexual selection and pathogen transmission of individuals. But as of yet, no general theory of social network structure exists that can explain the diversity of social networks observed in nature, and serve as a null model for detecting species and population-specific factors. Here we propose a simple and generally applicable model of social network structure. We consider the emergence of network structure as a result of social inheritance, in which newborns are likely to bond with maternal contacts, and via forming bonds randomly. We compare model output with data from several species, showing that it can generate networks with properties such as those observed in real social systems. Our model demonstrates that important observed properties of social networks, including heritability of network position or assortative associations, can be understood as consequences of social inheritance.
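The generative rule described can be sketched directly: each newborn bonds with its mother, inherits each maternal contact with one probability, and bonds with everyone else with a smaller one. The parameter names `p_n`/`p_r`, the initial clique, the fixed population growth, and the omission of deaths are all simplifications for illustration, not the paper's full model.

```python
import random

def social_inheritance(n0=10, births=200, p_n=0.8, p_r=0.01, seed=2):
    """Grow a network by social inheritance: each newborn bonds with its
    mother, with each of the mother's contacts (prob p_n), and with every
    other individual at random (prob p_r)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n0)}           # start from a small clique
    for i in range(n0):
        for j in range(i + 1, n0):
            adj[i].add(j); adj[j].add(i)
    for newborn in range(n0, n0 + births):
        mother = rng.choice(list(adj))
        contacts = set(adj[mother])               # snapshot before the birth
        adj[newborn] = {mother}
        adj[mother].add(newborn)
        for other in list(adj):
            if other in (newborn, mother):
                continue
            p = p_n if other in contacts else p_r  # inherited vs. random bond
            if rng.random() < p:
                adj[newborn].add(other); adj[other].add(newborn)
    degrees = sorted(len(v) for v in adj.values())
    return adj, degrees

adj, degrees = social_inheritance()
print(degrees[0], degrees[-1])   # a broad degree distribution emerges
```

Even this stripped-down version reproduces the qualitative point of the abstract: offspring of well-connected mothers start well connected, so network position is "inherited" without any individual-level preference rule.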
Emergence of a complex and stable network in a model ecosystem with extinction and mutation.
Tokita, Kei; Yasutomi, Ayumu
2003-03-01
We propose a minimal model of the dynamics of diversity: replicator equations with extinction, invasion and mutation. We numerically study the behavior of this simple model and show that it displays completely different behavior from the conventional replicator equation and the generalized Lotka-Volterra equation. We reach several significant conclusions as follows: (1) a complex ecosystem can emerge when mutants with respect to species-specific interaction are introduced; (2) such an ecosystem possesses strong resistance to invasion; (3) a typical fixation process of mutants is realized through the rapid growth of a group of mutualistic mutants with higher fitness than majority species; (4) a hierarchical taxonomic structure (like family-genus-species) emerges; and (5) the relative abundance of species exhibits a typical pattern widely observed in nature. Several implications of these results are discussed in connection with the relationship of the present model to the generalized Lotka-Volterra equation.
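The basic machinery of such a model can be sketched as discrete-time replicator dynamics with an extinction cutoff and periodic mutant invasion. The mutation scheme (perturb a random parent's interaction row and column), the noise scale, and all thresholds below are invented for illustration, not the paper's scheme.

```python
import random

def diversity_replicator(steps=2000, dt=0.02, x_ext=1e-3,
                         mut_every=200, seed=3):
    """Replicator dynamics dx_i/dt = x_i * ((A x)_i - x . A x) with species
    below x_ext going extinct and a mutant invading every mut_every steps."""
    rng = random.Random(seed)
    A = [[0.0]]                                       # one founder species
    x = [1.0]
    for t in range(1, steps + 1):
        n = len(x)
        f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        phi = sum(xi * fi for xi, fi in zip(x, f))    # mean fitness
        x = [xi * (1.0 + dt * (fi - phi)) for xi, fi in zip(x, f)]
        keep = [i for i, xi in enumerate(x) if xi >= x_ext]   # extinction
        x = [x[i] for i in keep]
        A = [[A[i][j] for j in keep] for i in keep]
        s = sum(x)
        x = [xi / s for xi in x]                      # stay on the simplex
        if t % mut_every == 0:                        # mutant of random parent
            p = rng.randrange(len(x))
            for row in A:                             # new interaction column
                row.append(row[p] + rng.gauss(0.0, 0.3))
            A.append([A[p][j] + rng.gauss(0.0, 0.3) for j in range(len(x))]
                     + [0.0])                         # self-term set to zero
            x = [xi * 0.99 for xi in x] + [0.01]      # mutant enters at 1%
    return x

x = diversity_replicator()
print(len(x), sum(x))   # surviving species count; frequencies sum to 1
```

Tracking which mutant lineages fix over long runs of a sketch like this is how the hierarchical, mutualism-driven fixation patterns in the abstract are observed numerically.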
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced.
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Adaptation, Growth, and Resilience in Biological Distribution Networks
NASA Astrophysics Data System (ADS)
Ronellenfitsch, Henrik; Katifori, Eleni
Highly optimized complex transport networks serve crucial functions in many man-made and natural systems such as power grids and plant or animal vasculature. Often, the relevant optimization functional is nonconvex and characterized by many local extrema. In general, finding the global, or nearly global, optimum is difficult. In biological systems, it is believed that such an optimal state is slowly achieved through natural selection. However, general coarse-grained models for flow networks with local positive feedback rules for the vessel conductivity typically get trapped in low-efficiency local minima. We show how the growth of the underlying tissue, coupled to the dynamical equations for network development, can drive the system to a dramatically improved optimal state. This general model provides a surprisingly simple explanation for the appearance of highly optimized transport networks in biology, such as plant and animal vasculature. In addition, we show how the incorporation of spatially collective fluctuating sources yields a minimal model of realistic reticulation in distribution networks, and thus resilience against damage.
Estimates of runoff using water-balance and atmospheric general circulation models
Wolock, D.M.; McCabe, G.J.
1999-01-01
The effects of potential climate change on mean annual runoff in the conterminous United States (U.S.) are examined using a simple water-balance model and output from two atmospheric general circulation models (GCMs). The two GCMs are from the Canadian Centre for Climate Prediction and Analysis (CCC) and the Hadley Centre for Climate Prediction and Research (HAD). In general, the CCC GCM climate results in decreases in runoff for the conterminous U.S., and the HAD GCM climate produces increases in runoff. These estimated changes in runoff primarily are the result of estimated changes in precipitation. The changes in mean annual runoff, however, mostly are smaller than the decade-to-decade variability in GCM-based mean annual runoff and errors in GCM-based runoff. The differences in simulated runoff between the two GCMs, together with decade-to-decade variability and errors in GCM-based runoff, cause the estimates of changes in runoff to be uncertain and unreliable.
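The kind of simple water-balance model referred to can be sketched as a monthly soil-moisture bucket: actual evapotranspiration is limited by available water, and any storage overflow becomes runoff. The bucket capacity, initial storage, and climatology below are generic illustrations, not the study's calibrated model or data.

```python
def annual_runoff(precip, pet, capacity=150.0):
    """Monthly bucket water balance (a generic sketch): AET is limited by
    available water, soil storage holds up to `capacity` mm, and any
    surplus becomes runoff. Storage starts full (an assumption)."""
    storage, runoff = capacity, 0.0
    for p, e in zip(precip, pet):
        available = storage + p
        aet = min(e, available)           # actual evapotranspiration
        storage = available - aet
        if storage > capacity:            # bucket overflow -> runoff
            runoff += storage - capacity
            storage = capacity
    return runoff

# illustrative monthly climatology (mm): wet winters, dry summers
precip = [90, 80, 70, 60, 50, 30, 20, 20, 40, 60, 80, 90]
pet    = [10, 15, 30, 50, 80, 110, 130, 120, 80, 40, 20, 10]

base = annual_runoff(precip, pet)
drier = annual_runoff([0.9 * p for p in precip], pet)
print(base, drier)  # reduced precipitation lowers simulated runoff
```

Running a bucket like this once with each GCM's perturbed precipitation and PET is, in miniature, the experiment the abstract describes: the sign of the runoff change follows the sign of the precipitation change.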
Discrete is it enough? The revival of Piola-Hencky keynotes to analyze three-dimensional Elastica
NASA Astrophysics Data System (ADS)
Turco, Emilio
2018-04-01
Complex problems such as those concerning the mechanics of materials can be confronted only by means of numerical simulation. Analytical methods are useful for building guidelines or reference solutions but, for general cases of technical interest, problems have to be solved numerically, especially in the case of large displacements and deformations. Continuous models probably arose to produce inspiring examples and stemmed from homogenization techniques. These techniques allowed for the solution of some paradigmatic examples but, in general, always require a discretization method for solving the problems dictated by applications. Therefore, and taking into account that computing power is nowadays cheap and widely available, the question arises: why not use a discrete model for 3D beams directly? In other words, it could be interesting to formulate a discrete model without using an intermediate continuum one, as the latter has to be discretized in the end anyway. These simple considerations immediately evoke some very basic models developed many years ago, when computing power was practically nonexistent but the problem of finding simple solutions to beam deformation problems was already emerging. Indeed, in recent years the keynotes of Hencky and Piola have attracted renewed attention [see, one for all, the work (Turco et al. in Zeitschrift für Angewandte Mathematik und Physik 67(4):1-28, 2016)]: generalizing their results, the present paper presents and discusses a novel directly discrete three-dimensional beam model in the framework of geometrically nonlinear analysis. Using a stepwise algorithm based essentially on Newton's method to compute the extrapolations and on Riks' arc-length method to perform the corrections, we obtain numerical simulations showing the computational effectiveness of the presented model: indeed, it strikes a convenient balance between accuracy and computational cost.
Hot cheese: a processed Swiss cheese model.
Li, Y; Thimbleby, H
2014-01-01
James Reason's classic Swiss cheese model is a vivid and memorable way to visualise how patient harm happens only when all system defences fail. Although Reason's model has been criticised for its simplicity and static portrait of complex systems, its use has been growing, largely because of the direct clarity of its simple and memorable metaphor. A more general, more flexible and equally memorable model of accident causation in complex systems is needed. We present the hot cheese model, which is more realistic, particularly in portraying defence layers as dynamic and active: more defences may cause more hazards. The hot cheese model, being more flexible, encourages deeper discussion of incidents than the simpler Swiss cheese model permits.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
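For the logistic case, the recipe can be sketched as follows. The two-proportion normal-approximation power formula and the centring of the two log-odds around the overall value are simplifying assumptions layered on top of the equivalence idea (the paper instead fixes the overall expected number of events):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logistic_power(beta, sd_x, p_overall, n_total, alpha=0.05):
    """Approximate power for testing slope beta in logistic regression via
    an equivalent two-sample problem: two groups of n/2 whose log-odds
    differ by beta * 2 * sd_x, centred on the overall log-odds."""
    delta = 2.0 * beta * sd_x                    # log-odds difference
    lo = math.log(p_overall / (1.0 - p_overall))
    p1 = 1.0 / (1.0 + math.exp(-(lo - delta / 2.0)))
    p2 = 1.0 / (1.0 + math.exp(-(lo + delta / 2.0)))
    n = n_total / 2.0
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE of p2 - p1
    z_alpha = 1.959963984540054                  # two-sided 5% critical value
    return norm_cdf(abs(p2 - p1) / se - z_alpha)

# hypothetical design: slope 0.5 per SD of x, 30% overall response, n = 200
print(logistic_power(beta=0.5, sd_x=1.0, p_overall=0.3, n_total=200))
```

The appeal of the approach is visible in the sketch: once the equivalent two-sample problem is written down, any standard two-sample power formula can be reused for the regression setting.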
Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.
Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L
2014-05-20
Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
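The FENL balance described above can be sketched in a few lines. This is an illustrative toy, not the published calibration: the threshold N/P ratio and retention fraction are placeholder values, and fixation is modelled as simply topping the N load up to the threshold ratio.

```python
def fenl_n_export(n_load, p_load, np_threshold=20.0, retention=0.5):
    """Toy steady-state lake N budget in the spirit of FENL (illustrative
    parameter values): cyanobacteria fix nitrogen whenever the N/P ratio
    of the loading falls below np_threshold, closing the gap to that
    ratio; a fixed fraction of the effective load is retained/lost."""
    n_fixed = max(0.0, np_threshold * p_load - n_load)  # fixation closes the N/P gap
    return (1 - retention) * (n_load + n_fixed)
```

Below the threshold (n_load < np_threshold * p_load), export is flat at (1 - retention) * np_threshold * p_load, reproducing the abstract's point that export is not reduced proportionally with loading.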
Simple projects guidebook : federal-aid procedure for simple projects
DOT National Transportation Integrated Search
2002-06-01
Experience has shown that a simple project generally 1) does not have any right-of-way involvement and 2) has a Programmatic Categorical Exclusion or Categorical Exclusion environmental determination. Page 7 outlines the definition of simple projects...
A consistent framework for Horton regression statistics that leads to a modified Hack's law
Furey, P.R.; Troutman, B.M.
2008-01-01
A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
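The modified Hack's law can be illustrated with an ordinary least-squares fit in which mainstream length depends on both drainage area and Strahler order. This is only a sketch of the functional form; the paper's generalized regression model has its own multivariate error structure.

```python
import numpy as np

def modified_hack_fit(area, length, order):
    """Least-squares fit of a modified Hack's law of the form
    ln L = b0 + b1 * ln A + b2 * omega, where omega is Strahler order.
    Illustrative only: returns (intercept, Hack exponent, order coefficient)."""
    X = np.column_stack([np.ones_like(area), np.log(area), order])
    coef, *_ = np.linalg.lstsq(X, np.log(length), rcond=None)
    return coef
```

On synthetic data generated from the same functional form, the fit recovers the generating coefficients.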
Theory of inhomogeneous quantum systems. III. Variational wave functions for Fermi fluids
NASA Astrophysics Data System (ADS)
Krotscheck, E.
1985-04-01
We develop a general variational theory for inhomogeneous Fermi systems such as the electron gas in a metal surface, the surface of liquid 3He, or simple models of heavy nuclei. The ground-state wave function is expressed in terms of two-body correlations, a one-body attenuation factor, and a model-system Slater determinant. Massive partial summations of cluster expansions are performed by means of Born-Green-Yvon and hypernetted-chain techniques. An optimal single-particle basis is generated by a generalized Hartree-Fock equation in which the two-body correlations screen the bare interparticle interaction. The optimization of the pair correlations leads to a state-averaged random-phase-approximation equation and a strictly microscopic determination of the particle-hole interaction.
General ecological models for human subsistence, health and poverty.
Ngonghala, Calistus N; De Leo, Giulio A; Pascual, Mercedes M; Keenan, Donald C; Dobson, Andrew P; Bonds, Matthew H
2017-08-01
The world's rural poor rely heavily on their immediate natural environment for subsistence and suffer high rates of morbidity and mortality from infectious diseases. We present a general framework for modelling subsistence and health of the rural poor by coupling simple dynamic models of population ecology with those for economic growth. The models show that feedbacks between the biological and economic systems can lead to a state of persistent poverty. Analyses of a wide range of specific systems under alternative assumptions show the existence of three possible regimes corresponding to a globally stable development equilibrium, a globally stable poverty equilibrium and bistability. Bistability consistently emerges as a property of generalized disease-economic systems for about a fifth of the feasible parameter space. The overall proportion of parameters leading to poverty is larger than that resulting in healthy/wealthy development. All the systems are found to be most sensitive to human disease parameters. The framework highlights feedbacks, processes and parameters that are important to measure in studies of rural poverty to identify effective pathways towards sustainable development.
Matter-coupled de Sitter supergravity
NASA Astrophysics Data System (ADS)
Kallosh, R. E.
2016-05-01
The de Sitter supergravity describes the interaction of supergravity with general chiral and vector multiplets and also one nilpotent chiral multiplet. The extra universal positive term in the potential, generated by the nilpotent multiplet and corresponding to the anti-D3 brane in string theory, is responsible for the de Sitter vacuum stability in these supergravity models. In the flat-space limit, these supergravity models include the Volkov-Akulov model with a nonlinearly realized supersymmetry. We generalize the rules for constructing the pure de Sitter supergravity action to the case of models containing other matter multiplets. We describe a method for deriving the closed-form general supergravity action with a given Kähler potential K, superpotential W, and vector matrix f_AB interacting with a nilpotent chiral multiplet. It has the potential V = e^K(|F|^2 + |DW|^2 - 3|W|^2), where F is the auxiliary field of the nilpotent multiplet and is necessarily nonzero. The de Sitter vacua are present under the simple condition that |F|^2 - 3|W|^2 > 0. We present an explicit form of the complete action in the unitary gauge.
NASA Astrophysics Data System (ADS)
de Jong, Kenneth; Silbert, Noah; Park, Hanyong
2004-05-01
Experimental models of cross-language perception and second-language acquisition (such as PAM and SLM) typically treat language differences in terms of whether the two languages share phonological segmental categories. Linguistic models, by contrast, generally examine properties which cross classify segments, such as features, rules, or prosodic constraints. Such models predict that perceptual patterns found for one segment will generalize to other segments of the same class. This paper presents perceptual identifications of Korean listeners to a set of voiced and voiceless English stops and fricatives in various prosodic locations to determine the extent to which such generality occurs. Results show some class-general effects; for example, voicing identification patterns generalize from stops, which occur in Korean, to nonsibilant fricatives, which are new to Korean listeners. However, when identification is poor, there are clear differences between segments within the same class. For example, in identifying stops and fricatives, both point of articulation and prosodic position bias perceptions; coronals are more often labeled fricatives, and syllable initial obstruents are more often labeled stops. These results suggest that class-general perceptual patterns are not a simple consequence of the structure of the perceptual system, but need to be acquired by factoring out within-class differences.
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1982-01-01
A simple expression for the capacitance C(V) associated with the transition region of a p-n junction under forward bias is derived by phenomenological reasoning. The treatment of C(V) is based on the conventional Shockley equations, and the resulting simpler expressions for C(V) are in general accord with previous analytical and numerical results. C(V) consists of two components resulting from changes in majority carrier concentration and from free hole and electron accumulation in the space-charge region. The space-charge region is conceived as the intrinsic region of an n-i-p structure for a space-charge region markedly wider than the extrinsic Debye lengths at its edges. This region is excited in the sense that the forward bias creates hole and electron densities orders of magnitude larger than those in equilibrium. The recent Shirts-Gordon (1979) modeling of the space-charge region using a dielectric response function is contrasted with the more conventional Schottky-Shockley modeling.
A proposed mathematical model for sleep patterning.
Lawder, R E
1984-01-01
The simple model of a ramp, intersecting a triangular waveform, yields results which conform with seven generalized observations of sleep patterning; including the progressive lengthening of 'rapid-eye-movement' (REM) sleep periods within near-constant REM/nonREM cycle periods. Predicted values of REM sleep time, and of Stage 3/4 nonREM sleep time, can be computed using the observed values of other parameters. The distributions of the actual REM and Stage 3/4 times relative to the predicted values were closer to normal than the distributions relative to simple 'best line' fits. It was found that sleep onset tends to occur at a particular moment in the individual subject's '90-min cycle' (the use of a solar time-scale masks this effect), which could account for a subject with a naturally short sleep/wake cycle synchronizing to a 24-h rhythm. A combined 'sleep control system' model offers quantitative simulation of the sleep patterning of endogenous depressives and, with a different perturbation, qualitative simulation of the symptoms of narcolepsy.
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on natural coordinate formulation. The generalized coordinates are composed of Cartesian coordinates of two points and Cartesian components of two unitary vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to develop natural coordinate formulation to take gravitational force and gravity gradient torque of a rigid body into account, Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanizza, G.; Nugier, F., E-mail: giuseppe.fanizza@ba.infn.it, E-mail: fabienjean.nugier@unibo.it
We present in this paper a new application of the geodesic light-cone (GLC) gauge for weak lensing calculations. Using interesting properties of this gauge, we derive an exact expression of the amplification matrix—involving convergence, magnification and shear—and of the deformation matrix—involving the optical scalars. These expressions are simple and non-perturbative as long as no caustics are created on the past light-cone and are, by construction, free from the thin lens approximation. We apply these general expressions to the example of a Lemaître-Tolman-Bondi (LTB) model with an off-center observer and obtain explicit forms for the lensing quantities as a direct consequence of the non-perturbative transformation between GLC and LTB coordinates. We show their evolution in redshift after a numerical integration, for underdense and overdense LTB models, and interpret their respective variations in the simple non-curvature case.
White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J
2017-09-01
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminant) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
Smoothing Motion Estimates for Radar Motion Compensation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.
2017-07-01
Simple motion models for complex motion environments are often not adequate for keeping radar data coherent. Even perfect motion samples applied to imperfect models may lead to interim calculations exhibiting errors that lead to degraded processing results. Herein we discuss a specific issue involving calculating motion for groups of pulses, with measurements only available at pulse-group boundaries. Acknowledgements: This report was funded by General Atomics Aeronautical Systems, Inc. (GA-ASI) Mission Systems under Cooperative Research and Development Agreement (CRADA) SC08/01749 between Sandia National Laboratories and GA-ASI. GA-ASI, an affiliate of privately-held General Atomics, is a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, radars, and electro-optic and related mission systems, including the Predator®/Gray Eagle®-series and Lynx® Multi-mode Radar.
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
Assimilation of satellite color observations in a coupled ocean GCM-ecosystem model
NASA Technical Reports Server (NTRS)
Sarmiento, Jorge L.
1992-01-01
Monthly average coastal zone color scanner (CZCS) estimates of chlorophyll concentration were assimilated into an ocean global circulation model (GCM) containing a simple model of the pelagic ecosystem. The assimilation was performed in the simplest possible manner, to allow the assessment of whether there were major problems with the ecosystem model or with the assimilation procedure. The current ecosystem model performed well in some regions, but failed in others to assimilate chlorophyll estimates without disrupting important ecosystem properties. This experiment gave insight into those properties of the ecosystem model that must be changed to allow data assimilation to be generally successful, while raising other important issues about the assimilation procedure.
NASA Technical Reports Server (NTRS)
Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.
1973-01-01
The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.
An interactive modelling tool for understanding hydrological processes in lowland catchments
NASA Astrophysics Data System (ADS)
Brauer, Claudia; Torfs, Paul; Uijlenhoet, Remko
2016-04-01
Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS), a rainfall-runoff model for catchments with shallow groundwater (Brauer et al., 2014ab). WALRUS explicitly simulates processes which are important in lowland catchments, such as feedbacks between saturated and unsaturated zone and between groundwater and surface water. WALRUS has a simple model structure and few parameters with physical connotations. Some default functions (which can be changed easily for research purposes) are implemented to facilitate application by practitioners and students. The effect of water management on hydrological variables can be simulated explicitly. The model description and applications are published in open access journals (Brauer et al, 2014). The open source code (provided as R package) and manual can be downloaded freely (www.github.com/ClaudiaBrauer/WALRUS). We organised a short course for Dutch water managers and consultants to become acquainted with WALRUS. We are now adapting this course as a stand-alone tutorial suitable for a varied, international audience. In addition, simple models can aid teachers to explain hydrological principles effectively. We used WALRUS to generate examples for simple interactive tools, which we will present at the EGU General Assembly. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geosci. Model Dev., 7, 2313-2332. C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrol. Earth Syst. Sci., 18, 4007-4028.
Large-eddy simulations with wall models
NASA Technical Reports Server (NTRS)
Cabot, W.
1995-01-01
The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
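The equilibrium wall-model boundary condition mentioned above can be sketched as a small solver: given the velocity at a matching point in the logarithmic region, invert the log law for the friction velocity and hence the wall stress. The constants and the fixed-point scheme below are the standard textbook choices, not taken from this report.

```python
import math

def wall_stress_from_loglaw(u, y, nu, kappa=0.41, B=5.2, rho=1.0):
    """Solve the log law  u/u_tau = (1/kappa) ln(y u_tau / nu) + B
    for the friction velocity u_tau by fixed-point iteration, then
    return the wall shear stress tau_w = rho * u_tau**2. Minimal sketch
    of an equilibrium wall-model boundary condition."""
    u_tau = math.sqrt(nu * u / y)  # viscous-sublayer initial guess
    for _ in range(100):          # contraction; converges quickly in the log layer
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return rho * u_tau ** 2
```

The converged u_tau satisfies the log law to machine precision, which is the condition the LES solver would impose at the matching point each time step.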
Helicopter vibration suppression using simple pendulum absorbers on the rotor blade
NASA Technical Reports Server (NTRS)
Hamouda, M.-N. H.; Pierce, G. A.
1981-01-01
A design procedure is presented for the installation of simple pendulums on the blades of a helicopter rotor to suppress the root reactions. The procedure consists of a frequency response analysis for a hingeless rotor blade excited by a harmonic variation of spanwise airload distributions during forward flight, as well as a concentrated load at the tip. The structural modeling of the blade provides for elastic degrees of freedom in flap and lead-lag bending plus torsion. Simple flap and lead-lag pendulums are considered individually. Using a rational order scheme, the general nonlinear equations of motion are linearized. A quasi-steady aerodynamic representation is used in the formation of the airloads. The solution of the system equations derives from their representation as a transfer matrix. The results include the effect of pendulum tuning on the minimization of the hub reactions.
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
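The double-well experiment described above can be reduced to a one-dimensional sketch, assuming an Euler-Maruyama discretization of the drift dx = (x - x^3) dt and scalar extended-Kalman-filter equations; all parameter values and the function interface here are illustrative, not from the paper.

```python
import numpy as np

def ekf_double_well(obs, t_obs, dt=0.01, q=0.1, r=0.05, x0=1.0, p0=0.1):
    """Scalar extended Kalman filter for the stochastically forced double
    well dx = (x - x**3) dt + noise. obs are direct noisy observations of
    x (observation operator H = 1) at times t_obs; q and r are process
    and observation noise variances. Returns the filtered estimates."""
    x, P = x0, p0
    est, k = [], 0
    n_steps = int(round(t_obs[-1] / dt))
    for i in range(1, n_steps + 1):
        F = 1.0 - 3.0 * x * x        # Jacobian of the drift (linearization)
        x = x + (x - x ** 3) * dt    # forecast step (Euler)
        P = P + (2.0 * F * P + q) * dt
        if k < len(t_obs) and i * dt >= t_obs[k] - 1e-9:
            K = P / (P + r)          # Kalman gain
            x = x + K * (obs[k] - x)
            P = (1.0 - K) * P
            est.append(x)
            k += 1
    return np.array(est)
```

With frequent, accurate observations the filter tracks a regime transition between the wells; as the abstract notes, this ability degrades when observations are sparse or noisy relative to the forcing.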
Connections between survey calibration estimators and semiparametric models for incomplete data
Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.
2012-01-01
Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
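For the special case of 0/1 group indicators, calibration reduces to iterative proportional fitting of the design weights, which can be sketched briefly. This is a toy raking loop, not the general calibration machinery (distance functions, bounded weights) discussed in the paper.

```python
import numpy as np

def calibrate_weights(d, X, totals, n_iter=50):
    """Raking-style calibration sketch: rescale design weights d so that
    weighted totals of the auxiliary indicator matrix X (0/1 group
    membership columns) match the known population totals."""
    w = np.asarray(d, dtype=float).copy()
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            members = X[:, j].astype(bool)
            current = w[members].sum()
            if current > 0:
                w[members] *= totals[j] / current  # match this margin
    return w
```

The calibrated estimator sum(w_i * y_i) then reproduces the known margins exactly and typically improves on the simple Horvitz-Thompson estimator when y is correlated with the auxiliaries.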
NASA Technical Reports Server (NTRS)
Hayden, W. L.; Robinson, L. H.
1972-01-01
Spectral analysis of angle-modulated communication systems is studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on a UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.
Quantum Mechanics, Path Integrals and Option Pricing:. Reducing the Complexity of Finance
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2003-04-01
Quantum Finance represents the application of the techniques of quantum theory (quantum mechanics and quantum field theory) to theoretical and applied finance. After a brief overview of the connection between these fields, we illustrate some of the methods of lattice simulations of path integrals for the pricing of options. The ideas are sketched out for simple models, such as the Black-Scholes model, where analytical and numerical results are compared. Application of the method to nonlinear systems is also briefly overviewed. More general models, for exotic or path-dependent options, are discussed.
Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorissen, Filip; Wetter, Michael; Helsen, Lieve
This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium-sized office building including building envelope, heating, ventilation and air conditioning (HVAC) systems and control strategy can be simulated at a speed five hundred times faster than real time.
Fast trimers in a one-dimensional extended Fermi-Hubbard model
NASA Astrophysics Data System (ADS)
Dhar, A.; Törmä, P.; Kinnunen, J. J.
2018-04-01
We consider a one-dimensional two-component extended Fermi-Hubbard model with nearest-neighbor interactions and mass imbalance between the two species. We study the binding energy of trimers, various observables for detecting them, and expansion dynamics. We generalize the definition of the trimer gap to include the formation of different types of clusters originating from nearest-neighbor interactions. Expansion dynamics reveal rapidly propagating trimers, with speeds exceeding doublon propagation in the strongly interacting regime. We present a simple model for understanding this unique feature of the movement of the trimers, and we discuss the potential for experimental realization.
Accelerating Drug Development: Antiviral Therapies for Emerging Viruses as a Model.
Everts, Maaike; Cihlar, Tomas; Bostwick, J Robert; Whitley, Richard J
2017-01-06
Drug discovery and development is a lengthy and expensive process. Although no single, simple solution can significantly accelerate this process, steps can be taken to avoid unnecessary delays. Using the development of antiviral therapies as a model, we describe options for acceleration that cover target selection, assay development and high-throughput screening, hit confirmation, lead identification and development, animal model evaluations, toxicity studies, regulatory issues, and the general drug discovery and development infrastructure. Together, these steps could result in accelerated timelines for bringing antiviral therapies to market so they can treat emerging infections and reduce human suffering.
NASA Astrophysics Data System (ADS)
Li, K. F.; Yao, K.; Taketa, C.; Zhang, X.; Liang, M. C.; Jiang, X.; Newman, C. E.; Tung, K. K.; Yung, Y. L.
2015-12-01
With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As examples, we apply this model to Titan and Venus. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss the application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.
NASA Astrophysics Data System (ADS)
Li, King-Fai; Yao, Kaixuan; Taketa, Cameron; Zhang, Xi; Liang, Mao-Chang; Jiang, Xun; Newman, Claire; Tung, Ka-Kit; Yung, Yuk L.
2016-04-01
With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic, zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As an example, we apply this model to Titan. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss an application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
Doubly self-consistent field theory of grafted polymers under simple shear in steady state.
Suo, Tongchuan; Whitmore, Mark D
2014-03-21
We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.
Parametrizing growth in dark energy and modified gravity models
NASA Astrophysics Data System (ADS)
Resco, Miguel Aparicio; Maroto, Antonio L.
2018-02-01
It is well known that an extremely accurate parametrization of the growth function of matter density perturbations in ΛCDM cosmology, with errors below 0.25%, is given by f(a) = Ω_m(a)^γ with γ ≃ 0.55. In this work, we show that a simple modification of this expression also provides a good description of growth in modified gravity theories. We consider the model-independent approach to modified gravity in terms of an effective Newton constant written as μ(a, k) = G_eff/G and show that f(a) = β(a) Ω_m(a)^γ provides fits to the numerical solutions with similar accuracy to that of ΛCDM. In the time-independent case with μ = μ(k), simple analytic expressions for β(μ) and γ(μ) are presented. In the time-dependent (but scale-independent) case μ = μ(a), we show that β(a) has the same time dependence as μ(a). As an example, explicit formulas are provided in the Dvali-Gabadadze-Porrati (DGP) model. In the general case, for theories with μ(a, k), we obtain a perturbative expansion for β(μ) around the general relativity case μ = 1 which, for f(R) theories, reaches an accuracy below 1%. Finally, as an example we apply the obtained fitting functions in order to forecast the precision with which future galaxy surveys will be able to measure the μ parameter.
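As an illustrative sketch of the baseline approximation discussed in the abstract (our construction, not the paper's code): the ΛCDM growth rate f(a) = Ω_m(a)^γ with γ ≃ 0.55, assuming a flat ΛCDM background with an illustrative Ω_m0 = 0.3.

```python
# Illustrative sketch (not the paper's code): the standard LambdaCDM
# growth-rate approximation f(a) = Omega_m(a)**gamma with gamma ~ 0.55,
# assuming a flat LambdaCDM background with an illustrative Omega_m0 = 0.3.

def omega_m(a, omega_m0=0.3):
    """Matter density parameter at scale factor a in flat LambdaCDM."""
    return omega_m0 * a ** -3 / (omega_m0 * a ** -3 + 1.0 - omega_m0)

def growth_rate(a, gamma=0.55, omega_m0=0.3):
    """Approximate growth rate f(a) = Omega_m(a)**gamma."""
    return omega_m(a, omega_m0) ** gamma

# Today (a = 1) this gives f ~ 0.51 for Omega_m0 = 0.3; at early times
# Omega_m -> 1, so f -> 1, as expected for matter domination.
```

The paper's modified-gravity extension multiplies this baseline by a prefactor β(a), which is the part that cannot be sketched without the full theory.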
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavignet, A.A.; Wick, C.J.
In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
NASA Astrophysics Data System (ADS)
Kukushkin, A. B.; Sdvizhenskii, P. A.
2017-12-01
The results of accuracy analysis of automodel solutions for Lévy flight-based transport on a uniform background are presented. These approximate solutions have been obtained for Green’s function of the following equations: the non-stationary Biberman-Holstein equation for three-dimensional (3D) radiative transfer in plasma and gases, for various (Doppler, Lorentz, Voigt and Holtsmark) spectral line shapes, and the 1D transport equation with a simple long-tailed step-length probability distribution function with various power-law exponents. The results suggest the possibility of substantial extension of the developed method of automodel solution to other fields far beyond physics.
Linear analysis of auto-organization in Hebbian neural networks.
Carlos Letelier, J; Mpodozis, J
1995-01-01
The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
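A minimal sketch of the kind of dynamics the abstract describes (our construction, not the authors' model): a Hebbian weight update combined with a multiplicative "metabolic" constraint that keeps the total synaptic strength fixed, so one connection can only grow at the expense of the others.

```python
# Minimal illustrative sketch (our construction, not the authors' model): a
# Hebbian weight update combined with a multiplicative "metabolic" constraint
# that keeps the total synaptic strength fixed, so one connection can only
# grow at the expense of the others.

def hebbian_step(w, pre, post, eta=0.1):
    """One Hebbian update followed by renormalization of the total strength."""
    budget = sum(w)
    w = [wi + eta * p * post for wi, p in zip(w, pre)]
    scale = budget / sum(w)
    return [wi * scale for wi in w]   # enforce the fixed resource budget

w = [0.25, 0.25, 0.25, 0.25]
for _ in range(200):
    w = hebbian_step(w, pre=[1.0, 0.0, 0.0, 0.0], post=1.0)  # input 0 active
# The total strength is conserved while the consistently active synapse comes
# to dominate: a simple "order" emerging from the constrained dynamics.
```

The renormalization plays the role of the non-linear metabolic constraint in the abstract; without it, the linear Hebbian rule would let all weights grow without bound.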
NASA Astrophysics Data System (ADS)
Sales, Brian; Sefat, Athena; McGuire, Michael; Mandrus, David
2010-03-01
A simple two-band 3D model of a semimetal is constructed to see which normal state features of the Ba(Fe1-xCox)2As2 superconductors can be qualitatively understood within this framework. The model is able to account in a semiquantitative fashion for the measured magnetic susceptibility, Hall, and Seebeck data, and the low temperature Sommerfeld coefficient for 0
"Shape function + memory mechanism"-based hysteresis modeling of magnetorheological fluid actuators
NASA Astrophysics Data System (ADS)
Qian, Li-Jun; Chen, Peng; Cai, Fei-Long; Bai, Xian-Xu
2018-03-01
A hysteresis model based on "shape function + memory mechanism" is presented and its feasibility is verified through modeling the hysteresis behavior of a magnetorheological (MR) damper. A hysteresis phenomenon in a resistor-capacitor (RC) circuit is first presented and analyzed. In the hysteresis model, the "memory mechanism" originating from the charging and discharging processes of the RC circuit is constructed by adopting a virtual displacement variable and updating laws for the reference points. The "shape function" is derived and generalized from analytical solutions of the simple semi-linear Duhem model. Using this approach, the memory mechanism reveals the essence of the specific Duhem model and the general shape function provides a direct and clear means to fit the hysteresis loop. Within the framework of the "restructured phenomenological model", the original hysteresis operator, i.e., the Bouc-Wen operator, is replaced with the new hysteresis operator. A comparison with the Bouc-Wen operator-based model demonstrates the superior computational efficiency and comparable accuracy of the new hysteresis operator-based model.
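For context, the classical Bouc-Wen operator that the new "shape function + memory mechanism" operator replaces can be integrated numerically in a few lines. This is an illustrative sketch with invented parameters, not the paper's implementation.

```python
import math

# Illustrative sketch of the classical Bouc-Wen hysteresis operator that the
# paper's "shape function + memory mechanism" operator replaces. The
# parameters (A, beta, gamma, n) are invented for illustration.

def bouc_wen(us, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Euler-integrate dz = (A - (beta*sgn(z*du) + gamma)*|z|**n) * du."""
    z, zs, prev = 0.0, [], us[0]
    for u in us:
        du = u - prev
        sgn = 1.0 if z * du > 0 else -1.0
        z += (A - (beta * sgn + gamma) * abs(z) ** n) * du
        zs.append(z)
        prev = u
    return zs

# One period of a sinusoidal displacement input: the up- and down-sweeps pass
# the same displacement u ~ 0.5 with different operator outputs z, i.e. the
# response traces a hysteresis loop.
us = [math.sin(0.001 * i) for i in range(6284)]
zs = bouc_wen(us)
```

The step-by-step integration of the internal state z is what makes the Bouc-Wen operator comparatively expensive, which is the motivation the abstract gives for the more efficient shape-function operator.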
Simulation of seasonal anomalies of atmospheric circulation using coupled atmosphere-ocean model
NASA Astrophysics Data System (ADS)
Tolstykh, M. A.; Diansky, N. A.; Gusev, A. V.; Kiktev, D. B.
2014-03-01
A coupled atmosphere-ocean model intended for the simulation of coupled circulation at time scales up to a season is developed. The semi-Lagrangian atmospheric general circulation model of the Hydrometeorological Centre of Russia, SLAV, is coupled with the sigma model of ocean general circulation developed at the Institute of Numerical Mathematics, Russian Academy of Sciences (INM RAS), INMOM. Using this coupled model, numerical experiments on ensemble modeling of the atmosphere and ocean circulation for up to 4 months are carried out using real initial data for all seasons of an annual cycle in 1989-2010. The results of these experiments are compared to those of the SLAV model with a simple evolution of the sea surface temperature. A comparative analysis of seasonally averaged anomalies of atmospheric circulation shows the promise of the coupled model for forecasting. Using the example of the El Niño phenomenon of 1997-1998, it is shown that the coupled model forecasts the seasonally averaged anomalies for the period of the nonstationary El Niño phase significantly better.
On double shearing in frictional materials
NASA Astrophysics Data System (ADS)
Teunissen, J. A. M.
2007-01-01
This paper evaluates the mechanical behaviour of yielding frictional geomaterials. The general Double Shearing model describes this behaviour. Non-coaxiality of stress and plastic strain increments for plane strain conditions forms an important part of this model. The model is based on a micro-mechanical and macro-mechanical formulation. The stress-dilatancy theory in the model combines the mechanical behaviour on both scales. It is shown that the general Double Shearing formulation comprises other Double Shearing models. These models differ in the relation between the mobilized friction and dilatancy and in non-coaxiality. In order to describe reversible and irreversible deformations, the general Double Shearing model is extended with elasticity. The failure of soil masses is controlled by shear mechanisms, which are determined by the conditions along the shear band. The shear stress ratio of a shear band depends on the orientation of the stress in the shear band. There is a difference between the peak strength and the residual strength in the shear band. While the peak strength depends on strength properties only, the residual strength depends upon the yield conditions and the plastic deformation mechanisms and is generally considerably lower than the maximum strength. It is shown that non-coaxial models give non-unique solutions for the shear stress ratio on the shear band. The Double Shearing model is applied to various failure problems of soils such as the direct simple shear test, the biaxial test, infinite slopes, interfaces, and the calculation of the undrained shear strength.
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
NASA Astrophysics Data System (ADS)
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally-efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed us to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Reinforced communication and social navigation: Remember your friends and remember yourself
NASA Astrophysics Data System (ADS)
Mirshahvalad, A.; Rosvall, M.
2011-09-01
In social systems, people communicate with each other and form groups based on their interests. The pattern of interactions, the network, and the ideas that flow on the network naturally evolve together. Researchers use simple models to capture the feedback between changing network patterns and ideas on the network, but little is understood about the role of past events in the feedback process. Here, we introduce a simple agent-based model to study the coupling between people’s ideas and social networks, and to better understand the role of history in dynamic social networks. We measure how information about ideas can be recovered from information about network structure and, the other way around, how information about network structure can be recovered from information about ideas. We find that it is, in general, easier to recover ideas from the network structure than vice versa.
Data requirements to model creep in 9Cr-1Mo-V steel
NASA Technical Reports Server (NTRS)
Swindeman, R. W.
1988-01-01
Models for creep behavior are helpful in predicting the response of components experiencing stress redistributions due to cyclic loads, and often the analyst would like information that correlates strain rate with history, assuming simple hardening rules such as those based on time or strain. At the same time, much progress has been made in the development of unified constitutive equations that include both hardening and softening through the introduction of state variables whose evolutions are history dependent. Although it is difficult to estimate specific data requirements for general application, there are several simple measurements that can be made in the course of creep testing and results reported in data bases. The issue is whether or not such data could be helpful in developing unified equations, and, if so, how such data should be reported. Data produced on a martensitic 9Cr-1Mo-V-Nb steel were examined with these issues in mind.
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that its different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). The extra-axonal fraction can be estimated as well. The results suggest that our model is oversimplified, yet at the same time they evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence.
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept
NASA Technical Reports Server (NTRS)
Brock, J. S.; Ng, W. F.
1992-01-01
Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. The method is general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.
Generalized fractional diffusion equations for subdiffusion in arbitrarily growing domains
NASA Astrophysics Data System (ADS)
Angstmann, C. N.; Henry, B. I.; McGann, A. V.
2017-10-01
The ubiquity of subdiffusive transport in physical and biological systems has led to intensive efforts to provide robust theoretical models for these phenomena. These models often involve fractional derivatives. The important physical extension of this work to processes occurring in growing materials has proven highly nontrivial. Here we derive evolution equations for modeling subdiffusive transport in a growing medium. The derivation is based on a continuous-time random walk. The concise formulation of these evolution equations requires the introduction of a new, comoving, fractional derivative. The implementation of the evolution equation is illustrated with a simple model of subdiffusing proteins in a growing membrane.
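The continuous-time random walk underlying the derivation can be illustrated numerically. This is an assumption-laden sketch (a fixed, non-growing domain, heavy-tailed Pareto waiting times, unit jumps), not the paper's growing-medium model.

```python
import random

# Illustrative sketch (an assumption, not the paper's growing-domain model):
# a continuous-time random walk (CTRW) with heavy-tailed waiting times, the
# standard microscopic picture behind fractional subdiffusion equations.

def ctrw_positions(t_max, alpha=0.5, rng=None):
    """Walker position sampled at integer observation times 0..t_max."""
    rng = rng or random.Random(0)
    jump_times, t = [], 0.0
    while t <= t_max:
        t += rng.paretovariate(alpha)  # waiting-time tail ~ t**-(1 + alpha)
        jump_times.append(t)
    out, x, j = [], 0, 0
    for obs in range(t_max + 1):
        while j < len(jump_times) and jump_times[j] <= obs:
            x += rng.choice((-1, 1))   # unit jump after each waiting period
            j += 1
        out.append(x)
    return out

walks = [ctrw_positions(1000, rng=random.Random(seed)) for seed in range(500)]
msd_100 = sum(w[100] ** 2 for w in walks) / len(walks)
msd_1000 = sum(w[1000] ** 2 for w in walks) / len(walks)
# For alpha = 0.5 the mean-squared displacement grows roughly like t**0.5, so
# msd_1000 / msd_100 stays well below the factor of 10 of normal diffusion.
```

In the ensemble average the mean-squared displacement grows sublinearly, which is the subdiffusive signature the fractional evolution equations capture.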
A flexible motif search technique based on generalized profiles.
Bucher, P; Karplus, K; Moeri, N; Hofmann, K
1996-03-01
A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.
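A deliberately simple stand-in for one of the descriptor types that generalized profiles subsume: scoring a DNA sequence against a position weight matrix, with support for multiple motif instances in the same sequence. The motif and log-odds scores below are invented for illustration.

```python
# Illustrative sketch (a deliberately simple stand-in for generalized
# profiles): scoring a DNA sequence against a position weight matrix, one of
# the motif-descriptor types the generalized profile syntax subsumes. The
# motif and log-odds scores below are invented.

PWM = [  # one column of log-odds scores per motif position (consensus "ACG")
    {"A": 2.0, "C": -1.0, "G": -1.0, "T": -1.0},
    {"A": -1.0, "C": 2.0, "G": -1.0, "T": -1.0},
    {"A": -1.0, "C": -1.0, "G": 2.0, "T": -1.0},
]

def motif_hits(seq, pwm, threshold=3.0):
    """All windows scoring >= threshold: multiple matches per sequence."""
    hits = []
    for i in range(len(seq) - len(pwm) + 1):
        score = sum(col[base] for col, base in zip(pwm, seq[i:i + len(pwm)]))
        if score >= threshold:
            hits.append((i, score))
    return hits

hits = motif_hits("TTACGTTACGTT", PWM)  # two disjoint "ACG" occurrences
```

Returning every above-threshold window, rather than only the best one, mirrors the paper's emphasis on finding multiple instances of a motif in the same sequence.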
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
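The discrete scheme that the method builds on can be sketched in a few lines. This is the textbook tabu-search idea on an integer lattice, not the paper's continuous-function algorithm, and the test function below is invented.

```python
# Conceptual sketch of taboo (tabu) search on a simple discrete function; this
# is the textbook discrete scheme, not the paper's continuous-function
# algorithm, and the test function below is invented.

def tabu_search(f, start, neighbors, iters=200, tabu_size=5):
    """Neighborhood search that forbids recently visited solutions."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)  # move even if it is uphill
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                   # forget the oldest taboo entry
        if f(current) < f(best):
            best = current
    return best

# A function on the integers with a local minimum at x = -6 (value 1) and the
# global minimum at x = 5 (value 0). Plain descent from x = -8 would stall at
# x = -6; the taboo list forces the search out of that basin.
f = lambda x: min((x + 6) ** 2 + 1, (x - 5) ** 2)
best = tabu_search(f, start=-8, neighbors=lambda x: [x - 1, x + 1])
```

Because the recently visited points are taboo, the search is forced uphill out of the local basin and keeps walking until it finds the global minimum, exactly the escape-from-entrapment behavior the abstract describes.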
Competing forces in five-dimensional fermion condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Jongmin; Peskin, Michael E.
We study fermion condensation in the Randall-Sundrum background as a setting for composite Higgs models. We formalize the computation of the Coleman-Weinberg potential and present a simple, general formula. Using this tool, we study the competition of fermion multiplets with different boundary conditions, to find conditions for creating a little hierarchy with the Higgs field expectation value much smaller than the intrinsic Randall-Sundrum mass scale.
Variational Approach in the Theory of Liquid-Crystal State
NASA Astrophysics Data System (ADS)
Gevorkyan, E. V.
2018-03-01
The variational calculus of Leonhard Euler underlies much of modern mathematics and theoretical physics. The efficiency of the variational approach in the statistical theory of the liquid-crystal state, and more generally in condensed-state theory, is shown. In particular, the developed approach allows us to correctly introduce effective pair interactions and to optimize simple models of liquid crystals with the help of realistic intermolecular potentials.
ERIC Educational Resources Information Center
Slisko, Josip; Cruz, Adrian Corona
2013-01-01
There is a general agreement that critical thinking is an important element of 21st century skills. Although critical thinking is a very complex and controversial conception, many would accept that recognition and evaluation of assumptions is a basic critical-thinking process. When students use a simple mathematical model to reason quantitatively…
A Simple Economic Model of Cocaine Production
1994-01-01
been important, often dominant, in U.S. relationships with the Andean region and, at times, with Burma, Mexico, Pakistan, and Turkey. The effort to...control drug production overseas has generally been viewed as ineffective. Mexico, the most cooperative of the source countries, continues to produce...attractive option, since the program would not face the difficulties presented in Mexico (the only country with an active program) of small, dispersed
Thermodynamics of Thomas-Fermi screened Coulomb systems
NASA Technical Reports Server (NTRS)
Firey, B.; Ashcroft, N. W.
1977-01-01
We obtain, in closed analytic form, estimates of the thermodynamic properties of classical fluids with pair potentials of the Yukawa type, with special reference to dense fully ionized plasmas with Thomas-Fermi or Debye-Hückel screening. We further generalize the hard-sphere perturbative approach used for similarly screened two-component mixtures, and demonstrate phase separation in this simple model of a liquid mixture of metallic helium and hydrogen.
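The Yukawa pair potential at the heart of the abstract is simple enough to state in code. This sketch uses reduced units with arbitrary illustrative values for the charge and screening length.

```python
import math

# Illustrative sketch of the Yukawa (screened Coulomb) pair potential the
# abstract refers to, v(r) = (q^2 / r) * exp(-r / lambda), in reduced units;
# the charge and screening length here are arbitrary illustrative values.

def yukawa(r, q2=1.0, screening_length=1.0):
    """Thomas-Fermi/Debye-style screened Coulomb potential."""
    return q2 / r * math.exp(-r / screening_length)

def coulomb(r, q2=1.0):
    """Bare Coulomb potential for comparison."""
    return q2 / r

# Screening barely changes the short-range repulsion but cuts the long-range
# tail off exponentially.
```

The exponential cutoff of the long-range tail is what makes hard-sphere perturbation theory a natural starting point for the thermodynamics of such fluids.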
Antiparticle cloud temperatures for antihydrogen experiments
NASA Astrophysics Data System (ADS)
Bianconi, A.; Charlton, M.; Lodi Rizzini, E.; Mascagna, V.; Venturelli, L.
2017-07-01
A simple rate-equation description of the heating and cooling of antiparticle clouds under conditions typical of those found in antihydrogen formation experiments is developed and analyzed. We include single-particle collisional, radiative, and cloud expansion effects and, from the modeling calculations, identify typical cooling phenomena and trends and relate these to the underlying physics. Some general rules of thumb of use to experimenters are derived.
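The flavor of such a rate-equation description can be sketched with a single cooling channel and a constant heating term. The functional form and parameter values below are assumptions for illustration, not the paper's equations.

```python
# Illustrative rate-equation sketch (an assumed form, not the paper's
# equations): a cloud temperature T relaxing through a cooling channel with
# time constant tau while a constant heating power H acts, integrated with
# simple Euler steps.

def evolve_temperature(t_end, dt=0.01, T0=1000.0, tau=1.0, heating=50.0):
    """Integrate dT/dt = -T/tau + heating; equilibrium is T = heating * tau."""
    T = T0
    for _ in range(int(t_end / dt)):
        T += dt * (-T / tau + heating)
    return T

# The cloud cools from T0 toward the heating-limited equilibrium temperature,
# the kind of trend the modeling calculations in the abstract identify.
```

The competition between the cooling rate 1/τ and the heating term sets the final temperature, which is the sort of rule of thumb the abstract derives for experimenters.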
Greenhouse Effect: Temperature of a Metal Sphere Surrounded by a Glass Shell and Heated by Sunlight
ERIC Educational Resources Information Center
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the "z"-axis. This development is a generalization of the simple treatment of the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yongpeng; Northwest Institute of Nuclear Technology, P.O. Box 69-13, Xi'an 710024; Liu Guozhi
In this paper, the Child-Langmuir law and Langmuir-Blodgett law are generalized to the relativistic regime by a simple method. Two classical laws suitable for the nonrelativistic regime are modified to simple approximate expressions applicable for calculating the space-charge-limited currents of one-dimensional steady-state planar diodes and coaxial diodes under the relativistic regime. The simple approximate expressions, extending the Child-Langmuir law and Langmuir-Blodgett law to fit the full range of voltage, have small relative errors less than 1% for one-dimensional planar diodes and less than 5% for coaxial diodes.
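For reference, the classical nonrelativistic Child-Langmuir law that the paper generalizes can be sketched as follows; this is the standard textbook expression for a planar diode, not the paper's relativistic approximation.

```python
import math

# Classical (nonrelativistic) Child-Langmuir law for a 1D planar diode:
# J = (4 eps0 / 9) * sqrt(2 e / m) * V**(3/2) / d**2
# This is the textbook starting point, not the relativistic extension.

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_current_density(V, d):
    """Space-charge-limited current density (A/m^2) for gap voltage V (volts)
    and anode-cathode spacing d (meters)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * V**1.5 / d**2
```

The V^(3/2) scaling is the signature of the law: quadrupling the voltage raises the space-charge-limited current by a factor of eight, a relation the relativistic modification deviates from at high voltage.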
NASA Astrophysics Data System (ADS)
Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.
2013-12-01
Aquatic habitat models use flow variables, predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models, to simulate aquatic habitat quality. Studies of how hydrodynamic model dimensionality affects predicted habitat quality are limited. Here we analyze the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems in central Idaho (USA): a straight pool-riffle reach (South Fork Boise River), small sinuous pool-riffle streams in a large meadow (Bear Valley Creek), and a steep, confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the choice between 1D and 2D modeling affects both the spatial distribution of habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small but depended on stream type. Nevertheless, the differences in spatially distributed habitat quality are considerable in all streams. The steep, confined plane-bed stream showed larger differences between habitat quality defined with 1D and 2D flow models than the streams with well-defined macro-topographies, such as pool-riffle bed forms. Key words: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches
NASA Astrophysics Data System (ADS)
Pascaud, J. M.; Brossard, J.; Lombard, J. M.
1999-09-01
This work presents a simple model, based on molecular collision theory and easily usable in an industrial environment, to predict the evolution of the thermodynamic characteristics of the combustion of two-phase mixtures in a closed or vented vessel. The basic elements of the model were developed for the ignition and combustion of propulsive powders and adapted, with appropriate parameters, to simplified kinetics. A simple representation of the combustion phenomena based on energy transfers and the action of specific molecules is presented. The model is generalized to various mixtures such as dust suspensions, liquid fuel drops, and hybrid mixtures composed of dust with a gaseous supply such as methane or propane, in the general case of vented explosions. The pressure venting due to vent breaking is calculated from the thermodynamic characteristics given by the model, taking into account the mass rate of discharge of the different products deduced from the standard orifice equations. The application conditions determine the fuel ratio of the mixtures used, the nature of the chemical kinetics, and the calculation of a universal set of parameters. The model allows one to study the influence of the fuel concentration and the supply of gaseous additives, of the vessel volume (2400 l ≤ V_b ≤ 250,000 l), and of the venting pressure or the vent area. The first results have been compared with various experimental works available for two-phase mixtures and indicate quite accurate predictions.
NASA Astrophysics Data System (ADS)
Ishizaki, N. N.; Dairaku, K.; Ueno, G.
2016-12-01
We have developed a statistical downscaling method for estimating probabilistic climate projections using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of GCM weights reflects the characteristics of the observed variability at each grid point. Cross-validation was conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a spatially high-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were smaller than those derived from a simple ensemble mean of GCMs and from a cumulative-distribution-function-based bias correction method. Projected changes in mean temperature and precipitation were broadly similar to those of the simple ensemble mean of GCMs. Mean precipitation was generally projected to increase, associated with increased temperature and the consequent increase in atmospheric moisture content. Weakening of the winter monsoon may drive precipitation decreases in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during summer and on the Japan Sea side during winter. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.
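A toy version of the per-grid-point weighting idea can be sketched with ordinary least squares; the data and weights below are synthetic, and the paper's actual regression form and cross-validation procedure are not reproduced.

```python
import numpy as np

# Toy sketch (illustrative assumption, not the paper's method): find weights
# for an ensemble of GCMs by least squares against observations at one grid
# point. All data are synthetic.

rng = np.random.default_rng(0)
n_time, n_models = 120, 3
gcms = rng.normal(size=(n_time, n_models))   # simulated monthly anomalies
true_w = np.array([0.6, 0.3, 0.1])
obs = gcms @ true_w                          # noiseless synthetic "observations"

# Weighted combination of models that best reproduces the observed variation
w, *_ = np.linalg.lstsq(gcms, obs, rcond=None)
```

With noiseless synthetic observations the fitted weights recover the generating ones exactly; with real observations the fit would be inexact, which is why the cross-validation step described above is needed.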
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
Mechanochemical pattern formation in simple models of active viscoelastic fluids and solids
NASA Astrophysics Data System (ADS)
Alonso, Sergio; Radszuweit, Markus; Engel, Harald; Bär, Markus
2017-11-01
The cytoskeleton of the organism Physarum polycephalum is a prominent example of a complex active viscoelastic material wherein stresses induce flows along the organism as a result of the action of molecular motors and their regulation by calcium ions. Experiments in Physarum polycephalum have revealed a rich variety of mechanochemical patterns including standing, traveling and rotating waves that arise from instabilities of spatially homogeneous states without gradients in stresses and resulting flows. Herein, we investigate simple models where an active stress induced by molecular motors is coupled to a model describing the passive viscoelastic properties of the cellular material. Specifically, two models for viscoelastic fluids (Maxwell and Jeffrey model) and two models for viscoelastic solids (Kelvin-Voigt and Standard model) are investigated. Our focus is on the analysis of the conditions that cause destabilization of spatially homogeneous states and the related onset of mechano-chemical waves and patterns. We carry out linear stability analyses and numerical simulations in one spatial dimension for different models. In general, sufficiently strong activity leads to waves and patterns. The primary instability is stationary for all active fluids considered, whereas all active solids have an oscillatory primary instability. All instabilities found are of long-wavelength nature reflecting the conservation of the total calcium concentration in the models studied.
A-Priori Tuning of Modified Magnussen Combustion Model
NASA Technical Reports Server (NTRS)
Norris, A. T.
2016-01-01
In the application of CFD to turbulent reacting flows, one of the main limitations to predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, however, at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead, but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique will be described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need to have experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational costs for simulations.
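The baseline Magnussen (eddy-dissipation) closure being tuned can be sketched as follows; the constants A and B used here are the conventional textbook defaults, not the tuned values the paper derives.

```python
# Sketch of the standard Magnussen eddy-dissipation reaction-rate closure,
# the kind of simple model discussed above. A=4.0 and B=0.5 are the
# conventional default constants, not the paper's tuned values.

def magnussen_rate(eps, k, Y_fuel, Y_ox, Y_prod, s, A=4.0, B=0.5):
    """Mean fuel consumption rate, limited by the deficient quantity:
    r = A * (eps/k) * min(Y_fuel, Y_ox/s, B*Y_prod/(1+s)),
    where eps/k is the turbulence mixing frequency and s is the
    stoichiometric oxidizer-to-fuel mass ratio."""
    return A * (eps / k) * min(Y_fuel, Y_ox / s, B * Y_prod / (1.0 + s))
```

The min() is the whole model: the reaction proceeds at the turbulent mixing rate of whichever species is deficient, with no kinetic mechanism at all, which is why the paper's a-priori tuning against stirred-reactor kinetics (and its proposed kinetic upper limit on the rate) is needed for predictive use.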
Renton, Michael
2011-01-01
Background and aims Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. 
Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
The Sun lightens and enlightens: high noon shadow measurements
NASA Astrophysics Data System (ADS)
Babović, Vukota; Babović, Miloš
2014-11-01
Contemporary physicists and science experts include Eratosthenes’ measurement of the Earth's circumference as one of the most beautiful experiments ever performed in physics. Upon revisiting this famous event in the history of science, we find that some interesting generalizations are possible. On the basis of a rather simple model of the Earth's insolation, we have managed, using some advanced mathematics, to derive a new formula for determining the length of the year, generalized in such a way that it can be used for all planets with sufficiently small eccentricity of the orbit and for all locations with daily sunrises and sunsets. The practical technique that our formula offers is simple to perform, entirely Eratosthenian in spirit, and only requires the angle of the noonday sun to be found on successive days around an equinox. Our results show that this kind of approach to the problem of the Earth's insolation deserves to be included in university courses, especially those which cover astronomy and environmental physics.
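The Eratosthenian core of the technique reduces to a single proportion; the sketch below uses the classical textbook numbers (7.2 degrees between Alexandria and Syene, 5000 stadia apart), which are illustrative rather than taken from this paper.

```python
# Eratosthenes' proportion: the difference in noon shadow angles at two sites
# a known distance apart subtends the same fraction of the full circle as the
# distance does of the circumference. Numbers are the classical textbook ones.

def circumference(angle_difference_deg, distance):
    """Full circumference from an arc of length `distance` subtending
    `angle_difference_deg` degrees at the center."""
    return 360.0 / angle_difference_deg * distance

earth_stadia = circumference(7.2, 5000.0)   # about 250000 stadia
```

The paper's generalization replaces the two simultaneous measurements with noon shadow angles on successive days around an equinox, but the proportional reasoning is the same.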
Liquid-vapor rectilinear diameter revisited
NASA Astrophysics Data System (ADS)
Garrabos, Y.; Lecoutre, C.; Marre, S.; Beysens, D.; Hahn, I.
2018-02-01
In the modern theory of critical phenomena, the liquid-vapor density diameter in simple fluids is generally expected to deviate from a rectilinear law approaching the critical point. However, by performing precise scannerlike optical measurements of the position of the SF6 liquid-vapor meniscus, in an approach much closer to criticality in temperature and density than earlier measurements, no deviation from a rectilinear diameter can be detected. The observed meniscus position from far (10 K ) to extremely close (1 mK ) to the critical temperature is analyzed using recent theoretical models to predict the complete scaling consequences of a fluid asymmetry. The temperature dependence of the meniscus position appears consistent with the law of rectilinear diameter. The apparent absence of the critical hook in SF6 therefore seemingly rules out the need for the pressure scaling field contribution in the complete scaling theoretical framework in this SF6 analysis. More generally, this work suggests a way to clarify the experimental ambiguities in the simple fluids for the near-critical singularities in the density diameter.
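The rectilinear-diameter law against which the measurements are tested can be sketched as a linear fit in reduced temperature; the numerical values below are illustrative placeholders, not the paper's SF6 data.

```python
import numpy as np

# Law of rectilinear diameter: the mean of the coexisting liquid and vapor
# densities varies linearly with reduced temperature t = (Tc - T) / Tc:
#     (rho_l + rho_v) / 2 = rho_c * (1 + D * t)
# rho_c, D, Tc below are illustrative numbers, not fitted SF6 values.

rho_c, D, Tc = 742.0, 0.84, 318.7
T = np.linspace(Tc - 10.0, Tc - 0.001, 200)
t = (Tc - T) / Tc
diameter = rho_c * (1.0 + D * t)            # exact rectilinear diameter

# A linear fit in t recovers slope rho_c*D and intercept rho_c; a "critical
# hook" would appear as a systematic departure from this line near t -> 0.
slope, intercept = np.polyfit(t, diameter, 1)
```

The experimental question in the paper is precisely whether real data deviate from this straight line close to Tc; in this synthetic linear example the fit is exact by construction.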
A Simple Proof of an Interesting Fibonacci Generalization. Classroom Notes
ERIC Educational Resources Information Center
Falcon, Sergio
2004-01-01
It is reasonably well known that the ratios of consecutive terms of a Fibonacci series converge to the golden ratio. This note presents a simple, complete proof of an interesting generalization of this result to a whole family of 'precious metal ratios'.
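The family of generalizations in question is commonly written as F(n+1) = k·F(n) + F(n-1), whose consecutive-term ratios converge to the k-th "metallic ratio" (golden for k=1, silver for k=2, and so on). A numerical sketch, assuming this standard form of the recurrence:

```python
import math

# k-generalized Fibonacci recurrence F(n+1) = k*F(n) + F(n-1); the ratio of
# consecutive terms converges to the k-th metallic ratio, the positive root
# of x**2 = k*x + 1.

def metallic_ratio(k):
    return (k + math.sqrt(k * k + 4.0)) / 2.0

def ratio_limit(k, n=60):
    """Ratio of consecutive terms after n iterations of the recurrence."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, k * b + a
    return b / a
```

For k=1 this is the familiar convergence to the golden ratio (1 + sqrt(5))/2; larger k gives the other precious-metal ratios mentioned in the note.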
Universality of Generalized Parton Distributions in Light-Front Holographic QCD
NASA Astrophysics Data System (ADS)
de Téramond, Guy F.; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J.; Deur, Alexandre; Hlfhs Collaboration
2018-05-01
The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x → 1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.
Universality of Generalized Parton Distributions in Light-Front Holographic QCD.
de Téramond, Guy F; Liu, Tianbo; Sufian, Raza Sabbir; Dosch, Hans Günter; Brodsky, Stanley J; Deur, Alexandre
2018-05-04
The structure of generalized parton distributions is determined from light-front holographic QCD up to a universal reparametrization function w(x) which incorporates Regge behavior at small x and inclusive counting rules at x→1. A simple ansatz for w(x) that fulfills these physics constraints with a single parameter results in precise descriptions of both the nucleon and the pion quark distribution functions in comparison with global fits. The analytic structure of the amplitudes leads to a connection with the Veneziano model and hence to a nontrivial connection with Regge theory and the hadron spectrum.
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1980-01-01
A general-purpose squeeze-film damper interactive force element was developed, coded into a software package (module), and debugged. This software package was applied to nonlinear dynamic analyses of some simple rotor systems. Results for pressure distributions show that the long (end-sealed) bearing is a stronger bearing than the short bearing, as expected. Results of the nonlinear dynamic analysis, using a four-degree-of-freedom simulation model, showed that the orbit of the rotating shaft grows nonlinearly to fill the bearing clearance as the unbalanced weight increases.
A Quantum-Like View to a Generalized Two Players Game
NASA Astrophysics Data System (ADS)
Bagarello, F.
2015-10-01
This paper considers the possibility of using some quantum tools in decision-making strategies. In particular, we consider a dynamical open quantum system helping two players to reach their decisions in a specific context. We see that, within our approach, the final choices of the players do not in general depend on their initial mental states, but are driven essentially by the environment which interacts with them. The model proposed here also considers interactions of a different nature between the two players, and it is simple enough to allow for an analytical solution of the equations of motion.
NASA Astrophysics Data System (ADS)
Yuanyuan, Zhang
The stochastic branching model of multi-particle production in high-energy collisions has a theoretical basis in perturbative QCD, and also successfully describes the experimental data over a wide energy range. However, over the years, little attention has been paid to branching models for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model is built to describe the evolution of pure supersymmetric particle jets. This model is a modified two-phase stochastic branching process, more precisely a two-phase Simple Birth Process plus Poisson Process. The general case in which the jets contain both ordinary and supersymmetric particle jets has also been investigated. We obtain the multiplicity distribution of the general case, which contains a Hypergeometric function in its expression. We apply this new multiplicity distribution to current experimental data on pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fits indicate that supersymmetric particles have not participated in branching at current collision energies.
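The "Simple Birth Process" ingredient of such models can be illustrated with a toy Gillespie simulation of a pure birth (Yule) process, whose mean multiplicity grows as exp(λt); the parameter values below are illustrative only and unrelated to the thesis fits.

```python
import math
import random

# Pure birth (Yule) process started from one particle: each particle
# independently branches at rate lam, so the waiting time to the next
# branching event with n particles is exponential with rate n*lam. The mean
# multiplicity at time t is exp(lam * t). Parameters are illustrative.

def simulate_yule(lam, t_max, rng):
    n, t = 1, 0.0
    while True:
        t += rng.expovariate(n * lam)   # waiting time to next branching
        if t > t_max:
            return n
        n += 1

rng = random.Random(12345)
samples = [simulate_yule(1.0, 1.0, rng) for _ in range(20000)]
mean_multiplicity = sum(samples) / len(samples)   # close to e for lam*t = 1
```

The full model described above modifies this building block (two phases plus a Poisson component), which is what produces the richer multiplicity distribution involving a Hypergeometric function.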
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
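The GLS point estimator underlying such regional regressions can be sketched as follows on synthetic data; the Bayesian treatment of the model error variance described in the paper is not reproduced here, and all numbers are illustrative.

```python
import numpy as np

# Sketch of the GLS estimator at the core of regional regression:
#   beta_hat = (X^T L^-1 X)^-1 X^T L^-1 y
# where the covariance L combines a model-error variance with (here
# diagonal) at-site sampling-error variances. Synthetic data only.

rng = np.random.default_rng(1)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one basin descriptor
beta_true = np.array([0.2, 0.5])
sampling_var = rng.uniform(0.01, 0.05, size=n)   # at-site sampling error variances
model_var = 0.02                                 # model error variance (assumed known here)
L = np.diag(model_var + sampling_var)

y = X @ beta_true + rng.normal(scale=np.sqrt(model_var + sampling_var))

Li = np.linalg.inv(L)
beta_hat = np.linalg.solve(X.T @ Li @ X, X.T @ Li @ y)
```

In the Bayesian GLS approach the model error variance is not fixed as above but given a posterior distribution, which matters precisely when it is small relative to the sampling error, as the abstract notes.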
Static and Vibration Analyses of General Wing Structures Using Equivalent Plate Models
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
1999-01-01
An efficient method, using an equivalent plate model, is developed for the static and vibration analysis of general built-up wing structures composed of skins, spars, and ribs. The model includes transverse shear effects by treating the built-up wing as a plate following the Reissner-Mindlin theory, the so-called First-order Shear Deformation Theory (FSDT). The Ritz method is used with the Legendre polynomials employed as the trial functions. This is in contrast to previous equivalent plate model methods, which have used simple polynomials, known to be prone to numerical ill-conditioning, as the trial functions. The present developments are evaluated by comparing the results with those obtained using MSC/NASTRAN for a set of examples. These examples are: (i) free-vibration analysis of a clamped trapezoidal plate with (a) uniform thickness, and (b) non-uniform thickness varying as an airfoil, (ii) free-vibration and static analyses (including skin stress distribution) of a general built-up wing, and (iii) free-vibration and static analyses of a swept-back box wing. The results obtained by the present equivalent plate model are in good agreement with those obtained by the finite element method.
Fermion masses and mixing in general warped extra dimensional models
NASA Astrophysics Data System (ADS)
Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel
2015-06-01
We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector becomes available.
Detection of greenhouse-gas-induced climatic change. Progress report, July 1, 1994--July 31, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, P.D.; Wigley, T.M.L.
1995-07-21
The objective of this research is to assemble and analyze instrumental climate data and to develop and apply climate models as a basis for detecting greenhouse-gas-induced climatic change and for validating General Circulation Models. In addition to changes due to variations in anthropogenic forcing, including greenhouse gas and aerosol concentration changes, the global climate system exhibits a high degree of internally generated and externally forced natural variability. To detect the anthropogenic effect, its signal must be isolated from the "noise" of this natural climatic variability. A high-quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas and aerosol concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to coupled atmosphere-ocean General Circulation Models. These analyses are oriented towards obtaining early evidence of anthropogenic climatic change that would lead to confirmation, rejection or modification of model projections, and towards the statistical validation of General Circulation Model control runs and perturbation experiments.
A general model for the scaling of offspring size and adult size.
Falster, Daniel S; Moles, Angela T; Westoby, Mark
2008-09-01
Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
Development of orientation tuning in simple cells of primary visual cortex
Moore, Bartlett D.
2012-01-01
Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631
Schreiber, Roy E; Avram, Liat; Neumann, Ronny
2018-01-09
High-order elementary reactions in homogeneous solutions involving more than two molecules are statistically improbable and very slow to proceed. They are not generally considered in classical transition-state or collision theories. Yet, rather selective, high-yield product formation is common in self-assembly processes that require many reaction steps. On the basis of recent observations of crystallization as well as reactions in dense phases, it is shown that self-assembly can occur by preorganization of reactants in a noncovalent supramolecular assembly, whereby directing forces can lead to an apparent one-step transformation of multiple reactants. A simple and general kinetic model for multiple reactant transformation in a dense phase that can account for many-bodied transformations was developed. Furthermore, the self-assembly of the polyfluoroxometalate anion [H2F6NaW18O56]7- from simple tungstate Na2WO2F4 was demonstrated by using 2D 19F-19F NOESY, 2D 19F-19F COSY NMR spectroscopy, a new 2D 19F{183W} NMR technique, as well as ESI-MS and diffusion NMR spectroscopy, and the crucial involvement of a supramolecular assembly was found. The deterministic kinetic reaction model explains the reaction in a dense phase and supports the suggested self-assembly mechanism. Reactions in dense phases may be of general importance in understanding other self-assembly reactions. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we make a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. 
For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
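The abstract's central claim, that a simplified model can misestimate absolute behaviour yet preserve the pattern of parameter sensitivities, can be sketched with a deliberately toy pair of models (both functions below are invented stand-ins, not the paper's hepatocyte calcium models):

```python
# Illustrative check that a simplified model can preserve the *pattern* of
# parameter sensitivities even when its absolute predictions differ.
# Both "models" are toy stand-ins with hypothetical rate constants.

def period_full(k_pump, k_leak, k_channel):
    # hypothetical oscillation period of the richer model
    return (1.0 + k_pump ** 0.5 + 0.2 * k_leak) / k_channel

def period_simple(k_pump, k_leak, k_channel):
    # cruder approximation: drops the leak term entirely
    return (1.0 + k_pump ** 0.5) / k_channel

def log_sensitivities(model, params, h=1e-6):
    """Normalized sensitivities d ln T / d ln p by central differences."""
    base = model(**params)
    sens = {}
    for name, value in params.items():
        up = dict(params, **{name: value * (1 + h)})
        dn = dict(params, **{name: value * (1 - h)})
        sens[name] = (model(**up) - model(**dn)) / (2 * h * base)
    return sens

params = {"k_pump": 2.0, "k_leak": 0.5, "k_channel": 1.5}
s_full = log_sensitivities(period_full, params)
s_simple = log_sensitivities(period_simple, params)
# The absolute periods disagree, but the signs (and rough ranking) of the
# sensitivities to the shared parameters agree.
```

This is exactly the kind of comparison the paper formalizes: asking not "do the models give the same numbers?" but "do they attribute control of the behaviour to the same parameters, in the same direction?"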
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from saliency and random models, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
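The greedy "fixate where uncertainty is reduced most" rule can be sketched in a few lines. The 1-D stimulus, the Gaussian acuity profile, and the multiplicative uncertainty update below are illustrative assumptions, not the authors' implementation:

```python
import math

# Minimal sketch of greedy information-maximizing fixation selection.
# Stimulus: a row of locations with independent uncertainties; foveal
# resolution falls off with eccentricity via a Gaussian acuity profile.

def acuity(ecc, sigma=2.0):
    # fraction of uncertainty resolvable at eccentricity ecc (assumed shape)
    return math.exp(-ecc**2 / (2 * sigma**2))

def expected_gain(uncertainty, fix):
    # information gained ~ total uncertainty removed across all locations
    return sum(u * acuity(abs(i - fix)) for i, u in enumerate(uncertainty))

def next_fixation(uncertainty):
    return max(range(len(uncertainty)), key=lambda f: expected_gain(uncertainty, f))

def update(uncertainty, fix):
    return [u * (1 - acuity(abs(i - fix))) for i, u in enumerate(uncertainty)]

uncertainty = [1.0] * 11   # initially uniform uncertainty over 11 locations
seq = []
for _ in range(3):
    f = next_fixation(uncertainty)
    seq.append(f)
    uncertainty = update(uncertainty, f)
# With uniform initial uncertainty the first fixation lands at the center,
# after which the rule moves to the remaining high-uncertainty regions.
```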
Polymers at interfaces and in colloidal dispersions.
Fleer, Gerard J
2010-09-15
This review is an extended version of the Overbeek lecture 2009, given at the occasion of the 23rd Conference of ECIS (European Colloid and Interface Society) in Antalya, where I received the fifth Overbeek Gold Medal awarded by ECIS. I first summarize the basics of numerical SF-SCF: the Scheutjens-Fleer version of Self-Consistent-Field theory for inhomogeneous systems, including polymer adsorption and depletion. The conformational statistics are taken from the (non-SCF) DiMarzio-Rubin lattice model for homopolymer adsorption, which enumerates the conformational details exactly by a discrete propagator for the endpoint distribution but does not account for polymer-solvent interaction and for the volume-filling constraint. SF-SCF corrects for this by adjusting the field such that it becomes self-consistent. The model can be generalized to more complex systems: polydispersity, brushes, random and block copolymers, polyelectrolytes, branching, surfactants, micelles, membranes, vesicles, wetting, etc. On a mean-field level the results are exact; the disadvantage is that only numerical data are obtained. Extensions to excluded-volume polymers are in progress. Analytical approximations for simple systems are based upon solving the Edwards diffusion equation. This equation is the continuum variant of the lattice propagator, but ignores the finite segment size (analogous to the Poisson-Boltzmann equation without a Stern layer). By using the discrete propagator for segments next to the surface as the boundary condition in the continuum model, the finite segment size can be introduced into the continuum description, like the ion size in the Stern-Poisson-Boltzmann model. In most cases a ground-state approximation is needed to find analytical solutions. In this way realistic analytical approximations for simple cases can be found, including depletion effects that occur in mixtures of colloids plus non-adsorbing polymers. 
In the final part of this review I discuss a generalization of the free-volume theory (FVT) for the phase behavior of colloids and non-adsorbing polymer. In FVT the polymer is considered to be ideal: the osmotic pressure Π follows the van 't Hoff law and the depletion thickness δ equals the radius of gyration. This restricts the validity of FVT to the so-called colloid limit (polymer much smaller than the colloids). We have been able to find simple analytical approximations for Π and δ which account for non-ideality and include established results for the semidilute limit. So we could generalize FVT to GFVT, and can now also describe the so-called protein limit (polymer larger than the 'protein-like' colloids), where the binodal polymer concentrations scale in a simple way with the polymer/colloid size ratio. For an intermediate case (polymer size approximately colloid size) we could give a quantitative description of careful experimental data.
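The discrete propagator at the heart of the DiMarzio-Rubin/SF-SCF description can be sketched directly. The simple-cubic transition weights (1/6 up, 4/6 lateral, 1/6 down) are standard, but the boundary handling and the surface weight value below are illustrative assumptions:

```python
# Sketch of the discrete lattice propagator for the end-point distribution
# G(z, s) of a chain of s segments, with an impenetrable wall below z = 0
# and a Boltzmann weight w(z) representing the (here hand-set) field.

def propagate(G, w):
    """One step: G(z, s+1) = w(z) * (G(z-1)/6 + 4*G(z)/6 + G(z+1)/6)."""
    Z = len(G)
    new = [0.0] * Z
    for z in range(Z):
        below = G[z - 1] if z > 0 else 0.0        # wall: no density below z = 0
        above = G[z + 1] if z < Z - 1 else G[z]   # far boundary: bulk-like
        new[z] = w[z] * (below / 6 + 4 * G[z] / 6 + above / 6)
    return new

Z, N = 50, 100
w_free = [1.0] * Z          # athermal wall, no adsorption energy
w_ads = [1.0] * Z
w_ads[0] = 1.5              # attractive surface layer (illustrative weight)

G_free = [1.0] * Z
G_ads = [1.0] * Z
for _ in range(N):
    G_free = propagate(G_free, w_free)
    G_ads = propagate(G_ads, w_ads)
# The purely repulsive wall depletes end points near the surface, while the
# attractive surface weight enriches them in the first lattice layer.
```

SF-SCF then closes the loop by computing w(z) from the segment densities themselves until the field is self-consistent; here w is fixed by hand only to show the recursion.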
Software Models Impact Stresses
NASA Technical Reports Server (NTRS)
Hanshaw, Timothy C.; Roy, Dipankar; Toyooka, Mark
1991-01-01
Generalized Impact Stress Software designed to assist engineers in predicting stresses caused by variety of impacts. Program straightforward, simple to implement on personal computers, "user friendly", and handles variety of boundary conditions applied to struck body being analyzed. Applications include mathematical modeling of motions and transient stresses of spacecraft, analysis of slamming of piston, of fast valve shutoffs, and play of rotating bearing assembly. Provides fast and inexpensive analytical tool for analysis of stresses and reduces dependency on expensive impact tests. Written in FORTRAN 77. Requires use of commercial software package PLOT88.
Energy density and energy flow of surface waves in a strongly magnetized graphene
NASA Astrophysics Data System (ADS)
Moradi, Afshin
2018-01-01
General expressions for the energy density and energy flow of plasmonic waves in a two-dimensional massless electron gas (as a simple model of graphene) are obtained by means of the linearized magneto-hydrodynamic model and classical electromagnetic theory when a strong external magnetic field perpendicular to the system is present. Also, analytical expressions for the energy velocity, wave polarization, wave impedance, transverse and longitudinal field strength functions, and attenuation length of surface magneto-plasmon-polariton waves are derived, and numerical results are prepared.
On the improbability of intelligent extraterrestrials
NASA Astrophysics Data System (ADS)
Bond, A.
1982-05-01
Discussions relating to the prevalence of extraterrestrial life generally remain ambiguous due to the lack of a suitable model for the development of biology. In this paper a simple model is proposed based on neutral evolution theory which leads to quantitative values for the genome growth rate within a biosphere. It is hypothesised that the genome size is a measure of organism complexity and hence an indicator of the likelihood of intelligence. The calculations suggest that organisms with the complexity of human beings may be rare and only occur with a probability below once per galaxy.
Is Seismically Determined Q an Intrinsic Material Property?
NASA Astrophysics Data System (ADS)
Langston, C. A.
2003-12-01
The seismic quality factor, Q, has a well-defined physical meaning as an intrinsic material property associated with a visco-elastic or a non-linear stress-strain constitutive relation for a material. Measurement of Q from seismic waves, however, involves interpreting seismic wave amplitude and phase as deviations from some ideal elastic wave propagation model. Thus, assumptions in the elastic wave propagation model become the basis for attributing anelastic properties to the earth continuum. Scientifically, the resulting Q model derived from seismic data is no more than a hypothesis that needs to be verified by other independent experiments concerning the continuum constitutive law and through careful examination of the truth of the assumptions in the wave propagation model. A case in point concerns the anelasticity of Mississippi embayment sediments in the central U.S. that has important implications for evaluation of earthquake strong ground motions. Previous body wave analyses using converted Sp phases have suggested that Qs is ~30 in the sediments based on simple ray theory assumptions. However, detailed modeling of 1D heterogeneity in the sediments shows that Qs cannot be resolved by the Sp data. An independent experiment concerning the amplitude decay of surface waves propagating in the sediments shows that Qs must be generally greater than 80 but is also subject to scattering attenuation. Apparent Q effects seen in direct P and S waves can also be produced by wave tunneling mechanisms in relatively simple 1D heterogeneity. Heterogeneity is a general geophysical attribute of the earth as shown by many high-resolution data sets and should be used as the first litmus test on assumptions made in seismic Q studies before a Q model can be interpreted as an intrinsic material property.
Bustos-Vázquez, Eduardo; Fernández-Niño, Julián Alfredo; Astudillo-Garcia, Claudia Iveth
2017-04-01
Self-rated health is an individual and subjective conceptualization involving the intersection of biological, social and psychological factors. It provides an invaluable and unique evaluation of a person's general health status. To propose and evaluate a simple conceptual model to understand self-rated health and its relationship to multimorbidity, disability and depressive symptoms in Mexican older adults. We conducted a cross-sectional study based on a national representative sample of 8,874 adults of 60 years of age and older. Self-perception of a positive health status was determined according to a Likert-type scale based on the question: "What do you think is your current health status?" Intermediate variables included multimorbidity, disability and depressive symptoms, as well as dichotomous exogenous variables (sex, having a partner, participation in decision-making and poverty). The proposed conceptual model was validated using a general structural equation model with a logit link function for positive self-rated health. A direct association was found between multimorbidity and positive self-rated health (OR=0.48; 95% CI: 0.42-0.55), disability and positive self-rated health (OR=0.35; 95% CI: 0.30-0.40), depressive symptoms and positive self-rated health (OR=0.38; 95% CI: 0.34-0.43). The model also validated indirect associations between disability and depressive symptoms (OR=2.25; 95% CI: 2.01-2.52), multimorbidity and depressive symptoms (OR=1.79; 95% CI: 1.61-2.00) and multimorbidity and disability (OR=1.98; 95% CI: 1.78-2.20). A parsimonious theoretical model was empirically evaluated, which enabled identifying direct and indirect associations with positive self-rated health.
Models for small-scale structure on cosmic strings. II. Scaling and its stability
NASA Astrophysics Data System (ADS)
Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.
2016-11-01
We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.
Idealized model of polar cap currents, fields, and auroras
NASA Technical Reports Server (NTRS)
Cornwall, J. M.
1985-01-01
During periods of northward Bz, the electric field applied to the magnetosphere is generally opposite to that occurring during southward Bz and complicated patterns of convection result, showing some features reversed in comparison with the southward Bz case. A study is conducted of a simple generalization of early work on idealized convection models, which allows for coexistence of sunward convection over the central polar cap and antisunward convection elsewhere in the cap. The present model, valid for By approximately 0, has a four-cell convection pattern and is based on the combination of ionospheric current conservation with a relation between parallel auroral currents and parallel potential drops. Global magnetospheric issues involving, e.g., reconnection are not considered. The central result of this paper is an expression giving the parallel potential drop for polar cap auroras (with By approximately 0) in terms of the polar cap convection field profile.
NASA Astrophysics Data System (ADS)
Ebrahimian, Mehran; Yekehzare, Mohammad; Ejtehadi, Mohammad Reza
2015-12-01
To generalize the simple bead-linker model of swimmers to higher dimensions and to demonstrate the chemotaxis ability of such swimmers, here we introduce a low-Reynolds-number predator, using a two-dimensional triangular bead-spring model. Two-state linkers as mechanochemical enzymes expand as a result of interaction with particular activator substances in the environment, causing the whole body to translate and rotate. The concentration of the chemical stimulator controls the expansion versus contraction rate of each arm and so affects the ability of the body to make diffusive movements; also, variation of the activator substance's concentration in the environment breaks the symmetry of the linkers' preferred state, resulting in the drift of the random walker along the gradient of the density of activators. External food or danger sources may attract or repel the body by producing or consuming the chemical activators of the organism's enzymes, inducing chemotaxis behavior. Generalization of the model to three dimensions is straightforward.
Two-electron bond-orbital model, 1
NASA Technical Reports Server (NTRS)
Huang, C.; Moriarty, J. A.; Sher, A.; Breckenridge, R. A.
1975-01-01
Harrison's one-electron bond-orbital model of tetrahedrally coordinated solids was generalized to a two-electron model, using an extension of the method of Falicov and Harris for treating the hydrogen molecule. The six eigenvalues and eigenstates of the two-electron anion-cation Hamiltonian entering this theory can be found exactly in general. The two-electron formalism is shown to provide a useful basis for calculating both non-magnetic and magnetic properties of semiconductors in perturbation theory. As an example of the former, expressions for the electric susceptibility and the dielectric constant were calculated. As an example of the latter, new expressions for the nuclear exchange and pseudo-dipolar coefficients were calculated. A simple theoretical relationship between the dielectric constant and the exchange coefficient was also found in the limit of no correlation. These expressions were quantitatively evaluated in the limit of no correlation for twenty semiconductors.
An approximate JKR solution for a general contact, including rough contacts
NASA Astrophysics Data System (ADS)
Ciavarella, M.
2018-05-01
In the present note, we suggest a simple closed-form approximate solution to the adhesive contact problem under the so-called JKR regime. The derivation is based on generalizing the original JKR energetic derivation, assuming calculation of the strain energy in adhesiveless contact and unloading at constant contact area. The underlying assumption is that the contact area distributions are the same as under adhesiveless conditions (for an appropriately increased normal load), so that in general the stress intensity factors will not be exactly equal at all contact edges. The solution is simply that the indentation is δ = δ1 − √(2wA′/P″), where w is the surface energy, δ1 is the adhesiveless indentation, A′ is the first derivative of the contact area, and P″ is the second derivative of the load, both with respect to δ1. The solution only requires macroscopic quantities, and not very elaborate local distributions; it is exact in many configurations such as axisymmetric contacts, but also sinusoidal wave contacts, and correctly predicts some features of an ideal asperity model used as a test case and not as a real description of a rough contact problem. The solution therefore permits an estimate of the full solution for elastic rough solids with Gaussian multiple scales of roughness, which so far was lacking, using known adhesiveless simple results. The result turns out to depend only on the rms amplitude and slopes of the surface and, since in the fractal limit slopes grow without bound, it tends to the adhesiveless result, although in this limit the JKR model is inappropriate. The solution also goes to the adhesiveless result for large rms amplitude of roughness hrms, irrespective of the small-scale details, in agreement with common sense, well-known experiments and previous models by the author.
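A minimal numerical check, assuming the standard Hertz sphere results from classical contact mechanics, shows that the closed form δ = δ1 − √(2wA′/P″) reproduces the classical JKR sphere solution:

```python
import math

# For adhesionless Hertz contact of a sphere (radius R, effective modulus E*):
#   δ1 = a²/R,  A = π a² = π R δ1,  P = (4/3) E* R^(1/2) δ1^(3/2)
# hence A' = dA/dδ1 = π R and P'' = d²P/dδ1² = E* R^(1/2) δ1^(-1/2).
# The approximate formula should then give the classical JKR indentation
#   δ_JKR = a²/R − sqrt(2 π w a / E*).

R, Estar, w = 1e-2, 1e9, 0.05   # illustrative values (m, Pa, J/m²)
a = 1e-4                        # illustrative contact radius (m)

delta1 = a**2 / R
A1 = math.pi * R                               # dA/dδ1
P2 = Estar * math.sqrt(R) / math.sqrt(delta1)  # d²P/dδ1²

delta_approx = delta1 - math.sqrt(2 * w * A1 / P2)
delta_jkr = a**2 / R - math.sqrt(2 * math.pi * w * a / Estar)
# delta_approx equals delta_jkr: 2wA'/P'' collapses to 2πwa/E* for the sphere.
```

The point of the abstract is precisely that only such macroscopic quantities (A′ and P″ of the adhesiveless problem) are needed, so the same recipe can be fed with adhesiveless rough-contact results.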
Woods, H Arthur; Dillon, Michael E; Pincebourde, Sylvain
2015-12-01
We analyze the effects of changing patterns of thermal availability, in space and time, on the performance of small ectotherms. We approach this problem by breaking it into a series of smaller steps, focusing on: (1) how macroclimates interact with living and nonliving objects in the environment to produce a mosaic of thermal microclimates and (2) how mobile ectotherms filter those microclimates into realized body temperatures by moving around in them. Although the first step (generation of mosaics) is conceptually straightforward, there still exists no general framework for predicting spatial and temporal patterns of microclimatic variation. We organize potential variation along three axes: the nature of the objects producing the microclimates (abiotic versus biotic); how microclimates translate macroclimatic variation (amplify versus buffer); and the temporal and spatial scales over which microclimatic conditions vary (long versus short). From this organization, we propose several general rules about patterns of microclimatic diversity. To examine the second step (behavioral sampling of locally available microclimates), we construct a set of models that simulate ectotherms moving on a thermal landscape according to simple sets of diffusion-based rules. The models explore the effects of both changes in body size (which affect the time scale over which organisms integrate operative body temperatures) and increases in the mean and variance of temperature on the thermal landscape. Collectively, the models indicate that both simple behavioral rules and interactions between body size and spatial patterns of thermal variation can profoundly affect the distribution of realized body temperatures experienced by ectotherms. These analyses emphasize the rich set of problems still to solve before arriving at a general, predictive theory of the biological consequences of climate change.
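The diffusion-style sampling models can be sketched very simply: an animal random-walks over a patchy thermal landscape while its body temperature integrates the local operative temperature with a time constant that grows with body size. All rules and parameter values below are illustrative assumptions, not the authors' models:

```python
import random
import statistics

random.seed(1)
# patchy 1-D landscape of operative temperatures between 20 and 35 °C
landscape = [20 + 15 * random.random() for _ in range(200)]

def body_temps(tau, steps=20000):
    """Random walk over the landscape; body temperature relaxes toward the
    local operative temperature with time constant tau (~ body size)."""
    pos, Tb, out = 100, landscape[100], []
    for _ in range(steps):
        pos = max(0, min(len(landscape) - 1, pos + random.choice((-1, 1))))
        Tb += (landscape[pos] - Tb) / tau   # thermal inertia
        out.append(Tb)
    return out

small = body_temps(tau=2)     # small body: tracks the microclimate closely
large = body_temps(tau=200)   # large body: averages over many patches
# The large animal experiences a much narrower body-temperature distribution,
# the body-size effect the abstract describes.
```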
Modeling Studies of the Effects of Winds and Heat Flux on the Tropical Oceans
NASA Technical Reports Server (NTRS)
Seager, R.
1999-01-01
Over a decade ago, funding from this NASA grant supported the development of the Cane-Zebiak ENSO prediction model which remains in use to this day. It also supported our work developing schemes for modeling the air-sea heat flux in ocean models used for studying climate variability. We introduced a succession of simple boundary layer models that allow the fluxes to be computed internally in the model and avoid the need to specify the atmospheric thermodynamic state. These models have now reached a level of generality that allows modeling of the global, rather than just tropical, ocean, including sea ice cover. The most recent versions of these boundary layer models have been widely distributed around the world and are in use by many ocean modeling groups.
Multiscale Modeling of Mesoscale and Interfacial Phenomena
NASA Astrophysics Data System (ADS)
Petsev, Nikolai Dimitrov
With rapidly emerging technologies that feature interfaces modified at the nanoscale, traditional macroscopic models are pushed to their limits to explain phenomena where molecular processes can play a key role. Often, such problems appear to defy explanation when treated with coarse-grained continuum models alone, yet remain prohibitively expensive from a molecular simulation perspective. A prominent example is surface nanobubbles: nanoscopic gaseous domains typically found on hydrophobic surfaces that have puzzled researchers for over two decades due to their unusually long lifetimes. We show how an entirely macroscopic, non-equilibrium model explains many of their anomalous properties, including their stability and abnormally small gas-side contact angles. From this purely transport perspective, we investigate how factors such as temperature and saturation affect nanobubbles, providing numerous experimentally testable predictions. However, recent work also emphasizes the relevance of molecular-scale phenomena that cannot be described in terms of bulk phases or pristine interfaces. This is true for nanobubbles as well, whose nanoscale heights may require molecular detail to capture the relevant physics, in particular near the bubble three-phase contact line. Therefore, there is a clear need for general ways to link molecular granularity and behavior with large-scale continuum models in the treatment of many interfacial problems. In light of this, we have developed a general set of simulation strategies that couple mesoscale particle-based continuum models to molecular regions simulated through conventional molecular dynamics (MD). In addition, we derived a transport model for binary mixtures that opens the possibility for a wide range of applications in biological and drug delivery problems, and is readily reconciled with our hybrid MD-continuum techniques. 
Approaches that couple multiple length scales for fluid mixtures are largely absent in the literature, and we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, Michael A.
1995-01-01
Molecular dynamics computer simulations of the structure and functions of a simple membrane are performed in order to examine whether membranes provide an environment capable of promoting protobiological evolution. Our model membrane is composed of glycerol 1-monooleate. It is found that the bilayer surface fluctuates in time and space, occasionally creating thinning defects in the membrane. These defects are essential for passive transport of simple ions across membranes because they reduce the Born barrier to this process by approximately 40%. Negative ions are transferred across the bilayer more readily than positive ions due to favorable interactions with the electric field at the membrane-water interface. Passive transport of neutral molecules is, in general, more complex than predicted by the solubility-diffusion model. In particular, molecules which exhibit sufficient hydrophilicity and lipophilicity concentrate near membrane surfaces and experience 'interfacial resistance' to transport. The membrane-water interface forms an environment suitable for heterogeneous catalysis. Several possible mechanisms leading to an increase of reaction rates at the interface are discussed. We conclude that vesicles have many properties that make them very good candidates for earliest protocells. Some potentially fruitful directions of experimental and theoretical research on this subject are proposed.
Collisionless magnetic reconnection in curved spacetime and the effect of black hole rotation
NASA Astrophysics Data System (ADS)
Comisso, Luca; Asenjo, Felipe A.
2018-02-01
Magnetic reconnection in curved spacetime is studied by adopting a general-relativistic magnetohydrodynamic model that retains collisionless effects for both electron-ion and pair plasmas. A simple generalization of the standard Sweet-Parker model allows us to obtain the first-order effects of the gravitational field of a rotating black hole. It is shown that the black hole rotation acts to increase the length of azimuthal reconnection layers, thus leading to a decrease of the reconnection rate. However, when coupled to collisionless thermal-inertial effects, the net reconnection rate is enhanced with respect to what would happen in a purely collisional plasma due to a broadening of the reconnection layer. These findings identify an underlying interaction between gravity and collisionless magnetic reconnection in the vicinity of compact objects.
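For context, the non-relativistic Sweet-Parker scaling that the paper generalizes can be written down directly; the numbers below are illustrative only:

```python
# Classical Sweet-Parker reconnection: for a layer of length L, Alfvén speed
# v_A, and magnetic diffusivity eta, the Lundquist number is S = L v_A / eta
# and the normalized reconnection rate is v_in / v_A = S^(-1/2). Lengthening
# the layer (as the paper finds black hole rotation does for azimuthal
# layers) raises S and hence lowers the collisional rate.

def sweet_parker_rate(L, v_A, eta):
    S = L * v_A / eta
    return S ** -0.5

base = sweet_parker_rate(L=1.0, v_A=1.0, eta=1e-6)     # S = 1e6, rate = 1e-3
longer = sweet_parker_rate(L=2.0, v_A=1.0, eta=1e-6)   # rotation-lengthened layer
# longer < base: the purely collisional trend that the paper's collisionless
# thermal-inertial broadening of the layer partly offsets.
```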
Fractional power-law spatial dispersion in electrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru; Departamento de Análisis Matemático, Universidad de La Laguna, 38271 La Laguna, Tenerife; Trujillo, Juan J., E-mail: jtrujill@ullmat.es
2013-07-15
Electric fields in non-local media with power-law spatial dispersion are discussed. Equations involving a fractional Laplacian in the Riesz form that describe the electric fields in such non-local media are studied. The generalizations of Coulomb’s law and Debye’s screening for power-law non-local media are characterized. We consider simple models with anomalous behavior of plasma-like media with power-law spatial dispersions. The suggested fractional differential models for these plasma-like media are discussed to describe non-local properties of power-law type. -- Highlights: • Plasma-like non-local media with power-law spatial dispersion. • Fractional differential equations for electric fields in the media. • The generalizations of Coulomb’s law and Debye’s screening for the media.
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
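The variance-based argument can be made concrete with a toy version of the model: colony performance at a focal task is normally distributed with variance shrinking as the queen's mating number k grows (more patrilines average out), and colony fitness is the probability of exceeding a success threshold. The normal model and parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import math

def fitness(mu, t, sigma, k):
    """P(performance > t) for performance ~ N(mu, sigma^2 / k),
    where k is the number of matings."""
    z = (t - mu) / (sigma / math.sqrt(k))
    return 0.5 * math.erfc(z / math.sqrt(2))

sigma, t = 1.0, 0.0
# mean colony succeeds at the task (mu above threshold)
good = [fitness(+0.3, t, sigma, k) for k in (1, 2, 5, 10)]
# mean colony fails at the task (mu below threshold)
bad = [fitness(-0.3, t, sigma, k) for k in (1, 2, 5, 10)]
# good is increasing in k (multiple mating favored when the average colony
# succeeds); bad is decreasing in k (single mating favored when it fails),
# matching the model's central prediction.
```

Note that nothing here specifies *why* genetic diversity reduces the variance, mirroring the paper's point that the prediction is independent of the proximate mechanism.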
Holographic dark energy in braneworld models with moving branes and the w = -1 crossing
NASA Astrophysics Data System (ADS)
Saridakis, E. N.
2008-04-01
We apply the bulk holographic dark energy in general 5D two-brane models. We extract the Friedmann equation on the physical brane and we show that in the general moving-brane case the effective 4D holographic dark energy behaves as a quintom for a large parameter-space area of a simple solution subclass. We find that wΛ was larger than -1 in the past while its present value is wΛ0≈-1.05, and the phantom bound wΛ = -1 was crossed at zp≈0.41, a result in agreement with observations. Such a behavior arises naturally, without the inclusion of special fields or potential terms, but a fine-tuning between the 4D Planck mass and the brane tension has to be imposed.
Santos, Andrés; Manzano, Gema
2010-04-14
As is well known, approximate integral equations for liquids, such as the hypernetted chain (HNC) and Percus-Yevick (PY) theories, are in general thermodynamically inconsistent in the sense that the macroscopic properties obtained from the spatial correlation functions depend on the route followed. In particular, the values of the fourth virial coefficient B(4) predicted by the HNC and PY approximations via the virial route differ from those obtained via the compressibility route. Despite this, it is shown in this paper that the value of B(4) obtained from the virial route in the HNC theory is exactly three halves the value obtained from the compressibility route in the PY theory, irrespective of the interaction potential (whether isotropic or not), the number of components, and the dimensionality of the system. This simple relationship is confirmed in one-component systems by analytical results for the one-dimensional penetrable-square-well model and the three-dimensional penetrable-sphere model, as well as by numerical results for the one-dimensional Lennard-Jones model, the one-dimensional Gaussian core model, and the three-dimensional square-well model.
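The 3/2 relation can be illustrated with the fourth virial coefficients commonly tabulated for three-dimensional hard spheres in the integral-equation literature (the two reference values below are quoted from memory of that literature and should be checked against a primary source; the exact ratio, by contrast, is the paper's general result):

```python
from fractions import Fraction

# Commonly quoted hard-sphere fourth virial coefficients, in units of B2^3
# (assumed reference values):
B4_PY_compressibility = Fraction(19, 64)   # ≈ 0.2969
B4_HNC_virial = Fraction(57, 128)          # ≈ 0.4453

ratio = B4_HNC_virial / B4_PY_compressibility
# ratio == 3/2, independent of potential, mixture composition, and
# dimensionality per the paper; hard spheres merely give a concrete check.
```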
Controlled recovery of phylogenetic communities from an evolutionary model using a network approach
NASA Astrophysics Data System (ADS)
Sousa, Arthur M. Y. R.; Vieira, André P.; Prado, Carmen P. C.; Andrade, Roberto F. S.
2016-04-01
This work reports the use of a complex network approach to produce a phylogenetic classification tree of a simple evolutionary model. This approach has already been used to treat proteomic data of actual extant organisms, but an investigation of its reliability to retrieve a traceable evolutionary history has been missing. The evolutionary model used includes key ingredients for the emergence of groups of related organisms by differentiation through random mutations and population growth, but purposefully omits other realistic ingredients that are not strictly necessary to originate an evolutionary history. This choice causes the model to depend only on a small set of parameters, controlling the mutation probability and the population of different species. Our results indicate that, for a set of parameter values, the phylogenetic classification produced by the framework reproduces the actual evolutionary history with a very high average degree of accuracy. This includes parameter values where the species originated by the evolutionary dynamics have modular structures. In the more general context of community identification in complex networks, our model offers a simple setting for evaluating the effects, on the efficiency of community formation and identification, of the underlying dynamics generating the network itself.
Numerical model for the thermal behavior of thermocline storage tanks
NASA Astrophysics Data System (ADS)
Ehtiwesh, Ismael A. S.; Sousa, Antonio C. M.
2018-03-01
Energy storage is a critical factor in the advancement of solar thermal power systems for the sustained delivery of electricity. In addition, the incorporation of thermal energy storage into the operation of concentrated solar power (CSP) systems offers the potential of delivering electricity without fossil-fuel backup even during peak demand, independent of weather conditions and daylight. Despite this potential, some areas of the design and performance of thermocline systems still require further attention for future incorporation in commercial CSP plants, particularly their operation and control. Therefore, the present study aims to develop a simple but efficient numerical model to allow the comprehensive analysis of thermocline storage systems, aiming at a better understanding of their dynamic temperature response. The validation results, despite the simplifying assumptions of the numerical model, agree well with the experiments for the time evolution of the thermocline region. Three different cases are considered to test the versatility of the numerical model; for the particular case of a storage tank with a top round impingement inlet, a simple analytical model was developed to take into account the increased turbulence level in the mixing region. The numerical predictions for the three cases are in generally good agreement with the experimental results.
Multiexponential models of (1+1)-dimensional dilaton gravity and Toda-Liouville integrable models
NASA Astrophysics Data System (ADS)
de Alfaro, V.; Filippov, A. T.
2010-01-01
We study general properties of a class of two-dimensional dilaton gravity (DG) theories with potentials containing several exponential terms. We isolate and thoroughly study a subclass of such theories in which the equations of motion reduce to Toda and Liouville equations. We show that the equation parameters must satisfy a certain constraint, which we find and solve for the most general multiexponential model. It follows from the constraint that integrable Toda equations in DG theories generally cannot appear without accompanying Liouville equations. The most difficult problem in the two-dimensional Toda-Liouville (TL) DG is to solve the energy and momentum constraints. We discuss this problem using the simplest examples and identify the main obstacles to solving it analytically. We then consider a subclass of integrable two-dimensional theories where scalar matter fields satisfy the Toda equations and the two-dimensional metric is trivial, and examine the simplest case in some detail. In this example, we show how to obtain the general solution. We also show how to derive, in a simple way, wavelike solutions of general TL systems. In the DG theory, these solutions describe nonlinear waves coupled to gravity, as well as static states and cosmologies. For static states and cosmologies, we propose and study a more general one-dimensional TL model typically emerging in one-dimensional reductions of higher-dimensional gravity and supergravity theories. Special attention is paid to making the analytic structure of the solutions of the Toda equations as simple and transparent as possible.
The distribution of density in supersonic turbulence
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Hopkins, Philip F.
2017-11-01
We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ∼M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M > 1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Owing to the simple physical motivation of the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future, more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
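For reference, the lognormal baseline from which the "deviation from lognormality" above is measured can be sketched as follows. This is the conventional form from the turbulence literature, not the paper's two-parameter shock model; b is a driving-dependent constant of order unity:

```latex
s \equiv \ln(\rho/\rho_0), \qquad
P(s) = \frac{1}{\sqrt{2\pi\sigma_s^2}}
\exp\!\left[-\frac{(s+\sigma_s^2/2)^2}{2\sigma_s^2}\right], \qquad
\sigma_s^2 = \ln\!\left(1 + b^2 M^2\right)
```

The mean of s is shifted to -σ_s²/2 so that the mass-weighted mean density equals ρ_0.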
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
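A minimal sketch of the deterministic approximation nt ≈ E[nt] discussed above, assuming the standard Kingman coalescent with time measured in units of N generations (the function name and parameter values are ours, not the paper's):

```python
import numpy as np

def n_deterministic(t, n0):
    """Closed-form solution of dn/dt = -n(n-1)/2, a standard deterministic
    approximation to the number of lineages ancestral to a sample of size n0
    under the Kingman coalescent (time in units of N generations)."""
    # The ratio (n-1)/n decays exponentially at rate 1/2.
    a = (n0 - 1.0) / n0 * np.exp(-t / 2.0)
    return 1.0 / (1.0 - a)

t = np.linspace(0.0, 10.0, 101)
n = n_deterministic(t, n0=50)   # decreases monotonically from 50 toward 1
```

As the abstract notes, such simple functions are numerically stable even when the number of sampled lineages is large, unlike some exact coalescent formulas.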
Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony
2018-01-01
This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate-matching technique known as Proto3, a simple climate-matching tool (CLIMEX Match Climates), the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and were then evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity was compared using a generalized linear mixed model. The modeling tool itself had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones when predicting the potential distribution of a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and the testing of both new and experienced users under blind conditions that approximate operational conditions.
NDE Research At Nondestructive Measurement Science At NASA Langley
1989-06-01
Research areas of our staff include ultrasonics, nonlinear acoustics, thermal acoustics and diffusion, magnetics, fiber optics, and x-ray tomography. We have a […] based on the simple assumption that acoustic waves interact with the sample and reveal "important" properties. In practice, such assumptions have […] between the acoustic wave and the medium. The most useful models can generally be inverted to determine the physical properties or geometry of the […]
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
Simple graph models of information spread in finite populations
Voorhees, Burton; Ryder, Bergerud
2015-01-01
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian; another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components, and these are compared to the partial bipartite graphs. PMID:26064661
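Two of the structure parameters mentioned above, Laplacian eigenvalues and expected hitting times, can be computed for a small weighted cycle with a short sketch (this uses only numpy and an unbiased cycle; the paper's biased cycles would make the Laplacian nonsymmetric, with the directional bias encoded in its antisymmetric part):

```python
import numpy as np

def cycle_adjacency(n, p=0.5):
    """Weighted n-cycle: weight p clockwise, 1-p counterclockwise."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = p
        A[i, (i - 1) % n] = 1.0 - p
    return A

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def hitting_times(A, target=0):
    """Expected hitting times to `target` for the random walk with
    row-normalised transition matrix, via the fundamental matrix."""
    P = A / A.sum(axis=1, keepdims=True)
    idx = [i for i in range(len(A)) if i != target]
    Q = P[np.ix_(idx, idx)]
    h = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    out = np.zeros(len(A))
    out[idx] = h
    return out

A = cycle_adjacency(6, p=0.5)
eig = np.sort(np.linalg.eigvals(laplacian(A)).real)  # smallest is 0
h = hitting_times(A, target=0)  # for a symmetric n-cycle, h[i] = i*(n-i)
```

For the unbiased 6-cycle the hitting times reproduce the classical i(n-i) formula, e.g. h[3] = 9.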
Relative age effects in fitness testing in a general school sample: how relative are they?
Veldhuizen, Scott; Cairney, John; Hay, John; Faught, Brent
2015-01-01
When children or adolescents are grouped by age or year of birth, older individuals tend to outperform younger ones. These phenomena are known as relative age effects (RAEs). RAEs may result directly from differences in maturation, but may also be associated with psychological, pedagogic or other factors. In this article, we attempt to quantify RAEs in a simple fitness task and to identify the mechanisms operating. Data come from a 5-year study of 2278 individuals that included repeated administrations of the 20 m shuttle run. We use mixed-effect modelling to characterise change over time and then examine residuals from these models for evidence of an effect for age relative to peers or for season of birth. Age alone appears to account for RAEs in our sample, with no effects for age relative to peers or month of birth. Age grouping produces large disparities for girls under 12, moderate ones for boys of all ages and negligible ones for girls between 12 and 15. RAEs for this task and population appear to arise from simple age differences. Similar methods may be useful in determining whether other explanations of RAEs are necessary in other contexts. Evaluation processes that take age into account have the potential to mitigate RAEs in general settings.
NASA Astrophysics Data System (ADS)
Kustova, E. V.; Savelev, A. S.; Kunova, O. V.
2018-05-01
Theoretical models for the vibrational state-resolved Zeldovich reactions are assessed by comparison with the results of quasi-classical trajectory (QCT) calculations. An error in the model of Aliat is corrected, and the model is generalized to take NO vibrational states into account. The proposed model is fairly simple and can easily be implemented in software for non-equilibrium flow modeling. It provides good agreement with the QCT rate coefficients over the whole range of temperatures and reagent/product vibrational states. The developed models are tested in simulations of vibrational and chemical relaxation of an air mixture behind a shock wave. The importance of accounting for excited NO vibrational states and of accurately predicting Zeldovich reaction rates is shown.
Density-dependence as a size-independent regulatory mechanism.
de Vladar, Harold P
2006-01-21
The growth function of populations is central in biomathematics. The main dogma is the existence of density-dependence mechanisms, which can be modelled with distinct functional forms that depend on the size of the population. One important class of regulatory functions is the theta-logistic, which generalizes the logistic equation. Using this model as a motivation, this paper introduces a simple dynamical reformulation that generalizes many growth functions. The reformulation consists of two equations, one for population size and one for the growth rate. Furthermore, the model shows that although the population is density-dependent, the dynamics of the growth rate depends neither on population size nor on the carrying capacity. In fact, the growth equation is uncoupled from the population size equation, and the model has only two parameters, a Malthusian parameter rho and a competition coefficient theta. Distinct sign combinations of these parameters reproduce not only the family of theta-logistics, but also the von Bertalanffy, Gompertz and potential growth equations, among other possibilities. It is also shown that, except for two critical points, there is a general size-scaling relation that includes those appearing in the most important allometric theories, including the recently proposed Metabolic Theory of Ecology. With this model, several issues of general interest are discussed, such as the growth of animal populations, extinctions, cell growth and allometry, and the effect of the environment on a population.
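The theta-logistic family that motivates the reformulation above can be sketched numerically (this integrates the standard theta-logistic equation dN/dt = rN(1-(N/K)^θ), not the paper's two-equation reformulation; parameter values are illustrative):

```python
import numpy as np

def theta_logistic(n0, r=1.0, K=100.0, theta=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of dN/dt = r*N*(1 - (N/K)**theta).
    theta = 1 recovers the logistic equation; theta -> 0 (with r/theta
    held fixed) approaches Gompertz growth."""
    n = n0
    traj = [n]
    for _ in range(steps):
        n += dt * r * n * (1.0 - (n / K) ** theta)
        traj.append(n)
    return np.array(traj)

traj = theta_logistic(5.0, theta=0.5)  # sigmoidal approach to K = 100
```

Different values of theta change the shape of the sigmoid but not the carrying capacity, which is the kind of structural observation the paper's growth-rate equation makes explicit.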
Economic modeling of HIV treatments.
Simpson, Kit N
2010-05-01
We review the general literature on microeconomic modeling and key points that must be considered in the general assessment of economic modeling reports, discuss the evolution of HIV economic models, identify models that illustrate this development over time as well as examples of current studies, and recommend improvements in HIV economic modeling. Recent economic modeling studies of HIV include examinations of scaling up antiretroviral (ARV) therapy in South Africa, screening prior to use of abacavir, preexposure prophylaxis, early start of ARV therapy in developing countries, and cost-effectiveness comparisons of specific ARV drugs using data from clinical trials. These studies all used extensively published second-generation Markov models in their analyses. There have been attempts to simplify approaches to cost-effectiveness estimation by using simple decision trees or cost-effectiveness calculations with short time horizons. However, these approaches leave out important cumulative economic effects that do not appear early in a treatment. Many economic modeling studies were identified in the 'gray' literature, but limited descriptions precluded an assessment of their adherence to modeling guidelines, and thus of the validity of their findings. There is a need to develop third-generation models that accommodate new knowledge about adherence, adverse effects, and viral resistance.
NASA Astrophysics Data System (ADS)
Snadden, John; Ridout, David; Wood, Simon
2018-05-01
The modular properties of the simple vertex operator superalgebra associated with the affine Kac-Moody superalgebra \widehat{osp}(1|2) at level -5/4 are investigated. After classifying the relaxed highest-weight modules over this vertex operator superalgebra, the characters and supercharacters of the simple weight modules are computed and their modular transforms are determined. This leads to a complete list of the Grothendieck fusion rules by way of a continuous superalgebraic analog of the Verlinde formula. All Grothendieck fusion coefficients are observed to be non-negative integers. These results indicate that the extension to general admissible levels will follow using the same methodology once the classification of relaxed highest-weight modules is completed.
Games among relatives revisited.
Allen, Benjamin; Nowak, Martin A
2015-08-07
We present a simple model for the evolution of social behavior in family-structured, finite-sized populations. Interactions are represented as evolutionary games describing frequency-dependent selection. Individuals interact more frequently with siblings than with members of the general population, as quantified by an assortment parameter r, which can be interpreted as "relatedness". Other models, mostly of spatially structured populations, have shown that assortment can promote the evolution of cooperation by facilitating interaction between cooperators, but this effect depends on the details of the evolutionary process. For our model, we find that sibling assortment promotes cooperation in stringent social dilemmas such as the Prisoner's Dilemma, but not necessarily in other situations. These results are obtained through straightforward calculations of changes in gene frequency. We also analyze our model using inclusive fitness. We find that inclusive fitness does not exist as a well-defined quantity for general games. For special games, where it does exist, it provides less information than the straightforward analysis.
General Mechanism of Two-State Protein Folding Kinetics
Rollins, Geoffrey C.; Dill, Ken A.
2016-01-01
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structure, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel; that folding is two-state (single-exponential) when secondary structures are intrinsically unstable; and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near-independence of the folding equilibrium constant from protein size. The model also gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s. PMID:25056406
NASA Astrophysics Data System (ADS)
Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.
2015-09-01
Earth-System and regional models, forecasting climate change and its impacts, simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator while neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These factors have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis for novel couplers of the atmospheric and oceanographic model components. We tested its performance with measured and simulated data from the European coastal ocean, and found that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows calculus vectorization and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-System model applications.
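An example of the kind of wind-speed-only generalization the study argues is insufficient is the widely used Wanninkhof-type quadratic parameterization of the bulk gas transfer velocity. A minimal sketch (the coefficient 0.251 cm/h per (m/s)² and the Schmidt-number scaling follow the standard formulation; this is illustrative, not the paper's algorithm):

```python
def gas_transfer_velocity(u10, schmidt=660.0):
    """Wind-only bulk gas transfer velocity k in cm/h:
    k = 0.251 * U10**2 * (Sc/660)**-0.5,
    where U10 is the 10 m wind speed (m/s) and Sc the Schmidt number.
    This represents the classical single-mediator generalization that
    neglects sea state, stability, rain and surfactants."""
    return 0.251 * u10 ** 2 * (schmidt / 660.0) ** -0.5

k = gas_transfer_velocity(10.0)  # about 25.1 cm/h at Sc = 660
```

A coupler of the kind proposed in the abstract would replace this single-variable function with one taking additional mediating factors as inputs.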
A simple nonlinear model for the return to isotropy in turbulence
NASA Technical Reports Server (NTRS)
Sarkar, Sutanu; Speziale, Charles G.
1990-01-01
A quadratic nonlinear generalization of the linear Rotta model for the slow pressure-strain correlation of turbulence is developed. The model is shown to satisfy realizability and to give rise to no stable nontrivial equilibrium solutions for the anisotropy tensor in the case of vanishing mean velocity gradients. The absence of stable nontrivial equilibrium solutions is a necessary condition to ensure that the model predicts a return to isotropy for all relaxational turbulent flows. Both the phase space dynamics and the temporal behavior of the model are examined and compared against experimental data for the return to isotropy problem. It is demonstrated that the quadratic model successfully captures the experimental trends which clearly exhibit nonlinear behavior. Direct comparisons are also made with the predictions of the Rotta model and the Lumley model.
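Schematically, the quadratic generalization of Rotta's linear return-to-isotropy model can be written as follows (generic coefficients C_1, C_2; the paper's calibrated values and exact normalization may differ):

```latex
\Pi^{(s)}_{ij} \;=\; -\,C_1\,\varepsilon\, b_{ij}
\;+\; C_2\,\varepsilon\left(b_{ik}b_{kj} - \tfrac{1}{3}\,b_{kl}b_{kl}\,\delta_{ij}\right)
```

where b_{ij} is the anisotropy tensor, ε the turbulent dissipation rate, and C_2 = 0 recovers the linear Rotta model. The deviatoric quadratic term is what allows the model to capture the nonlinear trajectories seen in the phase-space data.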
Meta-analysis of pesticide sorption in subsoils
NASA Astrophysics Data System (ADS)
Jarvis, Nicholas
2017-04-01
It has been known for several decades that sorption koc values tend to be larger in soils that are low in organic carbon (i.e. subsoils). Nevertheless, in a regulatory context, the models used to assess leaching of pesticides to groundwater still rely on a constant koc value, which is usually measured on topsoil samples. This is mainly because the general applicability of any improved model approach that is also simple enough to use for regulatory purposes has not been demonstrated. The objective of this study was therefore first to summarize and generalize available literature data in order to assess the magnitude of any systematic increase of koc values in subsoil and to test an alternative model of subsoil sorption that could be useful in pesticide risk assessment and management. To this end, a database containing the results of batch sorption experiments for pesticides was compiled from published studies in the literature, which placed at least as much emphasis on measurements in subsoil horizons as in topsoil. The database includes 967 data entries from 46 studies and for 34 different active substances (15 non-ionic compounds, 13 weak acids, 6 weak bases). In order to minimize pH effects on sorption, data for weak acids and bases were only included if the soil pH was more than two units larger than the compound pKa. A simple empirical model, whereby the sorption constant is given as a power law function of the soil organic carbon content, gave good fits to most data sets. Overall, the apparent koc value, koc(app), for non-ionic compounds and weak bases roughly doubled as the soil organic carbon content decreased by a factor of ten. The typical increase in koc(app) was even larger for weak acids: on average koc(app) increased by a factor of six as soil organic carbon content decreased by a factor of ten. 
These results suggest that the koc concept currently used in leaching models should be replaced by an alternative approach that gives a more realistic representation of pesticide sorption in subsoils. The model tested in this study appears to be widely applicable and simple enough to parameterize for risk assessment purposes. However, more data on subsoil sorption should first be included in the analysis to enable reliable estimation of worst-case percentile values of the power law exponent in the model.
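The power-law fit described above amounts to a linear regression on log-transformed data. A minimal sketch on synthetic data (the prefactor, noise level and variable names are ours; the chosen exponent encodes the reported behavior that apparent koc roughly doubles per tenfold decrease in organic carbon):

```python
import numpy as np

rng = np.random.default_rng(1)
f_oc = np.logspace(-3, -1, 30)     # soil organic carbon fraction
b_true = -np.log10(2.0)            # koc doubles per tenfold OC decrease
koc_app = 500.0 * f_oc ** b_true * np.exp(rng.normal(0.0, 0.1, f_oc.size))

# power law koc_app = a * f_oc**b  <=>  ln koc_app = ln a + b * ln f_oc
b_fit, log_a = np.polyfit(np.log(f_oc), np.log(koc_app), 1)
```

For weak acids, the abstract implies a steeper exponent (a factor of six per decade, b ≈ -log10(6)), which the same fit would recover.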
Scalar-tensor extension of the ΛCDM model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algoner, W.C.; Velten, H.E.S.; Zimdahl, W., E-mail: w.algoner@cosmo-ufes.org, E-mail: velten@pq.cnpq.br, E-mail: winfried.zimdahl@pq.cnpq.br
2016-11-01
We construct a cosmological scalar-tensor-theory model in which the Brans-Dicke type scalar Φ enters the effective (Jordan-frame) Hubble rate as a simple modification of the Hubble rate of the ΛCDM model. This allows us to quantify differences between the background dynamics of scalar-tensor theories and general relativity (GR) in a transparent and observationally testable manner in terms of one single parameter. Problems with mapping the scalar-field degrees of freedom onto an effective fluid description in a GR context are discussed. Data from supernovae, the differential age of old galaxies and baryon acoustic oscillations are shown to strongly limit potential deviations from the standard model.
Classical and quantum aspects of Yang-Baxter Wess-Zumino models
NASA Astrophysics Data System (ADS)
Demulder, Saskia; Driezen, Sibylle; Sevrin, Alexander; Thompson, Daniel C.
2018-03-01
We investigate the integrable Yang-Baxter deformation of the 2d Principal Chiral Model with a Wess-Zumino term. For arbitrary groups, the one-loop β-functions are calculated and display a surprising connection between classical and quantum physics: the classical integrability condition is necessary to prevent new couplings being generated by renormalisation. We show these theories admit an elegant realisation of Poisson-Lie T-duality acting as a simple inversion of coupling constants. The self-dual point corresponds to the Wess-Zumino-Witten model and is the IR fixed point under RG. We address the possibility of having supersymmetric extensions of these models showing that extended supersymmetry is not possible in general.
Nonequilibrium thermodynamics of the shear-transformation-zone model
NASA Astrophysics Data System (ADS)
Luo, Alan M.; Öttinger, Hans Christian
2014-02-01
The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.
Nanopore Current Oscillations: Nonlinear Dynamics on the Nanoscale.
Hyland, Brittany; Siwy, Zuzanna S; Martens, Craig C
2015-05-21
In this Letter, we describe theoretical modeling of an experimentally realized nanoscale system that exhibits the universal behavior of a general nonlinear dynamical system. In particular, we consider the description of voltage-induced current fluctuations through a single nanopore from the perspective of nonlinear dynamics. We briefly review the experimental system and the behavior observed, and then present a simple phenomenological nonlinear model that reproduces the qualitative behavior of the experimental data. The model consists of a two-dimensional deterministic nonlinear bistable oscillator experiencing both dissipation and random noise. The multidimensionality of the model and the interplay between deterministic and stochastic forces are both required to obtain a qualitatively accurate description of the physical system.
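A generic stand-in for the class of model described, a two-dimensional bistable oscillator with dissipation and noise, can be sketched with Euler-Maruyama integration of a noisy double-well system (the potential, parameters and seed are our choices for illustration, not the paper's calibrated model):

```python
import numpy as np

def bistable_trajectory(steps=200_000, dt=0.01, gamma=0.5, sigma=0.5, seed=0):
    """Euler-Maruyama integration of a noisy double-well oscillator:
        dx = v dt
        dv = (x - x**3 - gamma*v) dt + sigma dW
    The deterministic part has stable fixed points at x = +/-1;
    noise drives stochastic switching between the two wells."""
    rng = np.random.default_rng(seed)
    sqdt = np.sqrt(dt)
    x, v = 1.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        v += dt * (x - x ** 3 - gamma * v) + sigma * sqdt * rng.standard_normal()
        x += dt * v
        xs[i] = x
    return xs

xs = bistable_trajectory()  # trajectory hops between the wells near +/-1
```

The interplay the abstract emphasizes is visible here: without noise the trajectory settles into one well; without the two-dimensional (inertial) structure the fluctuation statistics change qualitatively.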
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space and the orthogonal complement of whose null space are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system, associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, offers extreme flexibility to embrace combinations of existing methods, and provides some new features.
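The high-frequency-matching case of such an oblique projection can be sketched in a few lines: take V from a controllability-type Krylov space, W from an observability-type one, enforce the bi-orthogonality W^T V = I, and project. With k columns on each side, the reduced model matches the first 2k Markov parameters (this sketch uses generic random matrices and our own function name; it is an illustration of the projection idea, not the paper's full designer-selectable scheme):

```python
import numpy as np

def reduce_oblique(A, B, C, k=2):
    """Oblique-projection model reduction matching the first 2k Markov
    parameters C A^j B (high-frequency moments) of (A, B, C)."""
    V = np.column_stack([np.linalg.matrix_power(A, j) @ B for j in range(k)])
    O = np.column_stack([np.linalg.matrix_power(A.T, j) @ C.T for j in range(k)])
    W = O @ np.linalg.inv(O.T @ V).T   # enforce bi-orthogonality: W.T @ V = I
    return W.T @ A @ V, W.T @ B, C @ V

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8)) / 4.0
B = rng.normal(size=(8, 1))
C = rng.normal(size=(1, 8))
Ar, Br, Cr = reduce_oblique(A, B, C, k=2)  # order-2 reduced model
```

Choosing the Krylov spaces from low-frequency (moment) or spectral-density data instead yields the other matching combinations mentioned in the abstract.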
Longevity suppresses conflict in animal societies
Port, Markus; Cant, Michael A.
2013-01-01
Models of social conflict in animal societies generally assume that within-group conflict reduces the value of a communal resource. For many animals, however, the primary cost of conflict is increased mortality. We develop a simple inclusive fitness model of social conflict that takes this cost into account. We show that longevity substantially reduces the level of within-group conflict, which can lead to the evolution of peaceful animal societies if relatedness among group members is high. By contrast, peaceful outcomes are never possible in models where the primary cost of social conflict is resource depletion. Incorporating mortality costs into models of social conflict can explain why many animal societies are so remarkably peaceful despite great potential for conflict. PMID:24088564
Simple theoretical models for composite rotor blades
NASA Technical Reports Server (NTRS)
Valisetty, R. R.; Rehfield, L. W.
1984-01-01
The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that, for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.
A mean spherical model for soft potentials: The hard core revealed as a perturbation
NASA Technical Reports Server (NTRS)
Rosenfeld, Y.; Ashcroft, N. W.
1978-01-01
The mean spherical approximation for fluids is extended to treat the case of dense systems interacting via soft-potentials. The extension takes the form of a generalized statement concerning the behavior of the direct correlation function c(r) and radial distribution g(r). From a detailed analysis that views the hard core portion of a potential as a perturbation on the whole, a specific model is proposed which possesses analytic solutions for both Coulomb and Yukawa potentials, in addition to certain other remarkable properties. A variational principle for the model leads to a relatively simple method for obtaining numerical solutions.
Applications of Perron-Frobenius theory to population dynamics.
Li, Chi-Kwong; Schneider, Hans
2002-05-01
By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
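The scaling result can be checked directly on a small stage-structured example. In the sketch below (the matrices are illustrative numbers, not from the paper), the net reproductive rate is the spectral radius of the next-generation matrix F(I - T)^(-1), and dividing the fertility matrix F by it drives the growth rate, the spectral radius of T + F, to exactly 1.

```python
import numpy as np

# Stage-structured model x(t+1) = (T + F) x(t): T holds survival
# transitions, F holds fertilities (example numbers only).
T = np.array([[0.0, 0.0, 0.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.7, 0.8]])
F = np.array([[0.0, 1.2, 2.5],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Net reproductive rate: spectral radius of the next-generation matrix.
R0 = spectral_radius(F @ np.linalg.inv(np.eye(3) - T))
lam = spectral_radius(T + F)            # asymptotic growth rate

# Cushing-Yicang scaling: dividing F by R0 gives growth rate exactly 1,
# because the next-generation matrix is linear in F.
lam_scaled = spectral_radius(T + F / R0)
```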
Troy: A simple nonlinear mathematical perspective
NASA Astrophysics Data System (ADS)
Flores, J. C.; Bologna, Mauro
2013-10-01
In this paper, we propose a mathematical model for the Trojan war that, supposedly, took place around 1180 BC. Supported by archaeological findings and by Homer’s Iliad, we estimate the numbers of warriors, the struggle rate parameters, the number of individuals per hectare, and other related quantities. We show that the long siege of the city, described in the Iliad, is compatible with a power-law behaviour for the time evolution of the number of individuals. We are able to evaluate the parameters of our model during the phase of the siege and the fall. The proposed model is general, and it can be applied to other historical conflicts.
Goodness-of-fit tests for open capture-recapture models
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1985-01-01
General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
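A single component of such a test series reduces to a standard contingency-table chi-square comparison. The sketch below computes the Pearson statistic for one table in plain Python; the example table (recaptures cross-classified by capture history) is illustrative, and the Jolly-Seber-specific partitioning into a series of independent tables is not reproduced.

```python
def chi_square_contingency(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table of counts (one component test in the
    series; the partitioning itself is model-specific)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df

# Hypothetical 2x2 table of recaptures by prior capture history.
stat, df = chi_square_contingency([[30, 10], [20, 40]])
```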
Corrigendum: New Form of Kane's Equations of Motion for Constrained Systems
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Bajodah, Abdulrahman H.; Hodges, Dewey H.; Chen, Ye-Hwa
2007-01-01
A correction to the previously published article "New Form of Kane's Equations of Motion for Constrained Systems" is presented. Misuse of the transformation matrix between time rates of change of the generalized coordinates and generalized speeds (sometimes called motion variables) resulted in a false conclusion concerning the symmetry of the generalized inertia matrix. The generalized inertia matrix (sometimes referred to as the mass matrix) is in fact symmetric and usually positive definite when one forms nonminimal Kane's equations for holonomic or simple nonholonomic systems, systems subject to nonlinear nonholonomic constraints, and holonomic or simple nonholonomic systems subject to impulsive constraints according to Refs. 1, 2, and 3, respectively. The mass matrix is of course symmetric when one forms minimal equations for holonomic or simple nonholonomic systems using Kane's method as set forth in Ref. 4.
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Vainstein, Mendeli H.; Lamb, Luis C.; Prado, Sandra D.
2013-03-01
We propose a novel probabilistic model that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability), which is updated during the tournament according to the results of previous games. The updated potential modifies the team's future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of final standings of actual editions of the Brazilian tournament (Brasileirão) if the starting potential is the same for all teams. Other leagues such as the Italian (Calcio) and the Spanish (La Liga) tournaments have notoriously non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model with simple initial conditions. However, we show that by setting the initial abilities based on data from previous tournaments, our model is able to capture the stylized statistical features of double round robin system (DRRS) tournaments in general. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: here several teams have been crowned champion in previous editions, corroborating the idea that the champion typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. On the other hand, in the Italian and Spanish cases, only a few teams in recent history have won their league tournaments. These leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For the sake of completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and we show that the scores of the Brazilian tournament "Brasileirão" cannot be reproduced.
This shows that the evolutionary aspects are not superfluous and play an important role which must be considered in other alternative models. Finally, we analyze the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single to a double peaked histogram of the final classification scores. An interesting scaling is presented for different sized tournaments.
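The dynamics described above can be sketched in a few lines. The simulation below plays a double round robin in which each team's potential sets its win probability and is nudged after every match; the specific update rule, the fixed draw probability, and the 3/1/0 point scheme are assumptions chosen for illustration, not the paper's calibrated model.

```python
import random

def simulate_tournament(n_teams=20, phi0=None, delta=0.05, seed=1):
    """Evolutionary DRRS tournament sketch: win probability depends on
    the potential gap, and potentials are updated after each game
    (illustrative update rule, not the paper's exact dynamics)."""
    rng = random.Random(seed)
    phi = list(phi0) if phi0 else [1.0] * n_teams   # equal starting abilities
    points = [0] * n_teams
    for i in range(n_teams):
        for j in range(n_teams):
            if i == j:
                continue                            # home-and-away: every ordered pair plays
            p_home = phi[i] / (phi[i] + phi[j])
            r = rng.random()
            if r < 0.25:                            # fixed draw probability (assumption)
                points[i] += 1; points[j] += 1
            elif r < 0.25 + 0.75 * p_home:          # home win
                points[i] += 3
                phi[i] += delta; phi[j] = max(phi[j] - delta, 0.1)
            else:                                   # away win
                points[j] += 3
                phi[j] += delta; phi[i] = max(phi[i] - delta, 0.1)
    return sorted(points, reverse=True)

standings = simulate_tournament()
```

Repeating the run over many seeds and comparing the histogram of final scores against real league tables is the kind of check the abstract describes.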
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation which closely relates the Laplace transforms of the galaxy gas accretion history and star formation history; this relation can be used to simplify the problem of retrieving these quantities in galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can be used by other complementary galaxy stellar population synthesis models to predict the chemical evolution of galaxies as well.
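The Laplace-transform relation can be made concrete in the linear case. Assuming a linear Schmidt-Kennicutt law ψ(t) = S M_gas(t), a return fraction R from instantaneous recycling, a gas accretion history I(t), and the initial condition M_gas(0) = 0 (notation and initial condition are our assumptions, chosen to keep the sketch minimal), the gas equation transforms algebraically:

```latex
\frac{\mathrm{d}M_{\rm gas}}{\mathrm{d}t} = I(t) - (1 - R)\,\psi(t),
\qquad \psi(t) = S\,M_{\rm gas}(t)
\;\Longrightarrow\;
s\,\tilde{M}_{\rm gas}(s) = \tilde{I}(s) - (1 - R)\,S\,\tilde{M}_{\rm gas}(s)
\;\Longrightarrow\;
\tilde{\psi}(s) = \frac{S\,\tilde{I}(s)}{s + (1 - R)\,S}.
```

In this sketch, knowing either the accretion history or the star formation history determines the other by a single transform inversion, which is the simplification the abstract refers to.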
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, general circulation model (GCM) representations of precipitation changes are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand the robustness of a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] as a better explanation of precipitation changes in transient climate projections under GCM parametric uncertainty than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
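The Monte Carlo procedure can be sketched for the simplest design. The example below approximates a randomization test for a two-arm trial under complete randomization by re-randomizing the treatment labels and recomputing a mean-difference statistic; a design-based test for another procedure (e.g. permuted blocks or a biased coin) would instead regenerate sequences from that procedure, and the data here are made up for illustration.

```python
import random

def randomization_test(outcomes, assignments, n_mc=2000, seed=42):
    """Monte Carlo randomization test sketch: the p-value is the fraction
    of re-randomized label sequences whose |mean difference| meets or
    exceeds the observed one."""
    rng = random.Random(seed)

    def stat(labels):
        a = [y for y, g in zip(outcomes, labels) if g == 1]
        b = [y for y, g in zip(outcomes, labels) if g == 0]
        return abs(sum(a) / len(a) - sum(b) / len(b))

    observed = stat(assignments)
    hits = 0
    for _ in range(n_mc):
        shuffled = assignments[:]
        rng.shuffle(shuffled)              # re-randomize the labels
        if stat(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_mc + 1)         # add-one Monte Carlo p-value

# Hypothetical outcomes and treatment assignments (0 = control, 1 = treated).
y = [2.1, 1.9, 2.3, 2.2, 3.4, 3.1, 3.6, 3.2]
g = [0, 0, 0, 0, 1, 1, 1, 1]
p = randomization_test(y, g)
```

For a regression model, the same loop would recompute a statistic built from residuals (as in the generalized linear and martingale-residual cases the paper studies) rather than a raw mean difference.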
NASA Astrophysics Data System (ADS)
Charnay, B.; Bézard, B.; Baudino, J.-L.; Bonnefoy, M.; Boccaletti, A.; Galicher, R.
2018-02-01
We developed a simple, physical, and self-consistent cloud model for brown dwarfs and young giant exoplanets. We compared different parametrizations for the cloud particle size, by fixing either particle radii or the mixing efficiency (parameter f_sed), or by estimating particle radii from simple microphysics. The cloud scheme with simple microphysics appears to be the best parametrization by successfully reproducing the observed photometry and spectra of brown dwarfs and young giant exoplanets. In particular, it reproduces the L–T transition, due to the condensation of silicate and iron clouds below the visible/near-IR photosphere. It also reproduces the reddening observed for low-gravity objects, due to an increase of cloud optical depth for low gravity. In addition, we found that the cloud greenhouse effect shifts chemical equilibrium, increasing the abundances of species stable at high temperature. This effect should significantly contribute to the strong variation of methane abundance at the L–T transition and to the methane depletion observed on young exoplanets. Finally, we predict the existence of a continuum of brown dwarfs and exoplanets for absolute J magnitude = 15–18 and J-K color = 0–3, due to the evolution of the L–T transition with gravity. This self-consistent model therefore provides a general framework to understand the effects of clouds and appears well-suited for atmospheric retrievals.
Simple atmospheric perturbation models for sonic-boom-signature distortion studies
NASA Technical Reports Server (NTRS)
Ehernberger, L. J.; Wurtele, Morton G.; Sharman, Robert D.
1994-01-01
Sonic-boom propagation from flight level to ground is influenced by wind and speed-of-sound variations resulting from temperature changes in both the mean atmospheric structure and small-scale perturbations. Meteorological behavior generally produces complex combinations of atmospheric perturbations in the form of turbulence, wind shears, up- and down-drafts and various wave behaviors. Differences between the speed of sound at the ground and at flight level will influence the threshold flight Mach number for which the sonic boom first reaches the ground as well as the width of the resulting sonic-boom carpet. Mean atmospheric temperature and wind structure as a function of altitude vary with location and time of year. These average properties of the atmosphere are well-documented and have been used in many sonic-boom propagation assessments. In contrast, smaller scale atmospheric perturbations are also known to modulate the shape and amplitude of sonic-boom signatures reaching the ground, but specific perturbation models have not been established for evaluating their effects on sonic-boom propagation. The purpose of this paper is to present simple examples of atmospheric vertical temperature gradients, wind shears, and wave motions that can guide preliminary assessments of nonturbulent atmospheric perturbation effects on sonic-boom propagation to the ground. The use of simple discrete atmospheric perturbation structures can facilitate the interpretation of the resulting sonic-boom propagation anomalies as well as intercomparisons among varied flight conditions and propagation models.
NASA Technical Reports Server (NTRS)
Liang, Xu; Lettenmaier, Dennis P.; Wood, Eric F.; Burges, Stephen J.
1994-01-01
A generalization of the single soil layer variable infiltration capacity (VIC) land surface hydrological model previously implemented in the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation model (GCM) is described. The new model is comprised of a two-layer characterization of the soil column, and uses an aerodynamic representation of the latent and sensible heat fluxes at the land surface. The infiltration algorithm for the upper layer is essentially the same as for the single layer VIC model, while the lower layer drainage formulation is of the form previously implemented in the Max-Planck-Institut GCM. The model partitions the area of interest (e.g., grid cell) into multiple land surface cover types; for each land cover type the fraction of roots in the upper and lower zone is specified. Evapotranspiration consists of three components: canopy evaporation, evaporation from bare soils, and transpiration, which is represented using a canopy and architectural resistance formulation. Once the latent heat flux has been computed, the surface energy balance is iterated to solve for the land surface temperature at each time step. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters, and surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer-fall of 1987 to validate the surface energy fluxes.
Friction on the Bond and the Vibrational Relaxation in Simple Liquids.
NASA Astrophysics Data System (ADS)
Mishra, Bimalendu Kumar
In chapter 1, the energy relaxation of a stiff Morse oscillator dissolved in a simple LJ fluid is calculated using a reversible integrator (r-RESPA) in molecular dynamics generated from the Trotter factorization of the classical propagator. We compare the "real" relaxation from full MD simulations with that predicted by the Generalized Langevin Equation (GLE) with memory friction determined from the full molecular dynamics for a series of fluid densities. It is found that the GLE gives very good agreement with MD for the vibrational energy relaxation of this nonlinear oscillator far from equilibrium only for high-density fluids, but at reduced densities ρ < 0.5 the energy relaxation from the MD simulation becomes considerably slower than that from the GLE. An analysis of the statistical properties of the random force shows that as the density is lowered the non-Gaussian behavior of the random force becomes more prominent. This behavior is consistent with a simple model in which the oscillator undergoes generalized Langevin dynamics between strong binary collisions with solvent atoms. In chapter 2, molecular hydrodynamics is used to calculate the memory friction on the intramolecular vibrational coordinate of a homonuclear diatomic molecule dissolved in a simple liquid. The predicted memory friction is then compared to recent computer experiments. Agreement with the experimental memory functions is obtained when the linearized hydrodynamics is modified to include Gaussian viscoelasticity and compressibility. The hydrodynamic friction on the bond appears to agree qualitatively very well, although quantitative agreement is not found at high frequencies. Various limits of the hydrodynamic friction are discussed.
Doubly self-consistent field theory of grafted polymers under simple shear in steady state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suo, Tongchuan; Whitmore, Mark D., E-mail: mark-whitmore@umanitoba.ca
2014-03-21
We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA, and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical model, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with a simple system such as a water phantom only. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter list for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customized parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and then the optimal parameters were determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics, and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
Using emergent order to shape a space society
NASA Technical Reports Server (NTRS)
Graps, Amara L.
1993-01-01
A fast-growing movement in the scientific community is reshaping the way that we view the world around us. The short-hand name for this movement is 'chaos'. Chaos is a science of the global, nonlinear nature of systems. At the center of this set of ideas is the observation that simple, deterministic systems can breed complexity: systems as complex as the human body, an ecology, the mind, or a human society. While it is true that simple laws can breed complexity, the other side is that complex systems can breed order. It is the latter that I will focus on in this paper. In the past, nonlinear was nearly synonymous with unsolvable because no general analytic solutions exist. Mathematically, an essential difference exists between linear and nonlinear systems. For linear systems, you can break up the complicated system into many simple pieces and patch together the separate solutions for each piece to form a solution to the full problem. In contrast, solutions to a nonlinear system cannot be added to form a new solution; the system must be treated in its full complexity. While it is true that no general analytical approach exists for reducing a complex system such as a society, it can be modeled. The technique involves a mathematical construct called phase space. In this space, stable structures can appear, which I use as analogies for the stable structures that appear in a complex system such as an ecology, the mind, or a society. The common denominator in all of these systems is that they rely on a process called feedback loops. Feedback loops link the microscopic (individual) parts to the macroscopic (global) parts. The key, then, in shaping a space society is in effectively using feedback loops. This paper will illustrate how one can model a space society by using methods that chaoticists have developed over the last hundred years, and I will show that common threads exist in the modeling of biological, economic, philosophical, and sociological systems.
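The claim that simple deterministic rules breed complexity has a canonical one-line illustration, the logistic map; the sketch below is a generic textbook example of the phenomenon, not a model taken from the paper.

```python
def logistic_orbit(r, x0=0.2, n=50, discard=0):
    """Iterate the logistic map x -> r*x*(1-x), the textbook example
    of a simple deterministic rule producing complex behavior."""
    x = x0
    out = []
    for i in range(n + discard):
        x = r * x * (1 - x)
        if i >= discard:          # drop the transient, keep n samples
            out.append(x)
    return out

# r = 2.8: the orbit settles onto a stable fixed point (order).
settled = logistic_orbit(2.8, discard=500)
# r = 4.0: the very same rule produces a chaotic, wandering orbit.
chaotic = logistic_orbit(4.0, discard=500)
```

The same one-parameter rule yields either a stable structure or apparent disorder, which is the ordered/complex duality the paper leans on.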
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities, as well as on different kinds of wave interactions, often use simple velocity and/or density profiles (e.g. constant, piecewise) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using the vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be essentially reduced to the minimal system with interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave-breaking features like cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria
Hui, Sheng; Silverman, Josh M; Chen, Stephen S; Erickson, David W; Basan, Markus; Wang, Jilong; Hwa, Terence; Williamson, James R
2015-01-01
A central aim of cell biology is to understand the strategy of gene expression in response to the environment. Here, we study gene expression response to metabolic challenges in exponentially growing Escherichia coli using mass spectrometry. Despite enormous complexity in the details of the underlying regulatory network, we find that the proteome partitions into several coarse-grained sectors, with each sector's total mass abundance exhibiting positive or negative linear relations with the growth rate. The growth rate-dependent components of the proteome fractions comprise about half of the proteome by mass, and their mutual dependencies can be characterized by a simple flux model involving only two effective parameters. The success and apparent generality of this model arises from tight coordination between proteome partition and metabolism, suggesting a principle for resource allocation in proteome economy of the cell. This strategy of global gene regulation should serve as a basis for future studies on gene expression and constructing synthetic biological circuits. Coarse graining may be an effective approach to derive predictive phenomenological models for other ‘omics’ studies. PMID:25678603
Supernova shock breakout through a wind
NASA Astrophysics Data System (ADS)
Balberg, Shmuel; Loeb, Abraham
2011-06-01
The breakout of a supernova shock wave through the progenitor star's outer envelope is expected to appear as an X-ray flash. However, if the supernova explodes inside an optically thick wind, the breakout flash is delayed. We present a simple model for estimating the conditions at shock breakout in a wind based on the general observable quantities in the X-ray flash light curve: the total energy E_X and the diffusion time after the peak, t_diff. We base the derivation on the self-similar solution for the forward-reverse shock structure expected for ejecta plowing through a pre-existing wind at large distances from the progenitor's surface. We find simple quantitative relations for the shock radius and velocity at breakout. By relating the ejecta density profile to the pre-explosion structure of the progenitor, the model can also be extended to constrain the combination of explosion energy and ejecta mass. For the observed case of XRO08109/SN2008D, our model provides reasonable constraints on the breakout radius, explosion energy and ejecta mass, and predicts a high shock velocity which naturally accounts for the observed non-thermal spectrum.
Measuring effective temperatures in a generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Foini, Laura; Gambassi, Andrea; Konik, Robert; Cugliandolo, Leticia F.
2017-05-01
The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. Here we argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
Relation between scattering and production amplitude—Case of intermediate σ-particle in ππ-system—
NASA Astrophysics Data System (ADS)
Ishida, Muneyuki; Ishida, Shin; Ishida, Taku
1998-05-01
The relation between scattering and production amplitudes are investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of final state interaction (FSI-) theorem. The IA-method and VMW-method, which are applied to our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method to obtain the resonance properties from general production processes, while the conventional analyses based on the "universality" of ππ-scattering amplitude are powerless for this purpose.
Biased Metropolis Sampling for Rugged Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2003-11-01
Metropolis simulations of all-atom models of peptides (i.e., small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized ensemble methods enters multiplicatively.
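The core idea, a Metropolis-Hastings chain whose proposals are drawn from a distribution estimated at a higher temperature, can be sketched on a one-dimensional toy "dihedral angle". Everything here (the double-well potential, the temperatures, the discretization) is a hypothetical stand-in for the all-atom peptide setting, not the paper's actual transformation:

```python
import math
import random

random.seed(0)

def energy(x):
    # toy double-well "dihedral" potential (hypothetical stand-in for a peptide)
    return (x * x - 1.0) ** 2

def boltzmann_weights(xs, beta):
    w = [math.exp(-beta * energy(x)) for x in xs]
    s = sum(w)
    return [v / s for v in w]

# discretize the angle-like coordinate on [-2, 2]
xs = [-2.0 + 4.0 * i / 200 for i in range(201)]
beta_low, beta_high = 5.0, 1.0              # target (low T) and proposal (high T)
p_target = boltzmann_weights(xs, beta_low)
p_prop = boltzmann_weights(xs, beta_high)   # biased proposal from the higher temperature

def metropolis_hastings(n_steps):
    i = 100  # start at x = 0, the barrier top
    samples = []
    for _ in range(n_steps):
        j = random.choices(range(len(xs)), weights=p_prop)[0]
        # acceptance ratio corrects for the biased (independence) proposal
        a = (p_target[j] * p_prop[i]) / (p_target[i] * p_prop[j])
        if random.random() < a:
            i = j
        samples.append(xs[i])
    return samples

s = metropolis_hastings(20000)
frac_right_well = sum(1 for x in s if x > 0) / len(s)
print(round(frac_right_well, 2))  # roughly 0.5 by the symmetry of the potential
```

The higher-temperature proposal visits both wells freely, so the low-temperature chain tunnels between them far more often than a local-move Metropolis would, which is the mechanism behind the reported reduction in autocorrelation times.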
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
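A minimal sketch of the Pareto-based selection step that distinguishes multiobjective DE from the single-objective algorithm: a DE/rand/1 trial vector replaces its parent only if it Pareto-dominates it. The two-objective test function, the population size, and the mutation-only variant are illustrative simplifications, not the paper's aerodynamic setup or its neural-network meta-models:

```python
import random

random.seed(1)

def dominates(a, b):
    # Pareto dominance for minimization: a is no worse in every objective
    # and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def objectives(x):
    # a hypothetical two-objective test problem (Schaffer-like);
    # its Pareto-optimal set is the interval [0, 2]
    return (x * x, (x - 2.0) ** 2)

def pareto_de(pop_size=20, gens=100, F=0.5):
    pop = [random.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([k for k in range(pop_size) if k != i], 3)
            trial = pop[r1] + F * (pop[r2] - pop[r3])  # DE/rand/1 mutation
            # Pareto-based selection instead of scalar comparison
            if dominates(objectives(trial), objectives(pop[i])):
                pop[i] = trial
    return pop

front = pareto_de()
```

In the paper's setting each call to `objectives` would be an expensive CFD evaluation, which is exactly why surrogate meta-models are substituted for most of those calls.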
Recurrence relations in one-dimensional Ising models.
da Conceição, C M Silva; Maia, R N P
2017-09-01
The exact finite-size partition function for the nonhomogeneous one-dimensional (1D) Ising model is found through an operator-algebra approach. Specifically, in this paper we show that the partition function can be computed through a trace from a linear second-order recurrence relation with nonconstant coefficients in matrix form. A relation between the finite-size partition function and the generalized Lucas polynomials is found for the simple homogeneous model, thus establishing a recursive formula for the partition function. This is an important property and it might indicate the possible existence of recurrence relations in higher-dimensional Ising models. Moreover, assuming quenched disorder for the interactions within the model, the quenched averaged magnetic susceptibility displays a nontrivial behavior due to changes in the ferromagnetic concentration probability.
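The homogeneous zero-field case can be checked directly: the partition function is a trace over powers of the 2x2 transfer matrix, and the closed form below is the standard eigenvalue result (the paper's Lucas-polynomial recursion is equivalent for this case). A small sketch with illustrative parameter values:

```python
import math

def partition_function_trace(N, beta, J):
    # Z = Tr(T^N) with the 2x2 transfer matrix T of the zero-field 1D Ising chain
    a, b = math.exp(beta * J), math.exp(-beta * J)
    T = [[a, b], [b, a]]
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(N):
        M = [[sum(M[i][k] * T[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]

def partition_function_closed(N, beta, J):
    # eigenvalues 2*cosh(beta*J) and 2*sinh(beta*J) give Z = l1^N + l2^N
    return (2.0 * math.cosh(beta * J)) ** N + (2.0 * math.sinh(beta * J)) ** N

print(partition_function_trace(8, 0.5, 1.0))
print(partition_function_closed(8, 0.5, 1.0))  # agrees with the trace
```

For the nonhomogeneous model each bond contributes its own matrix, and the trace of the ordered product plays the role of the recurrence in matrix form described in the abstract.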
A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof
NASA Astrophysics Data System (ADS)
Sinha, Ashok
2016-03-01
An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by the LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology, including the neutrino masses and mixing; the origin of the proton mass and the mass-difference between the proton and the neutron; the big bang and cosmological inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternions and rotations in the six-dimensional elementary-particle interaction space - or, equivalently, in six-dimensional spacetime - is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters, leading to an elegant and symmetrical diagram, is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This Abstract represents some results of the Author's Independent Theoretical Research in Particle Physics, with possible connection to the Superstring Theory. However, only very elementary mathematics and physics are used in my presentation.
Classification framework for partially observed dynamical systems
NASA Astrophysics Data System (ADS)
Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira
2017-04-01
We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much simpler than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.
Geochemistry of the Birch Creek Drainage Basin, Idaho
Swanson, Shawn A.; Rosentreter, Jeffrey J.; Bartholomay, Roy C.; Knobel, LeRoy L.
2003-01-01
The U.S. Geological Survey and Idaho State University, in cooperation with the U.S. Department of Energy, are conducting studies to describe the chemical character of ground water that moves as underflow from drainage basins into the eastern Snake River Plain aquifer (ESRPA) system at and near the Idaho National Engineering and Environmental Laboratory (INEEL) and the effects of these recharge waters on the geochemistry of the ESRPA system. Each of these recharge waters has a hydrochemical character related to geochemical processes, especially water-rock interactions, that occur during migration to the ESRPA. Results of these studies will benefit ongoing and planned geochemical modeling of the ESRPA at the INEEL by providing model input on the hydrochemical character of water from each drainage basin. During 2000, water samples were collected from five wells and one surface-water site in the Birch Creek drainage basin and analyzed for selected inorganic constituents, nutrients, dissolved organic carbon, tritium, measurements of gross alpha and beta radioactivity, and stable isotopes. Four duplicate samples also were collected for quality assurance. Results, which include analyses of samples previously collected from four other sites in the basin, show that most water from the Birch Creek drainage basin has a calcium-magnesium bicarbonate character. The Birch Creek Valley can be divided roughly into three hydrologic areas. In the northern part, ground water is forced to the surface by a basalt barrier and the sampling sites were either surface water or shallow wells. Water chemistry in this area was characterized by simple evaporation models, simple calcite-carbon dioxide models, or complex models involving carbonate and silicate minerals. The central part of the valley is filled by sedimentary material and the sampling sites were wells that are deeper than those in the northern part.
Water chemistry in this area was characterized by simple calcite-dolomite-carbon dioxide models. In the southern part, ground water enters the ESRPA. In this area, the sampling sites were wells with depths and water levels much deeper than those in the northern and central parts of the valley. The calcium and carbon water chemistry in this area was characterized by a simple calcite-carbon dioxide model, but complex calcite-silicate models more accurately accounted for mass transfer in these areas. Throughout the geochemical system, calcite precipitated if it was an active phase in the models. Carbon dioxide either precipitated (outgassed) or dissolved depending on the partial pressure of carbon dioxide in water from the modeled sites. Dolomite was an active phase only in models from the central part of the system. Generally the entire geochemical system could be modeled with either evaporative models, carbonate models, or carbonate-silicate models. In both of the latter types of models, a significant amount of calcite precipitated relative to the mass transfer to and from the other active phases. The amount of calcite precipitated in the more complex models was consistent with the amount of calcite precipitated in the simpler models. This consistency suggests that, although the simpler models can predict calcium and carbon concentrations in Birch Creek Valley ground and surface water, silicate-mineral-based models are required to account for the other constituents. The amount of mass transfer to and from the silicate mineral phases was generally small compared with that in the carbonate phases. It appears that the water chemistry of well USGS 126B represents the chemistry of water recharging the ESRPA by means of underflow from the Birch Creek Valley.
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
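The spirit of the approach, fitting a closed-form f-I curve to firing-rate data, can be illustrated with the plain (non-adaptive) LIF neuron, whose steady-rate formula is standard. The cell parameters, the synthetic "recordings", and the grid-search fit below are all hypothetical simplifications of the paper's AdEx-based procedure:

```python
import math

def lif_rate(I, tau=0.02, R=100e6, V_th=-0.050, V_reset=-0.070,
             E_L=-0.070, t_ref=0.002):
    # closed-form f-I curve of a leaky integrate-and-fire neuron:
    # zero below rheobase, 1 / (t_ref + tau * ln(...)) above it
    V_inf = E_L + R * I
    if V_inf <= V_th:
        return 0.0
    T = tau * math.log((V_inf - V_reset) / (V_inf - V_th))
    return 1.0 / (t_ref + T)

# synthetic "recorded" f-I points generated with tau = 15 ms (hypothetical cell)
currents = [i * 50e-12 for i in range(1, 11)]      # 50 pA .. 500 pA
data = [lif_rate(I, tau=0.015) for I in currents]

# fit the membrane time constant by a simple grid search over 5..40 ms
best_tau, best_err = None, float("inf")
for k in range(5, 41):
    tau = k * 1e-3
    err = sum((lif_rate(I, tau=tau) - f) ** 2 for I, f in zip(currents, data))
    if err < best_err:
        best_tau, best_err = tau, err
print(round(best_tau, 3))  # 0.015
```

Because the rate formula is evaluated in closed form, the whole fit needs no numerical integration of the membrane equation, which is the source of the speedup the abstract reports.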
Symmetry in the Generalized Rotor Model for Extremely Floppy Molecules
NASA Astrophysics Data System (ADS)
Schmiedt, Hanno; Jensen, Per; Schlemmer, Stephan
2016-06-01
Protonated methane CH_5^+ is unique: It is an extremely fluxional molecule. All attempts to assign quantum numbers to the high-resolution transitions obtained over the last 20 years have failed because molecular rotation and vibration cannot be separated in the conventional way. The first step towards a theoretical description is to include internal rotational degrees of freedom into the overall ones, which can be used to formulate a fundamentally new zero order approximation for the (now) generalized rotational states and energies. Predictions from this simple five-dimensional rotor model compare very favorably with the combination differences of protonated methane found in recent low temperature experiments. This talk will focus on symmetry aspects and implications of permutation symmetry for the generalized rotational states. Furthermore, refinements of the theory will be discussed, ranging from the generalization to even higher-dimensional rotors to explicit symmetry breaking and corresponding energy splittings. The latter includes the link to well-known theories of internal rotation dynamics and will show the general validity of the presented theory. Schmiedt, H., et al.; J. Chem. Phys. 143 (15), 154302 (2015) Wodraszka, R. et al.; J. Phys. Chem. Lett. 6, 4229-4232 (2015) Asvany, O. et al.; Science, 347, (6228), 1346-1349 (2015)
Unidirectional random growth with resetting
NASA Astrophysics Data System (ADS)
Biró, T. S.; Néda, Z.
2018-06-01
We review stochastic processes without detailed balance condition and derive their H-theorem. We obtain stationary distributions and investigate their stability in terms of generalized entropic distances beyond the Kullback-Leibler formula. A simple stochastic model with local growth rates and direct resetting to the ground state is investigated and applied to various networks, scientific citations and Facebook popularity, hadronic yields in high energy particle reactions, income and wealth distributions, biodiversity and settlement size distributions.
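The simplest version of such a process, growth at a constant rate with direct resetting to the ground state, has a geometric stationary distribution, which a short simulation of the embedded jump chain can verify. The rates and step count below are illustrative, not taken from the paper:

```python
import random

random.seed(42)
mu, r = 1.0, 0.5           # growth and resetting rates (illustrative values)
q = mu / (mu + r)          # predicted geometric ratio of the stationary state

# embedded jump chain: since the total exit rate mu + r is the same in every
# state, its occupation statistics match the continuous-time stationary state
n, counts, steps = 0, {}, 200000
for _ in range(steps):
    if random.random() < q:
        n += 1             # local growth n -> n + 1
    else:
        n = 0              # direct resetting to the ground state
    counts[n] = counts.get(n, 0) + 1

p0 = counts.get(0, 0) / steps
print(round(p0, 2))        # theory: p0 = 1 - q = r / (mu + r), here 1/3
```

Detailed balance fails here (the reset jump has no reverse elementary step), yet the stationary distribution exists and is stable, the situation the H-theorem in the paper addresses.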
2008-09-01
rich mix of medical services that range from simple ambulatory visits to plastic surgery, neurosurgery, general surgery, bariatric surgery, ophthalmology... CENTER SAN DIEGO NMCSD is a 266-bed tertiary care facility providing patient services ranging from same-day surgery to brain surgery. The hospital... orthopedics, cardiology, thoracic surgery, vascular surgery, transient ischemic attack/cerebrovascular accident (TIA/CVA), OB/GYN, urology, non
NASA Technical Reports Server (NTRS)
Golombeck, M.; Rapp, D.
1996-01-01
The size-frequency distributions of rocks at the Viking landing sites and at a variety of rocky locations on Earth that formed from a number of geologic processes all have the general shape of simple exponential curves. These curves have been combined with remote sensing data and models of rock abundance to predict the frequency of boulders potentially hazardous to future Mars landers and rovers.
Elastic and viscoelastic calculations of stresses in sedimentary basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis, and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and a more complete formulation, such as this study, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.
A Simple Demonstration of a General Rule for the Variation of Magnetic Field with Distance
ERIC Educational Resources Information Center
Kodama, K.
2009-01-01
We describe a simple experiment demonstrating the variation in magnitude of a magnetic field with distance. The method described requires only an ordinary magnetic compass and a permanent magnet. The proposed graphical analysis illustrates a unique method for deducing a general rule of magnetostatics. (Contains 1 table and 6 figures.)
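The graphical analysis alluded to can be mimicked numerically: if the compass deflection satisfies tan(theta) = B_magnet / B_Earth, then a log-log fit of tan(theta) against distance recovers the exponent of the distance dependence. The field constant and geomagnetic value below are illustrative, and the synthetic data assume an ideal dipole:

```python
import math

# synthetic compass deflections for a dipole field B(r) = k / r**3
# superposed on the horizontal geomagnetic field B_E (hypothetical values)
k, B_E = 4e-7, 2e-5
radii = [0.10, 0.15, 0.20, 0.25, 0.30]          # metres
deflections = [math.atan((k / r**3) / B_E) for r in radii]

# recover the power law: log(tan(theta)) vs log(r) is a line of slope -n
xs = [math.log(r) for r in radii]
ys = [math.log(math.tan(t)) for t in deflections]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))  # -3.0, the dipole inverse-cube law
```

With real compass readings the fitted slope would only approximate -3, and a non-dipolar source (e.g., a long bar magnet at close range) would show up as a different exponent.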
METALLICITY GRADIENTS THROUGH DISK INSTABILITY: A SIMPLE MODEL FOR THE MILKY WAY'S BOXY BULGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Valpuesta, Inma; Gerhard, Ortwin, E-mail: imv@mpe.mpg.de, E-mail: gerhard@mpe.mpg.de
2013-03-20
Observations show a clear vertical metallicity gradient in the Galactic bulge, which is often taken as a signature of dissipative processes in the formation of a classical bulge. Various evidence shows, however, that the Milky Way is a barred galaxy with a boxy bulge representing the inner three-dimensional part of the bar. Here we show with a secular evolution N-body model that a boxy bulge formed through bar and buckling instabilities can show vertical metallicity gradients similar to the observed gradient if the initial axisymmetric disk had a comparable radial metallicity gradient. In this framework, the range of metallicities in bulge fields constrains the chemical structure of the Galactic disk at early times before bar formation. Our secular evolution model was previously shown to reproduce inner Galaxy star counts and we show here that it also has cylindrical rotation. We use it to predict a full mean metallicity map across the Galactic bulge from a simple metallicity model for the initial disk. This map shows a general outward gradient on the sky as well as longitudinal perspective asymmetries. We also briefly comment on interpreting metallicity gradient observations in external boxy bulges.
A hydrodynamic model for cooperating solidary countries
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele
2017-07-01
The goal of international trade theories is to explain the exchange of goods and services between different countries, with each aiming to benefit from it. Although the idea is very simple and has been known since ancient times, smart policy and business strategies need to be implemented by each party, resulting in a complex and far-from-obvious interplay. In order to understand such complexity, different theories have been developed since the sixteenth century, and today new ideas still continue to enter the game. Among them, the so-called classical theories are country-based and range from the Absolute and Comparative Advantage theories of A. Smith and D. Ricardo to the Factor Proportions theory of E. Heckscher and B. Ohlin. In this work we build a simple hydrodynamic model, able to reproduce the main conclusions of Comparative Advantage theory in its simplest setup, i.e., a two-country world with country A and country B exchanging two goods within a genuine exchange-based economy and a trade flow ruled only by market forces. The model is further generalized by introducing money, in order to discuss its role in shaping trade patterns. Advantages and drawbacks of the model are discussed, together with perspectives for its improvement.
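The Comparative Advantage conclusion that the model is built to reproduce reduces to a few lines of arithmetic: each country exports the good for which its opportunity cost is lower. The labour-hour numbers below are hypothetical Ricardian textbook values, not taken from the paper:

```python
# labour hours required per unit of each good (hypothetical Ricardian numbers)
hours = {
    "A": {"wine": 2.0, "cloth": 4.0},
    "B": {"wine": 6.0, "cloth": 3.0},
}

def opportunity_cost(country, good, other):
    # units of `other` forgone to produce one unit of `good`
    return hours[country][good] / hours[country][other]

for c in ("A", "B"):
    print(c, opportunity_cost(c, "wine", "cloth"))
# A's opportunity cost of wine (0.5 cloth) is below B's (2.0 cloth),
# so under free trade A specializes in wine and B in cloth
```

Note that B is less productive in both goods, yet still gains by exporting cloth; it is the ratio of labour costs, not the absolute levels, that fixes the trade pattern the hydrodynamic model reproduces.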
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS; Gneiting et al., 2005), and Bayesian Model Averaging (BMA; Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, and computed with the FAO 56 Penman-Monteith equation, are adopted as baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at national scale.
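The simplest of the compared techniques, additive bias correction against a training period, amounts to subtracting the mean forecast error. A toy sketch with made-up ETo values (the paper's actual data come from TIGGE forecasts and USCRN-based Penman-Monteith estimates):

```python
# simple additive bias correction of ETo forecasts against observations
# (toy numbers in mm/day; illustrative only)
train_fc = [5.1, 4.8, 6.0, 5.5, 4.9]   # training-period forecasts
train_obs = [4.6, 4.4, 5.4, 5.0, 4.6]  # matching observations

# mean error of the training period is the additive bias
bias = sum(f - o for f, o in zip(train_fc, train_obs)) / len(train_fc)

def correct(forecast):
    # apply the same constant shift to any new forecast
    return forecast - bias

print(round(bias, 2))          # 0.46
print(round(correct(5.6), 2))  # 5.14
```

EMOS and BMA go further by also calibrating the spread of the ensemble, not just its mean, which is why they tend to outperform this constant shift when the raw ensemble is over- or under-dispersive.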
Spatial Evolution of Human Dialects
NASA Astrophysics Data System (ADS)
Burridge, James
2017-07-01
The geographical pattern of human dialects is a result of history. Here, we formulate a simple spatial model of language change which shows that the final result of this historical evolution may, to some extent, be predictable. The model shows that the boundaries of language dialect regions are controlled by a length minimizing effect analogous to surface tension, mediated by variations in population density which can induce curvature, and by the shape of coastline or similar borders. The predictability of dialect regions arises because these effects will drive many complex, randomized early states toward one of a smaller number of stable final configurations. The model is able to reproduce observations and predictions of dialectologists. These include dialect continua, isogloss bundling, fanning, the wavelike spread of dialect features from cities, and the impact of human movement on the number of dialects that an area can support. The model also provides an analytical form for Séguy's curve giving the relationship between geographical and linguistic distance, and a generalization of the curve to account for the presence of a population center. A simple modification allows us to analytically characterize the variation of language use by age in an area undergoing linguistic change.
Anticipatory Cognitive Systems: a Theoretical Model
NASA Astrophysics Data System (ADS)
Terenzi, Graziano
This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
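The equisolid-angle projection mentioned above maps the sun's zenith angle theta to an image radius r = 2 f sin(theta/2), which is what makes a sun-based calibration workable: each detected sun position pins down one (theta, r) pair. A minimal sketch with an illustrative focal length (units and values are hypothetical):

```python
import math

def equisolid_radius(theta, f):
    # equisolid-angle fisheye projection: r = 2 * f * sin(theta / 2)
    return 2.0 * f * math.sin(theta / 2.0)

def invert_radius(r, f):
    # recover the incidence angle from the measured image radius
    return 2.0 * math.asin(r / (2.0 * f))

f = 1.4  # focal length in mm (illustrative)
for zenith_deg in (0, 30, 60, 90):
    th = math.radians(zenith_deg)
    r = equisolid_radius(th, f)
    assert abs(invert_radius(r, f) - th) < 1e-12  # projection round-trips
    print(zenith_deg, round(r, 3))
```

In the actual calibration the solar position algorithm supplies theta from time and location, the image processing supplies r, and the camera parameters (focal length, principal point, orientation) are then fit to many such pairs collected over a day.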