Sample records for model assuming local

  1. A Multilevel Testlet Model for Dual Local Dependence

    ERIC Educational Resources Information Center

    Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying

    2012-01-01

    The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
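
    A minimal simulation sketch (not the authors' model or code) may help make the two dependence sources concrete: a Rasch-type testlet model in which a testlet-specific random effect induces local item dependence and a cluster-level random effect induces local person dependence. All distributions and parameter values below are illustrative assumptions.

      # Hedged sketch: dual local dependence in a Rasch-type testlet model.
      # Testlet effects (gamma) induce local item dependence; cluster effects induce
      # local person dependence. All parameter values are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      n_clusters, persons_per_cluster = 30, 20
      n_testlets, items_per_testlet = 4, 5

      b = rng.normal(0.0, 1.0, n_testlets * items_per_testlet)        # item difficulties
      cluster_eff = rng.normal(0.0, 0.5, n_clusters)                   # local person dependence (cluster sampling)
      theta = np.repeat(cluster_eff, persons_per_cluster) \
              + rng.normal(0.0, 1.0, n_clusters * persons_per_cluster) # person abilities
      gamma = rng.normal(0.0, 0.7, (theta.size, n_testlets))           # local item dependence (testlet effects)

      testlet_of_item = np.repeat(np.arange(n_testlets), items_per_testlet)
      logits = theta[:, None] - b[None, :] + gamma[:, testlet_of_item]
      responses = (rng.random(logits.shape) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
      print("response matrix:", responses.shape, " proportion correct:", round(responses.mean(), 3))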

  2. Modeling Ignition of HMX with the Gibbs Formulation

    NASA Astrophysics Data System (ADS)

    Lee, Kibaek; Stewart, D. Scott

    2017-06-01

We present an HMX model based on the Gibbs formulation in which the stress tensor and temperature are assumed to be in local equilibrium, but phase/chemical changes are not assumed to be in equilibrium. We assume multiple components for HMX, including the beta and delta phases, the liquid phase, and the gas phase of HMX, as well as its gaseous reaction products. An isotropic small-strain solid model, a modified Fried-Howard liquid EOS, and an ideal gas EOS are used for the relevant components. Phase/chemical changes are characterized as reactions, each with its own reaction rate. A Maxwell-Stefan model is used for diffusion. Excited gas products in the local domain drive the unreacted HMX solid toward the ignition event. The density of the mixture, stress, strain, displacement, mass fractions, and temperature are computed in a 1D domain with time histories. Work supported by the Office of Naval Research and the Air Force Office of Scientific Research.

  3. STEADY-STATE MODEL OF SOLAR WIND ELECTRONS REVISITED

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Peter H.; Kim, Sunjung; Choe, G. S., E-mail: yoonp@umd.edu

    2015-10-20

In a recent paper, Kim et al. put forth a steady-state model for the solar wind electrons. The model assumed local equilibrium between the halo electrons, characterized by an intermediate energy range, and the whistler-range fluctuations. The basic wave–particle interaction is assumed to be the cyclotron resonance. Similarly, it was assumed that a dynamical steady state is established between the highly energetic superhalo electrons and high-frequency Langmuir fluctuations. Comparisons with the measured solar wind electron velocity distribution function (VDF) during quiet times were also made, and reasonable agreements were obtained. In such a model, however, only the steady-state solution for the Fokker–Planck type of electron particle kinetic equation was considered. The present paper complements the previous analysis by considering both the steady-state particle and wave kinetic equations. It is shown that the model halo and superhalo electron VDFs, as well as the assumed wave intensity spectra for the whistler and Langmuir fluctuations, satisfy the quasi-linear wave kinetic equations in an approximate sense, thus further validating the local equilibrium model constructed in the paper by Kim et al.

  4. An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model

    ERIC Educational Resources Information Center

    Tao, Wei; Cao, Yi

    2016-01-01

    Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…

  5. Modelling the breakup of solid aggregates in turbulent flows

    NASA Astrophysics Data System (ADS)

Bäbler, Matthäus U.; Morbidelli, Massimo; Bałdyga, Jerzy

The breakup of solid aggregates suspended in a turbulent flow is considered. The aggregates are assumed to be small with respect to the Kolmogorov length scale and the flow is assumed to be homogeneous. Further, it is assumed that breakup is caused by hydrodynamic stresses acting on the aggregates, and breakup is therefore assumed to follow first-order kinetics, where K_B(x) is the breakup rate function and x is the aggregate mass. To model K_B(x), it is assumed that an aggregate breaks instantaneously when the surrounding flow is violent enough to create a hydrodynamic stress that exceeds a critical value required to break the aggregate. For aggregates smaller than the Kolmogorov length scale the hydrodynamic stress is determined by the viscosity and the local energy dissipation rate, whose fluctuations are highly intermittent. Hence, the first-order breakup kinetics are governed by the frequency with which the local energy dissipation rate exceeds a critical value (that corresponds to the critical stress). A multifractal model is adopted to describe the statistical properties of the local energy dissipation rate, and a power-law relation is used to relate the critical energy dissipation rate above which breakup occurs to the aggregate mass. The model leads to an expression for K_B(x) that is zero below a limiting aggregate mass and diverges for x → ∞. When simulating the breakup process, the former leads to an asymptotic mean aggregate size whose scaling with the mean energy dissipation rate differs by one third from the scaling expected in a non-fluctuating flow.
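
    A toy sketch of the exceedance-frequency idea is given below; it is illustrative only, replaces the paper's multifractal statistics with a lognormal surrogate, and assumes a simple power law for the critical dissipation rate with made-up constants.

      # Hedged sketch (illustrative, not the authors' formulation): first-order breakup
      # kinetics dN/dt = -K_B(x) N, with K_B(x) proportional to the frequency with which
      # the local energy dissipation rate exceeds a critical value eps_c(x). A lognormal
      # surrogate replaces the paper's multifractal statistics, and the power law
      # eps_c(x) ~ x^(-q) together with all constants are assumptions.
      import numpy as np
      from math import erf, log, sqrt

      def exceedance_prob(eps_c, median_eps=1.0, sigma=1.0):
          # P(eps > eps_c) for a lognormal dissipation rate with the given median
          z = log(eps_c / median_eps) / (sigma * sqrt(2.0))
          return 0.5 * (1.0 - erf(z))

      def breakup_rate(x, q=0.5, c=1.0, freq=10.0):
          eps_c = c * x ** (-q)              # larger aggregates break at lower stress
          return freq * exceedance_prob(eps_c)

      x = 2.0                                # dimensionless aggregate mass
      t = np.linspace(0.0, 5.0, 100)
      survivors = np.exp(-breakup_rate(x) * t)   # fraction of aggregates of mass x not yet broken
      print(f"K_B({x}) = {breakup_rate(x):.2f}, surviving fraction at t = 5: {survivors[-1]:.3f}")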

  6. Locally Dependent Latent Trait Model and the Dutch Identity Revisited.

    ERIC Educational Resources Information Center

    Ip, Edward H.

    2002-01-01

    Proposes a class of locally dependent latent trait models for responses to psychological and educational tests. Focuses on models based on a family of conditional distributions, or kernel, that describes joint multiple item responses as a function of student latent trait, not assuming conditional independence. Also proposes an EM algorithm for…

  7. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitudes. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.

  8. New analytical solutions for chemical evolution models: characterizing the population of star-forming and passive galaxies

    NASA Astrophysics Data System (ADS)

    Spitoni, E.; Vincenzo, F.; Matteucci, F.

    2017-03-01

Context. Analytical models of chemical evolution, including inflow and outflow of gas, are important tools for studying how the metal content in galaxies evolves as a function of time. Aims: We present new analytical solutions for the evolution of the gas mass, total mass, and metallicity of a galactic system when a decaying exponential infall rate of gas and galactic winds are assumed. We apply our model to characterize a sample of local star-forming and passive galaxies from the Sloan Digital Sky Survey data, with the aim of reproducing their observed mass-metallicity relation. Methods: We derived how the two populations of star-forming and passive galaxies differ in their particular distribution of ages, formation timescales, infall masses, and mass loading factors. Results: We find that the local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies; on the other hand, the star-forming galaxies with higher masses generally show older ages and longer typical formation timescales than star-forming galaxies with lower masses. The local star-forming galaxies experience stronger galactic winds than the passive galaxy population. Exploring the effect of assuming different initial mass functions in our model, we show that to reproduce the observed mass-metallicity relation, stronger winds are required if the initial mass function is top-heavy. Finally, our analytical models predict the sample of local galaxies to lie on a tight surface in the 3D space defined by stellar metallicity, star formation rate, and stellar mass, in agreement with the well-known fundamental relation obtained when adopting the gas-phase metallicity. Conclusions: By using a new analytical model of chemical evolution, we characterize an ensemble of SDSS galaxies in terms of their infall timescales, infall masses, and mass loading factors. Local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies. Moreover, the local star-forming galaxies show stronger galactic winds than the passive galaxy population. Finally, we find that the fundamental relation between metallicity, mass, and star formation rate for these local galaxies is still valid when adopting the average galaxy stellar metallicity.
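
    As a hedged illustration of the kind of ingredients such analytical models combine (not the authors' actual solution), the sketch below integrates the gas-mass equation with a decaying exponential infall, a linear Schmidt law and a wind term proportional to the star formation rate; all symbols and parameter values are assumptions.

      # Hedged sketch (assumed parameter values, not the paper's analytical solution):
      # gas-mass evolution with a decaying exponential infall A*exp(-t/tau), a linear
      # Schmidt law psi = S*M_gas, a return fraction R, and a galactic wind with mass
      # loading factor lam:  dM_gas/dt = -(1 - R + lam)*psi + A*exp(-t/tau).
      import numpy as np
      from scipy.integrate import solve_ivp

      S, R, lam, A, tau = 1.0, 0.3, 0.8, 10.0, 3.0   # Gyr^-1, -, -, 10^9 Msun/Gyr, Gyr (assumed)

      def dMg_dt(t, Mg):
          psi = S * Mg[0]                              # star formation rate
          return [-(1.0 - R + lam) * psi + A * np.exp(-t / tau)]

      sol = solve_ivp(dMg_dt, (0.0, 13.0), [0.0], dense_output=True)
      print(f"gas mass after 13 Gyr: {sol.y[0][-1]:.2f} (same units as A*tau)")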

  9. Local Equating Using the Rasch Model, the OPLM, and the 2PL IRT Model--or--What Is It Anyway if the Model Captures Everything There Is to Know about the Test Takers?

    ERIC Educational Resources Information Center

    von Davier, Matthias; González B., Jorge; von Davier, Alina A.

    2013-01-01

    Local equating (LE) is based on Lord's criterion of equity. It defines a family of true transformations that aim at the ideal of equitable equating. van der Linden (this issue) offers a detailed discussion of common issues in observed-score equating relative to this local approach. By assuming an underlying item response theory model, one of…

  10. Local stability condition of the equilibrium of an oligopoly market with bounded rationality adjustment

    NASA Astrophysics Data System (ADS)

    Ibrahim, Adyda; Saaban, Azizan; Zaibidi, Nerda Zura

    2017-11-01

This paper considers an n-firm oligopoly market where each firm produces a single homogeneous product under a constant unit cost. Nonlinearity is introduced into the model of this oligopoly market by assuming the market has an isoelastic demand function. Furthermore, instead of the usual assumption of perfectly rational firms, they are assumed to be boundedly rational in adjusting their outputs in each period. The equilibrium of this n-dimensional discrete system is obtained, and the condition for its local stability is derived.
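
    A hedged numerical sketch of a commonly used bounded-rationality (gradient) adjustment with isoelastic demand p = 1/Q and constant unit costs is given below; the specific adjustment rule and all parameter values are assumptions rather than necessarily the paper's exact specification, and local stability is checked from the numerical Jacobian at the fixed point.

      # Hedged sketch: n-firm oligopoly with isoelastic demand p = 1/Q, constant unit
      # costs c_i, and boundedly rational (gradient) output adjustment
      #   q_i(t+1) = q_i(t) + a_i * q_i(t) * d(profit_i)/d(q_i).
      # The adjustment rule and all parameter values are illustrative assumptions.
      import numpy as np

      n = 3
      c = np.array([0.4, 0.5, 0.6])          # constant unit costs (assumed)
      a = np.full(n, 0.3)                    # speeds of adjustment (assumed)

      def step(q):
          # profit_i = q_i/Q - c_i*q_i, so d(profit_i)/d(q_i) = (Q - q_i)/Q^2 - c_i
          Q = q.sum()
          marginal_profit = (Q - q) / Q ** 2 - c
          return q + a * q * marginal_profit

      q = np.full(n, 0.3)                    # interior starting outputs
      for _ in range(2000):
          q = step(q)
      print("equilibrium estimate:", np.round(q, 3))

      # local stability at the fixed point: all eigenvalues of the Jacobian inside the unit circle
      eps = 1e-6
      J = np.zeros((n, n))
      for j in range(n):
          dq = np.zeros(n); dq[j] = eps
          J[:, j] = (step(q + dq) - step(q - dq)) / (2 * eps)
      print("spectral radius:", round(float(np.abs(np.linalg.eigvals(J)).max()), 3))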

  11. A Comparison of Different Psychometric Approaches to Modeling Testlet Structures: An Example with C-Tests

    ERIC Educational Resources Information Center

    Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan

    2014-01-01

    C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…

  12. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative ``a-value," the slope or ``b-value," and a ``corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
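
    As a hedged illustration of a magnitude distribution controlled by an a-value, a b-value and a corner magnitude, the sketch below uses a tapered Gutenberg-Richter form; this is one common parameterization, not necessarily the one used by the authors, and all values are illustrative.

      # Hedged sketch: a tapered Gutenberg-Richter cumulative rate with a multiplicative
      # a-value, slope b, and a corner magnitude above which the rate falls off rapidly.
      # This is a common parameterization, not necessarily the paper's exact form.
      import numpy as np

      def rate_above(m, a=4.0, b=1.0, m_corner=8.0, m_min=5.4):
          # cumulative rate of events with magnitude >= m (per unit area and time), illustrative values
          gr = 10.0 ** (a - b * m)                            # classical Gutenberg-Richter part
          moment = lambda mag: 10.0 ** (1.5 * mag + 9.1)      # seismic moment (N m)
          taper = np.exp((moment(m_min) - moment(m)) / moment(m_corner))   # exponential taper in moment
          return gr * taper

      for m in (5.4, 6.5, 7.5, 8.5):
          print(f"m >= {m}: rate ~ {rate_above(m):.3e}")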

  13. Growing hair on the extremal BTZ black hole

    NASA Astrophysics Data System (ADS)

    Harms, B.; Stern, A.

    2017-06-01

    We show that the nonlinear σ-model in an asymptotically AdS3 space-time admits a novel local symmetry. The field action is assumed to be quartic in the nonlinear σ-model fields and minimally coupled to gravity. The local symmetry transformation simultaneously twists the nonlinear σ-model fields and changes the space-time metric, and it can be used to map the extremal BTZ black hole to infinitely many hairy black hole solutions.

  14. Salience from the decision perspective: You know where it is before you know it is there.

    PubMed

    Zehetleitner, Michael; Müller, Hermann J

    2010-12-31

    In visual search for feature contrast ("odd-one-out") singletons, identical manipulations of salience, whether by varying target-distractor similarity or dimensional redundancy of target definition, had smaller effects on reaction times (RTs) for binary localization decisions than for yes/no detection decisions. According to formal models of binary decisions, identical differences in drift rates would yield larger RT differences for slow than for fast decisions. From this principle and the present findings, it follows that decisions on the presence of feature contrast singletons are slower than decisions on their location. This is at variance with two classes of standard models of visual search and object recognition that assume a serial cascade of first detection, then localization and identification of a target object, but also inconsistent with models assuming that as soon as a target is detected all its properties, spatial as well as non-spatial (e.g., its category), are available immediately. As an alternative, we propose a model of detection and localization tasks based on random walk processes, which can account for the present findings.
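
    A hedged sketch of the cited principle from formal models of binary decisions: with the same change in drift rate, a bounded random-walk (diffusion) decision process yields a larger mean-RT difference when the baseline decision is slow (high bound) than when it is fast (low bound). The simulation below is illustrative; bounds, drift rates and noise level are assumptions.

      # Hedged sketch: identical drift-rate differences produce larger RT differences
      # for slow decisions (high bound) than for fast decisions (low bound).
      import numpy as np

      def mean_rt(drift, bound, dt=0.005, sigma=1.0, n=5000, seed=1):
          # symmetric-bound random walk (diffusion) decision process; returns mean decision time
          rng = np.random.default_rng(seed)
          x, t = np.zeros(n), np.zeros(n)
          active = np.ones(n, dtype=bool)
          while active.any():
              k = int(active.sum())
              x[active] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(k)
              t[active] += dt
              active &= np.abs(x) < bound
          return t.mean()

      for bound in (0.8, 1.6):                 # low bound = fast decisions, high bound = slow decisions
          d_rt = mean_rt(drift=1.0, bound=bound) - mean_rt(drift=1.4, bound=bound)
          print(f"bound = {bound}: mean-RT difference for the same drift change ~ {1000 * d_rt:.0f} ms")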

  15. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  16. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, to a very good approximation, with the simple assumption that the strain rate is constant.
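
    A hedged toy version of the mechanism (not the authors' finite-element model): parallel Maxwell elements with a broad distribution of relaxation times and a local Eyring stress dependence; under a constant strain rate the slow elements accumulate more local stress and are accelerated more, which narrows the distribution. The Eyring form and all parameter values below are assumptions.

      # Hedged sketch: narrowing of the relaxation-time distribution under load,
      # using parallel Maxwell elements with an Eyring-type stress acceleration
      # tau_i = tau0_i * x / sinh(x), x = |sigma_i| * V / (kB*T). Values are assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      n, G, strain_rate, V_over_kT = 2000, 1.0, 0.05, 5.0   # assumed units
      tau0 = 10.0 ** rng.uniform(-1.0, 3.0, n)               # broad initial distribution (4 decades)

      def eyring_tau(tau0, sigma):
          # local relaxation time accelerated by the local stress (Eyring-type form)
          x = np.maximum(np.abs(sigma) * V_over_kT, 1e-12)
          return tau0 * x / np.sinh(x)

      sigma, dt = np.zeros(n), 0.01
      for _ in range(5000):                                  # d(sigma_i)/dt = G*strain_rate - sigma_i/tau_i
          sigma += dt * (G * strain_rate - sigma / eyring_tau(tau0, sigma))

      tau_eff = eyring_tau(tau0, sigma)
      print("decades of relaxation time initially  :", round(np.log10(tau0.max() / tau0.min()), 1))
      print("decades of relaxation time under load :", round(np.log10(tau_eff.max() / tau_eff.min()), 1))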

  17. A dynamic model of a cantilever beam with a closed, embedded horizontal crack including local flexibilities at crack tips

    NASA Astrophysics Data System (ADS)

    Liu, J.; Zhu, W. D.; Charalambides, P. G.; Shao, Y. M.; Xu, Y. F.; Fang, X. M.

    2016-11-01

As one of the major failure modes of mechanical structures subjected to periodic loads, embedded cracks due to fatigue can cause catastrophic failure of machinery. Understanding the dynamic characteristics of a structure with an embedded crack is helpful for early crack detection and diagnosis. In this work, a new three-segment beam model with local flexibilities at crack tips is developed to investigate the vibration of a cantilever beam with a closed, fully embedded horizontal crack, which is assumed not to be located at its clamped or free end or distributed near its top or bottom side. The three-segment beam model is assumed to be a linear elastic system, and it does not account for the nonlinear crack closure effect; the top and bottom segments always stay in contact at their interface during the beam vibration. It can model the effects of local deformations in the vicinity of the crack tips, which cannot be captured by previous methods in the literature. The middle segment of the beam containing the crack is modeled by a mechanically consistent, reduced bending moment. Each beam segment is assumed to be an Euler-Bernoulli beam, and the compliances at the crack tips are analytically determined using a J-integral approach and verified using commercial finite element software. Using compatibility conditions at the crack tips and the transfer matrix method, the natural frequencies and mode shapes of the cracked cantilever beam are obtained. The three-segment beam model is used to investigate the effects of local flexibilities at crack tips on the first three natural frequencies and mode shapes of the cracked cantilever beam. A stationary wavelet transform (SWT) method is used to process the mode shapes of the cracked cantilever beam; jumps in single-level SWT decomposition detail coefficients can be used to identify the length and location of an embedded horizontal crack.

  18. Healing of a mechano-responsive material

    NASA Astrophysics Data System (ADS)

    Vetter, A.; Sander, O.; Duda, G. N.; Weinkamer, R.

    2013-12-01

While the contribution of physics to modeling the fracture of materials is significant, the “reversed” process of healing is hardly investigated. Inspired by fracture healing that occurs as a self-repair process in nature, e.g. in bone, we computationally study the conditions under which a material can repair itself. In our model the material around a fracture is assumed to be mechano-responsive: it processes information on i) the local stiffness and ii) the local strain and responds by local stiffening. Depending on how information i) and ii) is processed, healing evolves via fundamentally different paths.

  19. The Local Structure of Globalization. The Network Dynamics of Foreign Direct Investments in the International Electricity Industry

    NASA Astrophysics Data System (ADS)

    Koskinen, Johan; Lomi, Alessandro

    2013-05-01

    We study the evolution of the network of foreign direct investment (FDI) in the international electricity industry during the period 1994-2003. We assume that the ties in the network of investment relations between countries are created and deleted in continuous time, according to a conditional Gibbs distribution. This assumption allows us to take simultaneously into account the aggregate predictions of the well-established gravity model of international trade as well as local dependencies between network ties connecting the countries in our sample. According to the modified version of the gravity model that we specify, the probability of observing an investment tie between two countries depends on the mass of the economies involved, their physical distance, and the tendency of the network to self-organize into local configurations of network ties. While the limiting distribution of the data generating process is an exponential random graph model, we do not assume the system to be in equilibrium. We find evidence of the effects of the standard gravity model of international trade on evolution of the global FDI network. However, we also provide evidence of significant dyadic and extra-dyadic dependencies between investment ties that are typically ignored in available research. We show that local dependencies between national electricity industries are sufficient for explaining global properties of the network of foreign direct investments. We also show, however, that network dependencies vary significantly over time giving rise to a time-heterogeneous localized process of network evolution.

  20. Modeling of spacecraft charging

    NASA Technical Reports Server (NTRS)

    Whipple, E. C., Jr.

    1977-01-01

    Three types of modeling of spacecraft charging are discussed: statistical models, parametric models, and physical models. Local time dependence of circuit upset for DoD and communication satellites, and electron current to a sphere with an assumed Debye potential distribution are presented. Four regions were involved in spacecraft charging: (1) undisturbed plasma, (2) plasma sheath region, (3) spacecraft surface, and (4) spacecraft equivalent circuit.

  1. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
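
    A hedged sketch of the smoothing-and-extrapolation idea on synthetic data is given below; the paper's neural-network prediction step is replaced here by a plain extrapolation of the locally linearized trend to an assumed failure threshold, and all data and settings are illustrative.

      # Hedged sketch: smooth a synthetic vibration-degradation signal with a local
      # linear (kernel-weighted least-squares) estimator, then extrapolate the local
      # trend to an assumed failure threshold to estimate the RUL. The neural-network
      # step of the paper is replaced by a plain linear extrapolation.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(0.0, 200.0)
      signal = 0.02 * np.exp(0.025 * t) + 0.05 * rng.standard_normal(t.size)   # synthetic degradation data

      def local_linear(t_query, t_obs, y_obs, bandwidth=15.0):
          # kernel-weighted least-squares fit of a line around t_query -> (level, slope)
          sw = np.sqrt(np.exp(-0.5 * ((t_obs - t_query) / bandwidth) ** 2))
          X = np.column_stack([np.ones_like(t_obs), t_obs - t_query])
          beta, *_ = np.linalg.lstsq(X * sw[:, None], y_obs * sw, rcond=None)
          return beta[0], beta[1]

      level, slope = local_linear(t[-1], t, signal)        # smoothed level and local trend at the last time
      threshold = 5.0                                      # assumed failure level of vibration acceleration
      rul = (threshold - level) / slope if slope > 0 else np.inf
      print(f"smoothed level {level:.2f}, local slope {slope:.3f}, estimated RUL ~ {rul:.0f} time units")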

  2. Bell Test experiments explained without entanglement

    NASA Astrophysics Data System (ADS)

    Boyd, Jeffrey

    2011-04-01

John Bell proposed a test of what was called "local realism." However, that is a different view of reality than the one we hold. Bell incorrectly assumed the validity of wave-particle dualism. According to our model, waves are independent of particles; wave interference precedes the emission of a particle. This results in two conclusions. First, the proposed inequalities that apply to "local realism" in Bell's theorem do not apply to this model. The alleged mathematics of "local realism" is therefore wrong. Second, we can explain the Bell Test experimental results (such as the experiments done at Innsbruck) without any need for entanglement, non-locality, or particle superposition.

  3. Relativity, anomalies and objectivity loophole in recent tests of local realism

    NASA Astrophysics Data System (ADS)

    Bednorz, Adam

    2017-11-01

Local realism is in conflict with special quantum Bell-type models. Recently, several experiments have demonstrated violation of local realism, provided we trust their setups and assume that special relativity is valid. In this paper we question the assumption of relativity, point out anomalies that have not been commented on, and show that the experiments have not closed the objectivity loophole because clonability of the result has not been demonstrated. We propose several improvements for further experimental tests of local realism to make the violation more convincing.

  4. The Army’s Local Economic Effects

    DTIC Science & Technology

    2015-01-01

    region. An I/O model is a representation of the linkages between major sectors of a regional economy in which each sector of the regional economy is...assumed to require inputs from the other sectors to produce output. These inputs can come from local sources within the region, from other domestic...ment of the Army in a congressional district. An I/O model is a representation of the linkages between major sectors of a regional economy (and, to a

  5. Local Infrasound Variability Related to In Situ Atmospheric Observation

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas

    2018-04-01

Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on the source inversion uncertainty has never been accounted for due to the lack of quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation by repeated explosion experiments with a dense acoustic network and in situ atmospheric measurement. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and address the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a non-negligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role of local turbulence.

  6. Anomalous Anderson localization

    NASA Astrophysics Data System (ADS)

    Deng, Wenji

    2000-04-01

We propose a generalized Anderson model and study numerically the localization phenomena in one dimension. In our model, not all sites carry a random on-site energy. The on-site energy ε_n on the nth site is assigned as follows. If n + P - 1 = 0 (mod P), where P is a positive integer, ε_n is assumed to be randomly distributed between -W/2 and W/2. On the other lattice sites, the site energy is fixed, say ε_n = 0. The localization length ξ, defined through |t|^2 = e^(-2L/ξ), where t is the transmission coefficient, is calculated using the transfer matrix method. It is found that the single-electron states with wave vectors k = π/P, 2π/P, …, (P-1)π/P are no longer localized as in the standard Anderson model. Compared with the smooth localization length spectrum of the Anderson model, there appear P-1 sharp peaks periodically located at these P-1 values of the wave vector in the localization length spectrum of the generalized Anderson model with parameter P.
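
    A hedged sketch of the transfer-matrix calculation for this generalized model (disorder only on sites with n = 1 mod P, hopping set to one) is shown below; the localization length is estimated from the Lyapunov exponent, consistent with |t|^2 = e^(-2L/ξ), and all parameter values are illustrative.

      # Hedged sketch: localization length of a 1-D tight-binding chain in which only
      # sites with n = 1 (mod P) carry a random on-site energy in [-W/2, W/2].
      import numpy as np

      def localization_length(E, P=2, W=1.0, L=100000, seed=0):
          rng = np.random.default_rng(seed)
          n = np.arange(1, L + 1)
          eps = np.where(n % P == 1, rng.uniform(-W / 2, W / 2, L), 0.0)   # disorder only on sites n = 1 (mod P)
          psi_prev, psi, log_growth = 1.0, 1.0, 0.0
          for e in eps:                                   # psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}
              psi_prev, psi = psi, (E - e) * psi - psi_prev
              norm = max(abs(psi), abs(psi_prev))
              if norm > 1e50:                             # renormalize to avoid overflow
                  psi /= norm; psi_prev /= norm; log_growth += np.log(norm)
          log_growth += np.log(max(abs(psi), abs(psi_prev)))
          return L / log_growth                           # xi = 1/(Lyapunov exponent)

      # E = 2*cos(k); the special wave vector k = pi/P should give an anomalously large xi
      for k in (np.pi / 2, np.pi / 3):
          print(f"k = {k:.3f}  ->  xi ~ {localization_length(2 * np.cos(k), P=2):.0f} sites")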

  7. Energy approach to brittle fracture in strain-gradient modelling.

    PubMed

    Placidi, Luca; Barchiesi, Emilio

    2018-02-01

In this paper, we exploit some results in the theory of irreversible phenomena to address the study of quasi-static brittle fracture propagation in a two-dimensional isotropic continuum. The elastic strain energy density of the body has been assumed to be geometrically nonlinear and to depend on the strain gradient. Such generalized continua often arise in the description of microstructured media. These materials possess an intrinsic length scale, which determines the size of internal boundary layers. In particular, the non-locality conferred by this internal length scale avoids the concentration of deformations, which is usually observed when dealing with local models and which leads to mesh dependency. A scalar Lagrangian damage field, ranging from zero to one, is introduced to describe the internal state of structural degradation of the material. Standard Lamé and second-gradient elastic coefficients are all assumed to decrease as damage increases and to be locally zero if the value attained by damage is one. This last situation is associated with crack formation and/or propagation. Numerical solutions of the model are provided in the case of an obliquely notched rectangular specimen subjected to monotonic tensile and shear loading tests, and brittle fracture propagation is discussed.

  8. A 1-D evolutionary model for icy satellites, applied to Enceladus

    NASA Astrophysics Data System (ADS)

    Malamud, Uri; Prialnik, Dina

    2016-04-01

    We develop a long-term 1-D evolution model for icy satellites that couples multiple processes: water migration and differentiation, geochemical reactions and silicate phase transitions, compaction by self-gravity, and ablation. The model further considers the following energy sources and sinks: tidal heating, radiogenic heating, geochemical energy released by serpentinization or absorbed by mineral dehydration, gravitational energy and insolation, and heat transport by conduction, convection, and advection. We apply the model to Enceladus, by guessing the initial conditions that would render a structure compatible with present-day observations, assuming the initial structure to have been homogeneous. Assuming the satellite has been losing water continually along its evolution, we postulate that it was formed as a more massive, more icy and more porous satellite, and gradually transformed into its present day state due to sustained long-term tidal heating. We consider several initial compositions and evolution scenarios and follow the evolution for the age of the Solar System, testing the present day model results against the available observational constraints. Our model shows the present configuration to be differentiated into a pure icy mantle, several tens of km thick, overlying a rocky core, composed of dehydrated rock at the center and hydrated rock in the outer part. For Enceladus, it predicts a higher rock/ice mass ratio than previously assumed and a thinner ice mantle, compatible with recent estimates based on gravity field measurements. Although, obviously, the model cannot be used to explain local phenomena, it sheds light on the internal structure invoked in explanations of localized features and activities.

  9. Fine-scale population dynamics in a marine fish species inferred from dynamic state-space models.

    PubMed

    Rogers, Lauren A; Storvik, Geir O; Knutsen, Halvor; Olsen, Esben M; Stenseth, Nils C

    2017-07-01

    Identifying the spatial scale of population structuring is critical for the conservation of natural populations and for drawing accurate ecological inferences. However, population studies often use spatially aggregated data to draw inferences about population trends and drivers, potentially masking ecologically relevant population sub-structure and dynamics. The goals of this study were to investigate how population dynamics models with and without spatial structure affect inferences on population trends and the identification of intrinsic drivers of population dynamics (e.g. density dependence). Specifically, we developed dynamic, age-structured, state-space models to test different hypotheses regarding the spatial structure of a population complex of coastal Atlantic cod (Gadus morhua). Data were from a 93-year survey of juvenile (age 0 and 1) cod sampled along >200 km of the Norwegian Skagerrak coast. We compared two models: one which assumes all sampled cod belong to one larger population, and a second which assumes that each fjord contains a unique population with locally determined dynamics. Using the best supported model, we then reconstructed the historical spatial and temporal dynamics of Skagerrak coastal cod. Cross-validation showed that the spatially structured model with local dynamics had better predictive ability. Furthermore, posterior predictive checks showed that a model which assumes one homogeneous population failed to capture the spatial correlation pattern present in the survey data. The spatially structured model indicated that population trends differed markedly among fjords, as did estimates of population parameters including density-dependent survival. Recent biomass was estimated to be at a near-record low all along the coast, but the finer scale model indicated that the decline occurred at different times in different regions. Warm temperatures were associated with poor recruitment, but local changes in habitat and fishing pressure may have played a role in driving local dynamics. More generally, we demonstrated how state-space models can be used to test evidence for population spatial structure based on survey time-series data. Our study shows the importance of considering spatially structured dynamics, as the inferences from such an approach can lead to a different ecological understanding of the drivers of population declines, and fundamentally different management actions to restore populations. © 2017 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  10. Variational Bayesian Inversion of Quasi-Localized Seismic Attributes for the Spatial Distribution of Geological Facies

    NASA Astrophysics Data System (ADS)

    Nawaz, Muhammad Atif; Curtis, Andrew

    2018-04-01

    We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, by showing how the usual probabilistic inverse problem can be solved using an optimization framework still providing full probabilistic results. Our mathematical model consists of seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.

  11. Locality and Unitarity of Scattering Amplitudes from Singularities and Gauge Invariance

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Rodina, Laurentiu; Trnka, Jaroslav

    2018-06-01

    We conjecture that the leading two-derivative tree-level amplitudes for gluons and gravitons can be derived from gauge invariance together with mild assumptions on their singularity structure. Assuming locality (that the singularities are associated with the poles of cubic graphs), we prove that gauge invariance in just n -1 particles together with minimal power counting uniquely fixes the amplitude. Unitarity in the form of factorization then follows from locality and gauge invariance. We also give evidence for a stronger conjecture: assuming only that singularities occur when the sum of a subset of external momenta go on shell, we show in nontrivial examples that gauge invariance and power counting demand a graph structure for singularities. Thus, both locality and unitarity emerge from singularities and gauge invariance. Similar statements hold for theories of Goldstone bosons like the nonlinear sigma model and Dirac-Born-Infeld by replacing the condition of gauge invariance with an appropriate degree of vanishing in soft limits.

  12. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

This work systematically investigates traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions between the traction- and stress-based approaches are classified. Closed-form results under the plane stress condition are also given. A generic failure criterion of either elliptic, parabolic or hyperbolic type is analyzed in a unified manner, with the classical von Mises (J2), Drucker-Prager, Mohr-Coulomb and many other frequently employed criteria recovered as its particular cases.

  13. Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.

    PubMed

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang

    2018-03-12

    Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike the classical testing theory, IRT models aggregate the item level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach by adding a graphical component to a multidimensional IRT model that can offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.

  14. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove the posterior bias and obtain a more realistic characterization of uncertainty.
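
    A hedged toy illustration of the projection idea (1-D parameter, synthetic forward models, not the authors' code) is sketched below: the K nearest dictionary entries provide a local model-error basis via an SVD, and the projected model-error component is removed from the residual before a likelihood would be computed. K, the basis size and both forward models are assumptions.

      # Hedged sketch of the local-basis idea on a toy 1-D problem.
      import numpy as np

      rng = np.random.default_rng(0)
      x_grid = np.linspace(0.0, 1.0, 50)

      def detailed_model(m):                  # stand-in for the expensive, accurate forward model
          return np.sin(2 * np.pi * m * x_grid) + 0.3 * np.sin(6 * np.pi * m * x_grid)

      def approx_model(m):                    # stand-in for the fast approximation used in the MCMC
          return np.sin(2 * np.pi * m * x_grid)

      # dictionary of (parameter, model-error) pairs accumulated during the inversion
      dict_params = rng.uniform(0.5, 2.0, 40)
      dict_errors = np.array([detailed_model(m) - approx_model(m) for m in dict_params])

      def corrected_residual(m_test, data, K=8, n_basis=3):
          idx = np.argsort(np.abs(dict_params - m_test))[:K]        # K nearest neighbours in parameter space
          _, _, Vt = np.linalg.svd(dict_errors[idx], full_matrices=False)
          basis = Vt[:n_basis]                                      # local model-error basis
          r = data - approx_model(m_test)
          return r - basis.T @ (basis @ r)                          # remove the projected model-error component

      m_true = 1.3
      data = detailed_model(m_true) + 0.02 * rng.standard_normal(x_grid.size)
      for m in (1.3, 1.6):
          print(f"m = {m}: corrected residual norm = {np.linalg.norm(corrected_residual(m, data)):.3f}")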

  15. Strategies for sperm chemotaxis in the siphonophores and ascidians: a numerical simulation study.

    PubMed

    Ishikawa, Makiko; Tsutsui, Hidekazu; Cosson, Jacky; Oka, Yoshitaka; Morisawa, Masaaki

    2004-04-01

    Chemotactic swimming behaviors of spermatozoa toward an egg have been reported in various species. The strategies underlying these behaviors, however, are poorly understood. We focused on two types of chemotaxis, one in the siphonophores and the second in the ascidians, and then proposed two models based on experimental data. Both models assumed that the radius of the path curvature of a swimming spermatozoon depends on [Ca(2+)](i), the intracellular calcium concentration. The chemotaxis in the siphonophores could be simulated in a model that assumes that [Ca(2+)](i) depends on the local concentration of the attractant in the vicinity of the spermatozoon and that a substantial time period is required for the clearance of transient high [Ca(2+)](i). In the case of ascidians, trajectories similar to those in experiments could be adequately simulated by a variant of this model that assumes that [Ca(2+)](i) depends on the time derivative of the attractant concentration. The properties of these strategies and future problems are discussed in relation to these models.

  16. On Selective Harvesting of an Inshore-Offshore Fishery: A Bioeconomic Model

    ERIC Educational Resources Information Center

    Purohit, D.; Chaudhuri, K. S.

    2004-01-01

    A bioeconomic model is developed for the selective harvesting of a single species, inshore-offshore fishery, assuming that the growth of the species is governed by the Gompertz law. The dynamical system governing the fishery is studied in depth; the local and global stability of its non-trivial steady state are examined. Existence of a bionomic…

  17. Chimera regimes in a ring of oscillators with local nonlinear interaction

    NASA Astrophysics Data System (ADS)

    Shepelev, Igor A.; Zakharova, Anna; Vadivasova, Tatiana E.

    2017-03-01

One of the important problems concerning chimera states is the conditions for their existence and stability. Until now, it was assumed that chimeras could arise only in ensembles with a nonlocal character of interactions. However, this assumption is not exactly right. In some special cases chimeras can be realized for a local type of coupling [1-3]. We propose a simple model of an ensemble with local coupling in which chimeras are realized. This model is a ring of linear oscillators with local nonlinear unidirectional interaction. Chimera structures in the ring are found using computer simulations over a wide range of parameter values. A diagram of the regimes in the plane of control parameters is plotted, and scenarios of chimera destruction are studied as the parameters are changed.

  18. The Role of Law-of-the-Wall and Roughness Scale in the Surface Stress Model for LES of the Rough-wall Boundary Layer

    NASA Astrophysics Data System (ADS)

    Brasseur, James; Paes, Paulo; Chamecki, Marcelo

    2017-11-01

    Large-eddy simulation (LES) of the high Reynolds number rough-wall boundary layer requires both a subfilter-scale model for the unresolved inertial term and a ``surface stress model'' (SSM) for space-time local surface momentum flux. Standard SSMs assume proportionality between the local surface shear stress vector and the local resolved-scale velocity vector at the first grid level. Because the proportionality coefficient incorporates a surface roughness scale z0 within a functional form taken from law-of-the-wall (LOTW), it is commonly stated that LOTW is ``assumed,'' and therefore ``forced'' on the LES. We show that this is not the case; the LOTW form is the ``drag law'' used to relate friction velocity to mean resolved velocity at the first grid level consistent with z0 as the height where mean velocity vanishes. Whereas standard SSMs do not force LOTW on the prediction, we show that parameterized roughness does not match ``true'' z0 when LOTW is not predicted, or does not exist. By extrapolating mean velocity, we show a serious mismatch between true z0 and parameterized z0 in the presence of a spurious ``overshoot'' in normalized mean velocity gradient. We shall discuss the source of the problem and its potential resolution.
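
    A hedged sketch of the generic rough-wall SSM the abstract refers to (not a specific code's implementation): the friction velocity follows from the LOTW drag law evaluated with the resolved velocity at the first grid level and the roughness length z0, and the local surface stress is aligned against that velocity. The numbers are illustrative.

      # Hedged sketch of a standard rough-wall surface stress model (SSM).
      import numpy as np

      kappa = 0.4                                        # von Karman constant

      def surface_stress(u1, v1, z1, z0):
          # u1, v1: resolved horizontal velocity at the first grid level z1; z0: roughness length
          speed = np.hypot(u1, v1)
          u_star = kappa * speed / np.log(z1 / z0)       # LOTW drag law for the friction velocity
          tau = u_star ** 2                              # kinematic surface stress magnitude
          return u_star, -tau * u1 / speed, -tau * v1 / speed   # stress opposes the resolved velocity

      u_star, txz, tyz = surface_stress(u1=5.0, v1=1.0, z1=10.0, z0=0.1)
      print(f"u* = {u_star:.3f} m/s, surface stress = ({txz:.3f}, {tyz:.3f}) m^2/s^2")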

  19. Birefringence and hidden photons

    NASA Astrophysics Data System (ADS)

    Arza, Ariel; Gamboa, J.

    2018-05-01

    We study a model where photons interact with hidden photons and millicharged particles through a kinetic mixing term. Particularly, we focus on vacuum birefringence effects and we find a bound for the millicharged parameter assuming that hidden photons are a piece of the local dark matter density.

  20. The mechanical heterogeneity of the hard callus influences local tissue strains during bone healing: a finite element study based on sheep experiments.

    PubMed

    Vetter, A; Liu, Y; Witt, F; Manjubala, I; Sander, O; Epari, D R; Fratzl, P; Duda, G N; Weinkamer, R

    2011-02-03

    During secondary fracture healing, various tissue types including new bone are formed. The local mechanical strains play an important role in tissue proliferation and differentiation. To further our mechanobiological understanding of fracture healing, a precise assessment of local strains is mandatory. Until now, static analyses using Finite Elements (FE) have assumed homogenous material properties. With the recent quantification of both the spatial tissue patterns (Vetter et al., 2010) and the development of elastic modulus of newly formed bone during healing (Manjubala et al., 2009), it is now possible to incorporate this heterogeneity. Therefore, the aim of this study is to investigate the effect of this heterogeneity on the strain patterns at six successive healing stages. The input data of the present work stemmed from a comprehensive cross-sectional study of sheep with a tibial osteotomy (Epari et al., 2006). In our FE model, each element containing bone was described by a bulk elastic modulus, which depended on both the local area fraction and the local elastic modulus of the bone material. The obtained strains were compared with the results of hypothetical FE models assuming homogeneous material properties. The differences in the spatial distributions of the strains between the heterogeneous and homogeneous FE models were interpreted using a current mechanobiological theory (Isakson et al., 2006). This interpretation showed that considering the heterogeneity of the hard callus is most important at the intermediate stages of healing, when cartilage transforms to bone via endochondral ossification. Copyright © 2010 Elsevier Ltd. All rights reserved.

Superconductivity modelling: Homogenization of Bean's model in three dimensions, and the problem of transverse conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossavit, A.

The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a ``periodicity cell'' with periodic boundary conditions.

  2. Situating School District Resource Decision Making in Policy Context

    ERIC Educational Resources Information Center

    Spain, Angeline K.

    2016-01-01

    Decentralization and deregulation policies assume that local educational leaders make better resource decisions than state policy makers do. Conceptual models drawn from organizational theory, however, offer competing predictions about how district central office administrators are likely to leverage their professional expertise in devolved…

  3. Model specification in oral health-related quality of life research.

    PubMed

    Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan

    2009-10-01

    The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.

  4. A simple and complete model for wind turbine wakes over complex terrain

    NASA Astrophysics Data System (ADS)

    Rommelfanger, Nick; Rajborirug, Mai; Luzzatto-Fegiz, Paolo

    2017-11-01

    Simple models for turbine wakes have been used extensively in the wind energy community, both as independent tools, as well as to complement more refined and computationally-intensive techniques. These models typically prescribe empirical relations for how the wake radius grows with downstream distance x and obtain the wake velocity at each x through the application of either mass conservation, or of both mass and momentum conservation (e.g. Katić et al. 1986; Frandsen et al. 2006; Bastankhah & Porté-Agel 2014). Since these models assume a global behavior of the wake (for example, linear spreading with x) they cannot respond to local changes in background flow, as may occur over complex terrain. Instead of assuming a global wake shape, we develop a model by relying on a local assumption for the growth of the turbulent interface. To this end, we introduce to wind turbine wakes the use of the entrainment hypothesis, which has been used extensively in other areas of geophysical fluid dynamics. We obtain two coupled ordinary differential equations for mass and momentum conservation, which can be readily solved with a prescribed background pressure gradient. Our model is in good agreement with published data for the development of wakes over complex terrain.
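
    A hedged sketch of the entrainment-hypothesis closure under strong simplifications (top-hat wake profile, uniform background flow, assumed momentum-deficit flux and entrainment coefficient; not the authors' exact formulation): mass conservation with an entrainment velocity proportional to the velocity deficit, combined with conservation of the momentum-deficit flux, reduces to a single ODE for the wake radius.

      # Hedged sketch: mass conservation d/dx (r^2 u) = 2*alpha*r*(U_b - u) with the
      # entrainment hypothesis, plus the momentum-deficit constraint r^2*u*(U_b - u) = M.
      # All parameter values are assumptions.
      import numpy as np
      from scipy.integrate import solve_ivp

      U_b, alpha = 8.0, 0.08            # uniform background wind speed (m/s), entrainment coefficient (assumed)
      r0 = 50.0                         # initial wake radius (m), assumed
      M = 30000.0                       # assumed momentum-deficit flux T/(rho*pi) in m^4/s^2

      def wake_velocity(r):
          # solve u*(U_b - u) = M/r^2 for the root closest to U_b (top-hat wake)
          disc = np.sqrt(np.maximum(U_b ** 2 - 4.0 * M / r ** 2, 0.0))
          return 0.5 * (U_b + disc)

      def drdx(x, r):
          u = wake_velocity(r[0])
          du_dr = 2.0 * M / (r[0] ** 3 * (2.0 * u - U_b))          # from the momentum-deficit constraint
          # expand d/dx (r^2 u) = 2*alpha*r*(U_b - u) and solve for dr/dx
          return [2.0 * alpha * r[0] * (U_b - u) / (2.0 * r[0] * u + r[0] ** 2 * du_dr)]

      sol = solve_ivp(drdx, (0.0, 1000.0), [r0], t_eval=np.linspace(0.0, 1000.0, 6))
      for x, r in zip(sol.t, sol.y[0]):
          print(f"x = {x:6.0f} m   wake radius = {r:5.1f} m   wake velocity = {wake_velocity(r):.2f} m/s")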

Non-universal Z′ from fluxed GUTs

    NASA Astrophysics Data System (ADS)

    Crispim Romao, Miguel; King, Stephen F.; Leontaris, George K.

    2018-07-01

We make a first systematic study of non-universal TeV scale neutral gauge bosons Z′ arising naturally from a class of F-theory inspired models broken via SU(5) by flux. The phenomenological models we consider may originate from semi-local F-theory GUTs arising from a single E8 point of local enhancement, assuming the minimal Z2 monodromy in order to allow for a renormalisable top quark Yukawa coupling. We classify such non-universal anomaly-free U(1)′ models requiring a minimal low energy spectrum and also allowing for a vector-like family. We discuss to what extent such models can account for the anomalous B-decay ratios R_K and R_K*.

  6. Effective electric fields along realistic DTI-based neural trajectories for modelling the stimulation mechanisms of TMS

    NASA Astrophysics Data System (ADS)

    De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.

    2015-01-01

    In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron’s local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insights into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is in addition patient and case specific. The case study of this paper focuses on the single pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them due to the interplay of factors such as the tract’s position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
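
    The effective electric field itself is simply the projection of the induced field onto the local tract orientation. The minimal sketch below estimates tangents from streamline points and takes the dot product; the function name, inputs, and the toy field values are hypothetical, not part of the paper's pipeline.

    ```python
    import numpy as np

    def effective_field(tract_points, e_field):
        """Project the induced E-field onto the local tract orientation.

        tract_points : (N, 3) streamline coordinates (e.g., from tractography)
        e_field      : (N, 3) induced electric field sampled at those points
        Returns the signed effective field E_eff = E . t_hat at each point.
        """
        # Local tangents from finite differences along the streamline
        tangents = np.gradient(tract_points, axis=0)
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        return np.einsum("ij,ij->i", e_field, tangents)

    # Toy usage: a straight tract in a uniform field tilted 30 degrees from it
    pts = np.column_stack([np.linspace(0, 0.05, 50), np.zeros(50), np.zeros(50)])
    E = np.tile([100 * np.cos(np.pi / 6), 100 * np.sin(np.pi / 6), 0.0], (50, 1))  # V/m
    print(effective_field(pts, E)[:3])   # ~86.6 V/m along the tract
    ```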

  7. The Principal's Role in Site-Based Management.

    ERIC Educational Resources Information Center

    Drury, William R.

    1993-01-01

    In existing school-based management models, the principal's role ranges from chairing the local council to being a coach/facilitator. With teachers and parents assuming greater control over governance, curriculum, and budgeting, paranoid principals may establish more formal bargaining relationships with district boards. Caution is advised, because…

  8. The Pasinetti-Solow Growth Model with Optimal Saving Behaviour: A Local Bifurcation Analysis

    NASA Astrophysics Data System (ADS)

    Commendatore, P.; Palmisani, C.

    We present a discrete time version of the Pasinetti-Solow economic growth model. Workers and capitalists are assumed to save on the basis of rational choices. Workers face a finite time horizon and base their consumption choices on a life-cycle motive, whereas capitalists behave like an infinitely-lived dynasty. The accumulation of both capitalists' and workers' wealth through time is reduced to a two-dimensional map whose local asymptotic stability properties are studied. Various types of bifurcation emerge (flip, Neimark-Sacker, saddle-node and transcritical): a precondition for chaotic dynamics.

  9. Predicting surface vibration from underground railways through inhomogeneous soil

    NASA Astrophysics Data System (ADS)

    Jones, Simon; Hunt, Hugh

    2012-04-01

    Noise and vibration from underground railways is a major source of disturbance to inhabitants near subways. To help designers meet noise and vibration limits, numerical models are used to understand vibration propagation from these underground railways. However, the models commonly assume the ground is homogeneous and neglect to include local variability in the soil properties. Such simplifying assumptions add a level of uncertainty to the predictions which is not well understood. The goal of the current paper is to quantify the effect of soil inhomogeneity on surface vibration. The thin-layer method (TLM) is suggested as an efficient and accurate means of simulating vibration from underground railways in arbitrarily layered half-spaces. Stochastic variability of the soil's elastic modulus is introduced using a K-L expansion; the modulus is assumed to have a log-normal distribution and a modified exponential covariance kernel. The effect of horizontal soil variability is investigated by comparing the stochastic results for soils varied only in the vertical direction to soils with 2D variability. Results suggest that local soil inhomogeneity can significantly affect surface velocity predictions; 90 percent confidence intervals showing 8 dB averages and peak values up to 12 dB are computed. This is a significant source of uncertainty and should be considered when using predictions from models assuming homogeneous soil properties. Furthermore, the effect of horizontal variability of the elastic modulus on the confidence interval appears to be negligible. This suggests that only vertical variation needs to be taken into account when modelling ground vibration from underground railways.
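
    As a simplified, one-dimensional illustration of the stochastic soil description used here, the sketch below samples a log-normal elastic-modulus profile with an exponential covariance kernel through a truncated (discretized) Karhunen-Loève expansion; the paper's modified kernel and 2-D variability are not reproduced, and all parameter values are assumptions.

    ```python
    import numpy as np

    def lognormal_modulus_field(z, mean_E=50e6, cov=0.3, corr_len=5.0,
                                n_modes=20, rng=np.random.default_rng(0)):
        """Sample a 1-D log-normal elastic-modulus profile via a truncated K-L expansion.

        The underlying Gaussian log-field has covariance
        C(z1, z2) = s^2 * exp(-|z1 - z2| / corr_len); parameter values are illustrative.
        """
        # Log-normal parameters of the underlying Gaussian field
        s2 = np.log(1.0 + cov**2)
        mu = np.log(mean_E) - 0.5 * s2
        # Discretized covariance matrix and its spectral (K-L) decomposition
        C = s2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
        lam, phi = np.linalg.eigh(C)
        lam, phi = lam[::-1][:n_modes], phi[:, ::-1][:, :n_modes]
        # Truncated expansion with independent standard-normal coefficients
        xi = rng.standard_normal(n_modes)
        g = phi @ (np.sqrt(np.clip(lam, 0, None)) * xi)
        return np.exp(mu + g)

    z = np.linspace(0.0, 30.0, 121)            # depth [m]
    E = lognormal_modulus_field(z)
    print(E.mean() / 1e6, E.std() / E.mean())  # rough check: mean ~50 MPa, CoV ~0.3
    ```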

  10. The importance of regional models in assessing canine cancer incidences in Switzerland

    PubMed Central

    Leyk, Stefan; Brunsdon, Christopher; Graf, Ramona; Pospischil, Andreas; Fabrikant, Sara Irina

    2018-01-01

    Fitting canine cancer incidences through a conventional regression model assumes constant statistical relationships across the study area in estimating the model coefficients. However, it is often more realistic to consider that these relationships may vary over space. Such a condition, known as spatial non-stationarity, implies that the model coefficients need to be estimated locally. In these kinds of local models, the geographic scale, or spatial extent, employed for coefficient estimation may also have a pervasive influence. This is because important variations in the local model coefficients across geographic scales may impact the understanding of local relationships. In this study, we fitted canine cancer incidences across Swiss municipal units through multiple regional models. We computed diagnostic summaries across the different regional models, and contrasted them with the diagnostics of the conventional regression model, using value-by-alpha maps and scalograms. The results of this comparative assessment enabled us to identify variations in the goodness-of-fit and coefficient estimates. We detected spatially non-stationary relationships, in particular, for the variables related to biological risk factors. These variations in the model coefficients were more important at small geographic scales, making a case for the need to model canine cancer incidences locally in contrast to more conventional global approaches. However, we contend that prior to undertaking local modeling efforts, a deeper understanding of the effects of geographic scale is needed to better characterize and identify local model relationships. PMID:29652921

  11. The importance of regional models in assessing canine cancer incidences in Switzerland.

    PubMed

    Boo, Gianluca; Leyk, Stefan; Brunsdon, Christopher; Graf, Ramona; Pospischil, Andreas; Fabrikant, Sara Irina

    2018-01-01

    Fitting canine cancer incidences through a conventional regression model assumes constant statistical relationships across the study area in estimating the model coefficients. However, it is often more realistic to consider that these relationships may vary over space. Such a condition, known as spatial non-stationarity, implies that the model coefficients need to be estimated locally. In these kinds of local models, the geographic scale, or spatial extent, employed for coefficient estimation may also have a pervasive influence. This is because important variations in the local model coefficients across geographic scales may impact the understanding of local relationships. In this study, we fitted canine cancer incidences across Swiss municipal units through multiple regional models. We computed diagnostic summaries across the different regional models, and contrasted them with the diagnostics of the conventional regression model, using value-by-alpha maps and scalograms. The results of this comparative assessment enabled us to identify variations in the goodness-of-fit and coefficient estimates. We detected spatially non-stationary relationships, in particular, for the variables related to biological risk factors. These variations in the model coefficients were more important at small geographic scales, making a case for the need to model canine cancer incidences locally in contrast to more conventional global approaches. However, we contend that prior to undertaking local modeling efforts, a deeper understanding of the effects of geographic scale is needed to better characterize and identify local model relationships.
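
    As a schematic of the global-versus-regional contrast discussed here, the sketch below fits one ordinary least-squares model to all units and one per region on synthetic data whose slope varies spatially; it is meant only to illustrate spatial non-stationarity of coefficients, not the paper's incidence models or diagnostics.

    ```python
    import numpy as np

    def fit_ols(X, y):
        """Ordinary least squares with an intercept; returns coefficients and R^2."""
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return beta, 1.0 - resid.var() / y.var()

    def regional_models(X, y, region_ids):
        """Fit one model per region and contrast it with the global fit."""
        out = {"global": fit_ols(X, y)}
        for rid in np.unique(region_ids):
            mask = region_ids == rid
            out[rid] = fit_ols(X[mask], y[mask])
        return out

    # Toy usage: the slope of the risk factor differs by region (non-stationarity)
    rng = np.random.default_rng(1)
    n = 300
    region = rng.integers(0, 3, n)
    x = rng.normal(size=(n, 1))
    true_slope = np.array([0.5, 1.0, 2.0])[region]
    y = 1.0 + true_slope * x[:, 0] + rng.normal(scale=0.3, size=n)
    for key, (beta, r2) in regional_models(x, y, region).items():
        print(key, np.round(beta, 2), round(r2, 2))
    ```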

  12. Early vision and focal attention

    NASA Astrophysics Data System (ADS)

    Julesz, Bela

    1991-07-01

    At the thirty-year anniversary of the introduction of the technique of computer-generated random-dot stereograms and random-dot cinematograms into psychology, the impact of the technique on brain research and on the study of artificial intelligence is reviewed. The main finding, that stereoscopic depth perception (stereopsis), motion perception, and preattentive texture discrimination are basically bottom-up processes which occur without the help of the top-down processes of cognition and semantic memory, greatly simplifies the study of these processes of early vision and permits the linking of human perception with monkey neurophysiology. Particularly interesting are the unexpected findings that stereopsis (assumed to be local) is a global process, while texture discrimination (assumed to be a global process, governed by statistics) is local, based on some conspicuous local features (textons). It is shown that the top-down process of "shape (depth) from shading" does not affect stereopsis, and some of the models of machine vision are evaluated. The asymmetry effect of human texture discrimination is discussed, together with recent nonlinear spatial filter models and a novel extension of the texton theory that can cope with the asymmetry problem. This didactic review attempts to introduce the physicist to the field of psychobiology and its problems, including metascientific problems of brain research, problems of scientific creativity, the state of artificial intelligence research (including connectionist neural networks) aimed at modeling brain activity, and the fundamental role of focal attention in mental events.

  13. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  14. Transport-related measures to mitigate climate change in Basel, Switzerland: A health-effectiveness comparison study.

    PubMed

    Perez, L; Trüeb, S; Cowie, H; Keuken, M P; Mudu, P; Ragettli, M S; Sarigiannis, D A; Tobollik, M; Tuomisto, J; Vienneau, D; Sabel, C; Künzli, N

    2015-12-01

    Local strategies to reduce greenhouse gases (GHG) imply changes of non-climatic exposure patterns. Our aim was to assess the health impacts of locally relevant transport-related climate change policies in Basel, Switzerland. We modelled change in mortality and morbidity for the year 2020 based on several locally relevant transport scenarios including all decided transport policies up to 2020, additional realistic and hypothesized traffic reductions, as well as ambitious diffusion levels of electric cars. The scenarios were compared to the reference condition in 2010 assumed as status quo. The changes in non-climatic population exposure included ambient air pollution, physical activity, and noise. As secondary outcome, changes in Disability-Adjusted Life Years (DALYs) were put into perspective with predicted changes of CO2 emissions and fuel consumption. Under the scenario that assumed a strict particle emissions standard in diesel cars and all planned transport measures, 3% of premature deaths could be prevented from projected PM2.5 exposure reduction. A traffic reduction scenario assuming more active trips provided only minor added health benefits for any of the changes in exposure considered. A hypothetical strong support for electric vehicle diffusion would have the largest health effectiveness given that the energy production in Basel comes from renewable sources. The planned local transport-related GHG emission reduction policies in Basel are sensible for mitigating climate change and improving public health. In this context, the most effective policy remains increasing zero-emission vehicles. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Origami rules for the construction of localized eigenstates of the Hubbard model in decorated lattices

    NASA Astrophysics Data System (ADS)

    Dias, R. G.; Gouveia, J. D.

    2015-11-01

    We present a method of construction of exact localized many-body eigenstates of the Hubbard model in decorated lattices, both for U = 0 and U → ∞. These states are localized with respect to both hole and particle movement. The starting point of the method is the construction of a plaquette or a set of plaquettes with a higher symmetry than that of the whole lattice. Using a simple set of rules, the tight-binding localized state in such a plaquette can be divided, folded and unfolded to new plaquette geometries. This set of rules is also valid for the construction of a localized state for one hole in the U → ∞ limit of the same plaquette, assuming a spin configuration which is a uniform linear combination of all possible permutations of the set of spins in the plaquette.
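
    For the non-interacting (U = 0) case, the simplest instance of such a state can be checked numerically on a single diamond-shaped plaquette: the antisymmetric combination on the two middle sites decouples by destructive interference. This toy check is not the paper's folding/unfolding construction, only an illustration of what a localized tight-binding eigenstate looks like.

    ```python
    import numpy as np

    # Diamond plaquette: sites 0 (left hub), 1 (top), 2 (bottom), 3 (right hub),
    # with hopping t on the four edges of the rhombus.
    t = 1.0
    H = np.zeros((4, 4))
    for i, j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
        H[i, j] = H[j, i] = -t

    # The antisymmetric combination (|1> - |2>)/sqrt(2) stays confined to the plaquette:
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    print(np.allclose(H @ psi, 0.0 * psi))   # True: a zero-energy localized eigenstate
    ```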

  16. A local leaky-box model for the local stellar surface density-gas surface density-gas phase metallicity relation

    NASA Astrophysics Data System (ADS)

    Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan

    2017-07-01

    We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming that star-formation history and chemical evolution are localized except for outflowing materials. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of model parameters, this predicted relation matches the observed one well. In addition, we briefly describe a pathway to improving the current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain simultaneously multiple properties of Milky Way-type disc galaxies, such as the size growth and the global stellar mass-gas metallicity relation.
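
    An illustrative numerical version of a local leaky-box patch (outflow proportional to the star formation rate, a Kennicutt-type law, instantaneous recycling, no infall) is sketched below. It integrates the evolution directly rather than using the paper's closed-form solutions, and the yield, return fraction, mass-loading factor and normalization are assumed values.

    ```python
    import numpy as np

    def leaky_box(sigma_g0=50.0, y=0.02, R=0.4, eta=1.0,
                  A=0.25, n=1.4, t_end=10.0, dt=1e-3):
        """Integrate a local leaky-box patch with a Kennicutt star formation law.

        sigma_g0 : initial gas surface density [Msun/pc^2]
        y, R     : nucleosynthetic yield and return fraction (instantaneous recycling)
        eta      : outflow mass-loading factor (outflow rate = eta * SFR)
        A, n     : SFR surface density = A * sigma_g**n, roughly the Kennicutt
                   normalization in units of Msun/pc^2/Gyr
        """
        sigma_g, sigma_s, Zg = sigma_g0, 0.0, 1e-4 * sigma_g0   # Zg = metal mass in gas
        hist = []
        for t in np.arange(0.0, t_end, dt):
            psi = A * sigma_g**n
            Z = Zg / sigma_g
            sigma_s += dt * (1.0 - R) * psi
            sigma_g += dt * (-(1.0 - R) - eta) * psi
            Zg += dt * ((1.0 - R) * (y - Z) - eta * Z) * psi
            hist.append((t, sigma_s, sigma_g, Z))
        return np.array(hist)

    h = leaky_box()
    print("final stellar density, gas density, Z:", np.round(h[-1, 1:], 3))
    ```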

  17. A Local Realistic Reconciliation of the EPR Paradox

    NASA Astrophysics Data System (ADS)

    Sanctuary, Bryan

    2014-03-01

    The exact violation of Bell's Inequalities is obtained with a local realistic model for spin. The model treats one particle that comprises a quantum ensemble and simulates the EPR data one coincidence at a time as a product state. Such a spin is represented by operators σx, iσy, σz in its body frame rather than the usual set of σX, σY, σZ in the laboratory frame. This model, assumed valid in the absence of a measuring probe, contains both quantum polarizations and coherences. Each carries half the EPR correlation, but only half can be measured using coincidence techniques. The model further predicts the filter angles that maximize the spin correlation in EPR experiments.

  18. The formation of arcs in the dynamic spectra of Jovian decameter bursts

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Thieman, J. R.

    1980-01-01

    A model is presented that can account for several features of the dynamic spectral arcs observed at decameter wavelengths by the planetary radio astronomy experiment on Voyagers 1 and 2. It is shown that refraction of an extraordinary mode wave initially excited nearly orthogonal to the local magnetic field is significantly influenced by the local plasma density, being greater the higher the density. It is assumed that the source of the decameter radiation lies along the L = 6 flux tube and that the highest frequencies are produced at the lowest altitudes, where both the plasma density and magnetic field gradients are largest. It is further assumed that the decameter radiation is emitted into a thin conical sheet, consistent with both observation and theory. In the model the emission cone angle of the sheet is chosen to vary with frequency so that it is relatively small at both high and low frequencies, but approximately 80 deg at intermediate frequencies. The resulting emission pattern as seen by a distant observer is shown to resemble the observed arc pattern. The model is compared and contrasted with examples of Voyager radio data.

  19. Heterodyne efficiency of a coherent free-space optical communication model through atmospheric turbulence.

    PubMed

    Ren, Yongxiong; Dang, Anhong; Liu, Ling; Guo, Hong

    2012-10-20

    The heterodyne efficiency of a coherent free-space optical (FSO) communication model under the effects of atmospheric turbulence and misalignment is studied in this paper. To be more general, both the transmitted beam and local oscillator beam are assumed to be partially coherent based on the Gaussian Schell model (GSM). By using the derived analytical form of the cross-spectral function of a GSM beam propagating through atmospheric turbulence, a closed-form expression of heterodyne efficiency is derived, assuming that the propagation directions for the transmitted and local oscillator beams are slightly different. Then the impacts of atmospheric turbulence, configuration of the two beams (namely, beam radius and spatial coherence width), detector radius, and misalignment angle over heterodyne efficiency are examined. Numerical results suggest that the beam radius of the two overlapping beams can be optimized to achieve a maximum heterodyne efficiency according to the turbulence conditions and the detector radius. It is also found that atmospheric turbulence conditions will significantly degrade the efficiency of heterodyne detection, and compared to fully coherent beams, partially coherent beams are less sensitive to the changes in turbulence conditions and more robust against misalignment at the receiver.
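
    A heavily simplified numerical sketch of the mixing (heterodyne) efficiency is given below for two fully coherent Gaussian beams on a circular detector, with misalignment entering as a linear phase ramp across the local-oscillator field; the partially coherent GSM beams and turbulence statistics treated in the paper are not modelled, and all beam parameters are assumptions.

    ```python
    import numpy as np

    def heterodyne_efficiency(w_s=10e-3, w_lo=10e-3, tilt_mrad=0.0,
                              detector_radius=15e-3, wavelength=1.55e-6, n_grid=501):
        """Mixing efficiency of two Gaussian beams over a circular detector.

        eta = |sum(E_s * conj(E_lo))|^2 / (sum|E_s|^2 * sum|E_lo|^2), evaluated on the
        detector; a small angle between the propagation directions appears as a
        linear phase ramp on the local oscillator.
        """
        x = np.linspace(-detector_radius, detector_radius, n_grid)
        X, Y = np.meshgrid(x, x)
        disk = X**2 + Y**2 <= detector_radius**2
        E_s = np.exp(-(X**2 + Y**2) / w_s**2)
        k = 2 * np.pi / wavelength
        ramp = np.exp(1j * k * np.sin(tilt_mrad * 1e-3) * X)
        E_lo = np.exp(-(X**2 + Y**2) / w_lo**2) * ramp
        num = np.abs(np.sum(E_s * np.conj(E_lo) * disk))**2
        den = np.sum(np.abs(E_s)**2 * disk) * np.sum(np.abs(E_lo)**2 * disk)
        return num / den

    for tilt in (0.0, 0.05, 0.1):   # misalignment angle in mrad
        print(tilt, round(heterodyne_efficiency(tilt_mrad=tilt), 3))
    ```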

  20. Local versus global knowledge in the Barabási-Albert scale-free network model.

    PubMed

    Gómez-Gardeñes, Jesús; Moreno, Yamir

    2004-03-01

    The scale-free model of Barabási and Albert (BA) gave rise to a burst of activity in the field of complex networks. In this paper, we revisit one of the main assumptions of the model, the preferential attachment (PA) rule. We study a model in which the PA rule is applied to a neighborhood of newly created nodes and thus no global knowledge of the network is assumed. We numerically show that global properties of the BA model such as the connectivity distribution and the average shortest path length are quite robust when there is some degree of local knowledge. In contrast, other properties such as the clustering coefficient and degree-degree correlations differ and approach the values measured for real-world networks.
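
    The growth rule studied here can be sketched compactly: each new node sees only the neighbourhood (up to a fixed number of hops) of a randomly chosen existing node and applies degree-proportional attachment within it. The seed graph, hop depth, and sampling details below are illustrative choices rather than the exact procedure of the paper.

    ```python
    import random
    from collections import deque

    def local_ba(n=2000, m=3, depth=2, seed=0):
        """Grow a BA-like network where preferential attachment uses only local knowledge."""
        random.seed(seed)
        adj = {i: set(range(m + 1)) - {i} for i in range(m + 1)}   # small complete seed graph
        for new in range(m + 1, n):
            anchor = random.randrange(new)
            # Collect the anchor's neighbourhood up to `depth` hops (local knowledge only)
            seen, frontier = {anchor}, deque([(anchor, 0)])
            while frontier:
                node, d = frontier.popleft()
                if d == depth:
                    continue
                for nb in adj[node]:
                    if nb not in seen:
                        seen.add(nb)
                        frontier.append((nb, d + 1))
            # Degree-proportional sampling restricted to the visible neighbourhood
            pool = list(seen)
            weights = [len(adj[v]) for v in pool]
            targets = set()
            while len(targets) < min(m, len(pool)):
                targets.add(random.choices(pool, weights=weights)[0])
            adj[new] = set(targets)
            for t in targets:
                adj[t].add(new)
        return adj

    g = local_ba()
    degrees = sorted((len(v) for v in g.values()), reverse=True)
    print(degrees[:10])   # a heavy-tailed degree sequence survives local attachment
    ```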

  1. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
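
    The sketch below is not the FastWay algorithm itself but a schematic of the underlying idea: first-arrival (fastest-path) travel times computed on a heterogeneous velocity grid, followed by a grid search for the source that best matches the observed arrivals; the graph discretization, slow-inclusion geometry and parameter values are assumptions for illustration only.

    ```python
    import heapq
    import numpy as np

    def travel_times(vel, start, h=1.0):
        """First-arrival times from `start` on a 2-D grid via Dijkstra (8-connected).

        vel : wave speed per cell; slow cells (e.g., voids, cracks) get a small speed,
              so the fastest path bends around them instead of crossing them.
        """
        ny, nx = vel.shape
        t = np.full((ny, nx), np.inf)
        t[start] = 0.0
        heap = [(0.0, start)]
        steps = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        while heap:
            tt, (i, j) = heapq.heappop(heap)
            if tt > t[i, j]:
                continue
            for di, dj in steps:
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    d = h * np.hypot(di, dj)
                    cand = tt + d / (0.5 * (vel[i, j] + vel[ni, nj]))
                    if cand < t[ni, nj]:
                        t[ni, nj] = cand
                        heapq.heappush(heap, (cand, (ni, nj)))
        return t

    def locate(sensors, arrivals, vel):
        """Grid-search source location; the unknown origin time is removed by demeaning."""
        tables = [travel_times(vel, s) for s in sensors]          # reciprocity
        obs = np.asarray(arrivals)
        best, best_cost = None, np.inf
        for i in range(vel.shape[0]):
            for j in range(vel.shape[1]):
                pred = np.array([tab[i, j] for tab in tables])
                res = (obs - pred) - (obs - pred).mean()
                cost = np.sum(res**2)
                if cost < best_cost:
                    best, best_cost = (i, j), cost
        return best

    # Toy usage: a slow inclusion (e.g., a large void) in an otherwise homogeneous block
    vel = np.full((60, 60), 4000.0)
    vel[20:40, 25:35] = 500.0
    sensors = [(0, 0), (0, 59), (59, 0), (59, 59)]
    true_src = (30, 10)
    arrivals = [travel_times(vel, s)[true_src] + 1e-4 for s in sensors]  # + origin time
    print(locate(sensors, arrivals, vel))   # recovers (30, 10)
    ```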

  2. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and assumed the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived incorporating SNR uncertainty. The soft masking method, in particular, which weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of yielding lower residual noise and lower speech distortion. PMID:21886543
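
    The two gain functions mentioned above can be written down directly. The snippet below contrasts the ideal-binary-mask gain (keep a bin only if its local SNR exceeds 0 dB) with the Wiener-type soft gain on a toy magnitude-squared spectrum, using an oracle SNR rather than the paper's statistical estimators.

    ```python
    import numpy as np

    def binary_mask_gain(snr_linear):
        """Ideal-binary-mask gain: keep a time-frequency bin only if local SNR > 0 dB."""
        return (snr_linear > 1.0).astype(float)

    def wiener_like_gain(snr_linear):
        """Soft gain xi / (1 + xi), the Wiener-type limit of the soft masking rule."""
        return snr_linear / (1.0 + snr_linear)

    # Apply both gains to a toy noisy magnitude-squared spectrum
    rng = np.random.default_rng(0)
    clean_psd = np.abs(rng.normal(size=257))**2
    noise_psd = 0.5 * np.ones(257)
    noisy_psd = clean_psd + noise_psd            # additivity assumption from the abstract
    snr = clean_psd / noise_psd                  # local (oracle) SNR per bin
    enhanced_hard = binary_mask_gain(snr) * noisy_psd
    enhanced_soft = wiener_like_gain(snr) * noisy_psd
    print(enhanced_hard[:5].round(2), enhanced_soft[:5].round(2))
    ```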

  3. The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.

    PubMed

    Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M

    2005-08-15

    The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KP is estimated such that it approximates the spatiotemporal sample covariance best in matrix norm. Contrary to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the spatiotemporal sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms, and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering the dipole localization, the single KP model appears to describe around 80% of the noise and seems therefore adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
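
    One standard way to fit such a sum of Kronecker products is the Van Loan-Pitsianis rearrangement, under which the best Frobenius-norm sum of KPs is a truncated SVD of a rearranged matrix. The sketch below illustrates that route on synthetic data; whether it matches the estimation procedure used in the paper is not claimed.

    ```python
    import numpy as np

    def kron_sum_approx(C, m, n, k=2):
        """Approximate an (m*n)x(m*n) covariance by a sum of k Kronecker products.

        Rearrangement: row (i*m + j) of R holds the vectorized (i, j) block of C, so the
        best sum of Kronecker products corresponds to a truncated SVD of R.
        """
        R = np.array([C[i*n:(i+1)*n, j*n:(j+1)*n].ravel()
                      for i in range(m) for j in range(m)])
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        terms = [(np.sqrt(s[r]) * U[:, r].reshape(m, m),
                  np.sqrt(s[r]) * Vt[r].reshape(n, n)) for r in range(k)]
        approx = sum(np.kron(A, B) for A, B in terms)
        explained = np.sum(s[:k]**2) / np.sum(s**2)   # fraction of matrix power captured
        return terms, approx, explained

    # Toy usage: two independent spatiotemporal processes, each with a KP covariance
    rng = np.random.default_rng(0)
    m, n = 6, 8
    def rand_cov(d):
        M = rng.normal(size=(d, d)); return M @ M.T
    C = np.kron(rand_cov(m), rand_cov(n)) + 0.3 * np.kron(rand_cov(m), rand_cov(n))
    _, C2, frac = kron_sum_approx(C, m, n, k=2)
    print(round(frac, 4), np.allclose(C, C2))   # two terms reproduce this C exactly
    ```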

  4. Olive flowering phenology variation between different cultivars in Spain and Italy: modeling analysis

    NASA Astrophysics Data System (ADS)

    Garcia-Mozo, H.; Orlandi, F.; Galan, C.; Fornaciari, M.; Romano, B.; Ruiz, L.; Diaz de La Guardia, C.; Trigo, M. M.; Chuine, I.

    2009-03-01

    Phenology data are sensitive indicators of how plants are adapted to local climate and how they respond to climatic changes. Modeling flowering phenology allows us to identify the meteorological variables determining the reproductive cycle. Phenology of temperate woody plants is assumed to be locally adapted to climate. Nevertheless, recent research shows that local adaptation may not be an important constraint in predicting phenological responses. We analyzed variations in flowering dates of Olea europaea L. at different sites of Spain and Italy, testing for a genetic differentiation of flowering phenology among olive varieties to estimate whether local modeling is necessary for olive or not. We built models for the flowering onset and peak dates at different sites of Andalusia and Puglia. Process-based phenological models using temperature as the input variable and photoperiod as the threshold date to start temperature accumulation were developed to predict both dates. Our results confirm and update previous results that indicated an advance in olive onset dates. The results indicate that both internal and external validity were higher in the models that used the photoperiod as an indicator to start accumulating temperature. The use of the unified model for modeling the start and peak dates in the different localities provides standardized results for the comparative study. The use of regional models grouping localities by varieties and climate similarities indicates that local adaptation would not be an important factor in predicting olive phenological responses in the face of the global temperature increase.
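
    A minimal process-based sketch in the spirit described above is given below: growing-degree-day forcing is accumulated from a photoperiod-determined start date until a flowering threshold is reached. The base temperature, photoperiod threshold, forcing requirement and synthetic climate are hypothetical, not the fitted values of the study.

    ```python
    import numpy as np

    def predict_flowering(daily_tmean, daylength, base_temp=5.0,
                          photoperiod_threshold=11.0, forcing_required=400.0):
        """Predict the flowering onset day of year for one site-year.

        Degree-day accumulation (above `base_temp`) starts on the first day whose
        daylength exceeds `photoperiod_threshold` hours; flowering is predicted when
        the accumulated forcing reaches `forcing_required`.
        """
        start = int(np.argmax(daylength >= photoperiod_threshold))
        forcing = np.cumsum(np.clip(daily_tmean[start:] - base_temp, 0.0, None))
        if forcing[-1] < forcing_required:
            return None
        return start + int(np.argmax(forcing >= forcing_required))

    # Toy usage with a synthetic Mediterranean-like annual cycle
    doy = np.arange(1, 366)
    tmean = 15.0 + 9.0 * np.sin(2 * np.pi * (doy - 105) / 365)   # deg C
    daylen = 12.0 + 2.5 * np.sin(2 * np.pi * (doy - 80) / 365)   # hours
    print(predict_flowering(tmean, daylen))                      # predicted day of year
    ```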

  5. Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.

    PubMed

    Maij, Femke; Wing, Alan M; Medendorp, W Pieter

    2013-12-01

    It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.
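
    A small numerical sketch of the central idea is given below: temporal uncertainty about the stimulus converts into a spatial bias through the arm trajectory, and the bias grows with movement speed. The minimum-jerk trajectory and all parameter values are assumptions for illustration, not the fitted model of the study.

    ```python
    import numpy as np

    def localization_bias(t_stim, sigma_t=0.05, duration=0.5, amplitude=0.3, n=2001):
        """Expected perceived stimulus location under Gaussian temporal uncertainty.

        The finger follows a minimum-jerk reach of given duration and amplitude; the
        perceived location is the arm position averaged over the temporal likelihood
        of the stimulus, and the bias is its offset from the true position [m].
        """
        t = np.linspace(0.0, duration, n)
        s = t / duration
        x = amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)     # minimum-jerk trajectory
        w = np.exp(-0.5 * ((t - t_stim) / sigma_t)**2)
        w /= w.sum()
        return np.sum(w * x) - np.interp(t_stim, t, x)

    for dur in (0.8, 0.4):   # slower vs faster movement of the same amplitude
        print(dur, round(localization_bias(t_stim=dur / 4, duration=dur), 4))
        # the faster movement yields the larger bias
    ```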

  6. Kinetic Model of Electric Potentials in Localized Collisionless Plasma Structures under Steady Quasi-gyrotropic Conditions

    NASA Technical Reports Server (NTRS)

    Schindler, K.; Birn, J.; Hesse, M.

    2012-01-01

    Localized plasma structures, such as thin current sheets, generally are associated with localized magnetic and electric fields. In space plasmas localized electric fields not only play an important role for particle dynamics and acceleration but may also have significant consequences on larger scales, e.g., through magnetic reconnection. Also, it has been suggested that localized electric fields generated in the magnetosphere are directly connected with quasi-steady auroral arcs. In this context, we present a two-dimensional model based on Vlasov theory that provides the electric potential for a large class of given magnetic field profiles. The model uses an expansion for small deviation from gyrotropy and besides quasineutrality it assumes that electrons and ions have the same number of particles with their generalized gyrocenter on any given magnetic field line. Specializing to one dimension, a detailed discussion concentrates on the electric potential shapes (such as "U" or "S" shapes) associated with magnetic dips, bumps, and steps. Then, it is investigated how the model responds to quasi-steady evolution of the plasma. Finally, the model proves useful in the interpretation of the electric potentials taken from two existing particle simulations.

  7. Rational Adaptation under Task and Processing Constraints: Implications for Testing Theories of Cognition and Action

    ERIC Educational Resources Information Center

    Howes, Andrew; Lewis, Richard L.; Vera, Alonso

    2009-01-01

    The authors assume that individuals adapt rationally to a utility function given constraints imposed by their cognitive architecture and the local task environment. This assumption underlies a new approach to modeling and understanding cognition--cognitively bounded rational analysis--that sharpens the predictive acuity of general, integrated…

  8. Gas dynamics in the impulsive phase of solar flares. I Thick-target heating by nonthermal electrons

    NASA Technical Reports Server (NTRS)

    Nagai, F.; Emslie, A. G.

    1984-01-01

    A numerical investigation is carried out of the gas dynamical response of the solar atmosphere to a flare energy input in the form of precipitating nonthermal electrons. Rather than discussing the origin of these electrons, the spectral and temporal characteristics of the injected flux are inferred through a thick-target model of hard X-ray bremsstrahlung production. It is assumed that the electrons spiral about preexisting magnetic field lines, making it possible for a one-dimensional spatial treatment to be performed. It is also assumed that all electron energy losses are due to Coulomb collisions with ambient particles; that is, return-current ohmic effects and collective plasma processes are neglected. The results are contrasted with earlier work on conductive heating of the flare atmosphere. A local temperature peak is seen at a height of approximately 1500 km above the photosphere. This derives from a spatial maximum in the energy deposition rate from an electron beam. It is noted that such a feature is not present in conductively heated models. The associated localized region of high pressure drives material both upward and downward.

  9. On radiating baroclinic instability of zonally varying flow

    NASA Technical Reports Server (NTRS)

    Finley, Catherine A.; Nathan, Terrence R.

    1993-01-01

    A quasi-geostrophic, two-layer, beta-plane model is used to study the baroclinic instability characteristics of a zonally inhomogeneous flow. It is assumed that the disturbance varies slowly in the cross-stream direction, and the stability problem is formulated as a 1D initial value problem. Emphasis is placed on determining how the vertically averaged wind, local maximum in vertical wind shear, and length of the locally supercritical region combine to yield local instabilities. Analysis of the local disturbance energetics reveals that, for slowly varying basic states, the baroclinic energy conversion predominates within the locally unstable region. Using calculations of the basic state tendencies, it is shown that the net effect of the local instabilities is to redistribute energy from the baroclinic to the barotropic component of the basic state flow.

  10. Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter

    NASA Astrophysics Data System (ADS)

    Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon

    2018-03-01

    We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. Then, we restrict the models to the asymptotically homogeneous models each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late time ΛLTB models are parametrized by four parameters including the value of the cosmological constant and the local Hubble parameter. The other two parameters are used to parametrize the observed distance-redshift relation. Then, the ΛLTB models are constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis for the CMB temperature and polarization anisotropies, we found that the inhomogeneous universe models with vanishing cosmological constant are ruled out as is expected. However, a significant under-density around us is still compatible with the angular power spectrum of CMB and the local Hubble parameter.

  11. A theoretical derivation of the dilatancy equation for brittle rocks based on Maxwell model

    NASA Astrophysics Data System (ADS)

    Li, Jie; Huang, Houxu; Wang, Mingyang

    2017-03-01

    In this paper, the micro-cracks in the brittle rocks are assumed to be penny shaped and evenly distributed; the damage and dilatancy of the brittle rocks are attributed to the growth and expansion of numerous micro-cracks under the local tensile stress. A single crack's behaviour under the local tensile stress is generalized to all cracks based on the distributed damage mechanics. The relationship between the local tensile stress and the external loading is derived based on the Maxwell model. The damage factor corresponding to the external loading is represented using the p-alpha (p-α) model. A dilatancy equation that links the external loading to the rock dilatancy is established. A test of dilatancy of a brittle rock under triaxial compression is conducted; the comparison between experimental results and our theoretical results shows good consistency.

  12. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  13. 2-Point microstructure archetypes for improved elastic properties

    NASA Astrophysics Data System (ADS)

    Adams, Brent L.; Gao, Xiang

    2004-01-01

    Rectangular models of material microstructure are described by their 1- and 2-point (spatial) correlation statistics of placement of local state. In the procedure described here the local state space is described in discrete form; and the focus is on placement of local state within a finite number of cells comprising rectangular models. It is illustrated that effective elastic properties (generalized Hashin Shtrikman bounds) can be obtained that are linear in components of the correlation statistics. Within this framework the concept of an eigen-microstructure within the microstructure hull is useful. Given the practical innumerability of the microstructure hull, however, we introduce a method for generating a sequence of archetypes of eigen-microstructure, from the 2-point correlation statistics of local state, assuming that the 1-point statistics are stationary. The method is illustrated by obtaining an archetype for an imaginary two-phase material where the objective is to maximize the combination C*_xxxx + C*_xyxy.

  14. Local buckling of composite channel columns

    NASA Astrophysics Data System (ADS)

    Szymczak, Czesław; Kujawa, Marcin

    2018-05-01

    The investigation concerns local buckling of compressed flanges of axially compressed composite channel columns. Cooperation of the member flange and web is taken into account here. The buckling mode of the member flange is defined by the rotation angle of the flange about the line of its connection with the web. The channel column under investigation is made of unidirectional fibre-reinforced laminate. Two approaches to member orthotropic material modelling are performed: the homogenization with the aid of theory of mixture and periodicity cell or homogenization upon the Voigt-Reuss hypothesis. The fundamental differential equation of local buckling is derived with the aid of the stationary total potential energy principle. The critical buckling stress corresponding to a number of buckling half-waves is assumed to be a minimum eigenvalue of the equation. Some numerical examples dealing with columns are given here. The analytical results are compared with the finite element stability analysis carried out by means of ABAQUS software. The paper is focused on a closed-form analytical solution of the critical buckling stress and the associated buckling mode while the web-flange cooperation is assumed.

  15. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
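
    The contrast between the two update rules can be written in a few lines: a Rescorla-Wagner-style total-error rule versus a per-cue local-error rule, run here on a toy blocking design. The specific models and data sets compared in the paper are not reproduced, and the learning-rate values are illustrative.

    ```python
    import numpy as np

    def train(trials, rule="TER", alpha=0.1, n_cues=2):
        """Update cue weights over compound-conditioning trials.

        trials : list of (present_cues, outcome) pairs, e.g. ([0, 1], 1.0)
        rule   : 'TER' uses the summed prediction of all present cues; 'LER' uses each
                 cue's own prediction only.
        """
        V = np.zeros(n_cues)
        for cues, outcome in trials:
            total = V[list(cues)].sum()
            for c in cues:
                error = outcome - (total if rule == "TER" else V[c])
                V[c] += alpha * error
        return V

    # Toy blocking design: cue A alone is trained first, then the compound AB
    trials = [([0], 1.0)] * 50 + [([0, 1], 1.0)] * 50
    print("TER:", train(trials, "TER").round(2))   # cue B stays low (blocking)
    print("LER:", train(trials, "LER").round(2))   # cue B acquires strength
    ```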

  16. Stability and Bifurcation of a Fishery Model with Crowley-Martin Functional Response

    NASA Astrophysics Data System (ADS)

    Maiti, Atasi Patra; Dubey, B.

    To understand the dynamics of a fishery system, a nonlinear mathematical model is proposed and analyzed. In an aquatic environment, we considered two populations: one is prey and the other is predator. Both fish populations grow logistically and the interaction between them is of Crowley-Martin type functional response. Both populations are assumed to be harvested; the harvesting effort is treated as a dynamical variable and tax is considered as a control variable. The existence of equilibrium points and their local stability are examined. The existence of Hopf-bifurcation, stability and direction of Hopf-bifurcation are also analyzed with the help of the Center Manifold theorem and normal form theory. The global stability behavior of the positive equilibrium point is also discussed. In order to find the value of optimal tax, the optimal harvesting policy is used. To verify our analytical findings, an extensive numerical simulation is carried out for this model system.
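
    As a toy numerical companion, the sketch below integrates only the prey-predator core of such a system with a Crowley-Martin functional response and a fixed harvesting effort; the dynamic-effort equation, the tax control, and the paper's parameter values and analysis are not included, and all numbers are illustrative.

    ```python
    import numpy as np

    def simulate(E=5.0, r=1.0, K=100.0, s=0.4, L=60.0,
                 a=0.1, b=0.1, c=0.05, e=0.3, q1=0.02, q2=0.01,
                 T=300.0, dt=0.01, N0=40.0, P0=10.0):
        """Euler integration of a harvested prey-predator pair whose interaction
        follows a Crowley-Martin functional response a*N*P/((1 + b*N)*(1 + c*P)).

        Both populations grow logistically and are harvested in proportion to a
        fixed effort E (catchabilities q1, q2); e is the conversion efficiency.
        """
        N, P = N0, P0
        for _ in range(int(T / dt)):
            cm = a * N * P / ((1.0 + b * N) * (1.0 + c * P))
            dN = r * N * (1.0 - N / K) - cm - q1 * E * N
            dP = s * P * (1.0 - P / L) + e * cm - q2 * E * P
            N, P = N + dt * dN, P + dt * dP
        return N, P

    for effort in (0.0, 5.0, 15.0):   # compare long-run stocks under different efforts
        print(effort, np.round(simulate(E=effort), 2))
    ```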

  17. The structure of evaporating and combusting sprays: Measurements and predictions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, F. M.

    1983-01-01

    The structure of particle-laden jets and nonevaporating and evaporating sprays was measured in order to evaluate models of these processes. Three models are being evaluated: (1) a locally homogeneous flow model, where slip between the phases is neglected and the flow is assumed to be in local thermodynamic equilibrium; (2) a deterministic separated flow model, where slip and finite interphase transport rates are considered but effects of particle/drop dispersion by turbulence and effects of turbulence on interphase transport rates are ignored; and (3) a stochastic separated flow model, where effects of interphase slip, turbulent dispersion and turbulent fluctuations are considered using random sampling for turbulence properties in conjunction with random-walk computations for particle motion. All three models use a k-e-g turbulence model. All testing and data reduction are completed for the particle laden jets. Mean and fluctuating velocities of the continuous phase and mean mixture fraction were measured in the evaporating sprays.

  18. Modeling and Analysis of a Nonlinear Age-Structured Model for Tumor Cell Populations with Quiescence

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Chen, Jing; Pang, Jianhua; Bi, Ping; Ruan, Shigui

    2018-05-01

    We present a nonlinear first-order hyperbolic partial differential equation model to describe age-structured tumor cell populations with proliferating and quiescent phases at the avascular stage in vitro. The division rate of the proliferating cells is assumed to be nonlinear due to the limitation of the nutrient and space. The model includes a proportion of newborn cells that enter directly the quiescent phase with age zero. This proportion can reflect the effect of treatment by drugs such as erlotinib. The existence and uniqueness of solutions are established. The local and global stabilities of the trivial steady state are investigated. The existence and local stability of the positive steady state are also analyzed. Numerical simulations are performed to verify the results and to examine the impacts of parameters on the nonlinear dynamics of the model.

  19. Do Students Really Notice? A Study of the Impact of a Local Systemic Reform.

    ERIC Educational Resources Information Center

    Shymansky, James A.; Yore, Larry D.; Dunkhase, John A.; Hand, Brian M.

    This paper describes a major reform effort of an elementary science curriculum called the Science: Parents, Activities, and Literature (Science PALs) Project. The goal of the project was to move teachers towards an interactive-constructivist model of teaching and learning that assumes a middle-of-the-road interpretation of constructivism where…

  20. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  1. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. Arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
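
    A generic sketch of the RSS-ranging-plus-multilateration idea underlying such methods is given below: a log-distance path-loss model is inverted to ranges, and a linear least-squares fix is computed against fixed reference nodes. The paper's dual-antenna arrangement and modified robust regression are not modelled, and all parameter values are illustrative.

    ```python
    import numpy as np

    def rss_to_distance(rss, p0=-40.0, n=2.5, d0=1.0):
        """Invert a log-distance path-loss model: rss = p0 - 10*n*log10(d/d0)."""
        return d0 * 10.0 ** ((p0 - rss) / (10.0 * n))

    def multilaterate(anchors, dists):
        """Linear least-squares position fix from anchor positions and ranges."""
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (dists[0]**2 - dists[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Toy usage: four fixed reference nodes, one unknown node, noisy RSS readings
    rng = np.random.default_rng(3)
    anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
    unknown = np.array([18.0, 31.0])
    true_d = np.linalg.norm(anchors - unknown, axis=1)
    rss = -40.0 - 25.0 * np.log10(true_d) + rng.normal(scale=1.0, size=4)  # shadowing
    print(np.round(multilaterate(anchors, rss_to_distance(rss)), 1))       # near (18, 31)
    ```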

  2. Bell's Inequality: Revolution in Quantum Physics or Just AN Inadequate Mathematical Model?

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    The main aim of this review is to stress the role of mathematical models in physics. The Bell inequality (BI) is often called the "most famous inequality of the 20th century." It is commonly accepted that its violation in corresponding experiments induced a revolution in quantum physics. Unlike "old quantum mechanics" (of Einstein, Schrödinger, Bohr, Heisenberg, Pauli, Landau, Fock), "modern quantum mechanics" (of Bell, Aspect, Zeilinger, Shimony, Greenberger, Gisin, Mermin) takes seriously so-called quantum non-locality. We will show that the conclusion that one has to give up the realism (i.e., a possibility to assign results of measurements to physical systems) or the locality (i.e., to assume action at a distance) is heavily based on one special mathematical model. This model was invented by A. N. Kolmogorov in 1933. One should pay serious attention to the role of mathematical models in physics. The problems of the realism and locality induced by Bell's argument can be solved by using non-Kolmogorovian probabilistic models. We compare this situation with non-Euclidean geometric models in relativity theory.

  3. A deformation-formulated micromechanics model of the effective Young's modulus and strength of laminated composites containing local ply curvature

    NASA Technical Reports Server (NTRS)

    Lee, Jong-Won; Harris, Charles E.

    1990-01-01

    A mathematical model based on the Euler-Bernoulli beam theory is proposed for predicting the effective Young's moduli of piecewise isotropic composite laminates with local ply curvatures in the main load-carrying layers. Strains in corrugated layers, in-phase layers, and out-of-phase layers are predicted for various geometries and material configurations by assuming matrix layers as elastic foundations of different spring constants. The effective Young's moduli measured from corrugated aluminum specimens and aluminum/epoxy specimens with in-phase and out-of-phase wavy patterns coincide very well with the model predictions. Moiré fringe analysis of an in-phase specimen and an out-of-phase specimen are also presented, confirming the main assumption of the model related to the elastic constraint due to the matrix layers. The present model is also compared with the experimental results and other models, including the microbuckling models, published in the literature. The results of the present study show that even a very small-scale local ply curvature produces a noticeable effect on the mechanical constitutive behavior of a laminated composite.

  4. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually reached by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the actual uncertainty of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, local ties will need to be accurate at the sub-mm level, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources must be investigated so that they can be realistically quantified and taken into account. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie error and technique-specific error contributions to uncertainties and thus assess the accuracy of space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network with respect to the ITRF.

  5. Locating low-frequency earthquakes using amplitude signals from seismograph stations: Examples from events at Montserrat, West Indies and from synthetic data

    NASA Astrophysics Data System (ADS)

    Jolly, A.; Jousset, P.; Neuberg, J.

    2003-04-01

    We determine locations for low-frequency earthquakes occurring prior to a collapse on June 25th, 1997 using signal amplitudes from a 7-station local seismograph network at the Soufriere Hills volcano on Montserrat, West Indies. Locations are determined by averaging the signal amplitude over the event waveform and inverting these data using an assumed amplitude decay model comprising geometrical spreading and attenuation. Resulting locations are centered beneath the active dome from 500 to 2000 m below sea level assuming body wave geometrical spreading and a quality factor of Q=22. Locations for the same events shift systematically shallower by about 500 m when surface wave geometrical spreading is assumed. Locations are consistent with results obtained using arrival time methods. The validity of the method is tested against synthetic low-frequency events constructed from a 2-D finite difference model including visco-elastic properties. Two example events are tested; one from a point source triggered in a low velocity conduit ranging between 100-1100 m below the surface, and the second triggered in a conduit located 1500-2500 m below the surface. Resulting seismograms have emergent onsets and extended codas and include the effect of conduit resonance. Employing geometrical spreading and attenuation from the finite-difference modelling, we obtain locations within the respective model conduits, validating our approach. The location depths are sensitive to the assumed geometric spreading and Q model. We can distinguish between two sources separated by about 1000 meters only if we know the decay parameters.
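
    A schematic version of such an amplitude-based location is sketched below: station-averaged amplitudes are modelled with geometrical spreading and attenuation, and a grid search over candidate sources fits out the unknown source amplitude. The decay parameters, geometry and synthetic data are illustrative, not the Montserrat values.

    ```python
    import numpy as np

    def amplitude_locate(stations, amps, grid, gamma=1.0, f=2.0, Q=22.0, v=1500.0):
        """Grid-search an event location from station-averaged signal amplitudes.

        Assumed decay model: A_i = A0 * r_i**(-gamma) * exp(-pi*f*r_i/(Q*v)), with
        gamma = 1 for body-wave or 0.5 for surface-wave geometrical spreading.  The
        unknown source amplitude A0 is fitted per trial location in log space.
        """
        log_amps = np.log(amps)
        best, best_cost = None, np.inf
        for x in grid:
            r = np.linalg.norm(stations - x, axis=1)
            decay = -gamma * np.log(r) - np.pi * f * r / (Q * v)
            log_a0 = np.mean(log_amps - decay)          # least-squares source term
            cost = np.sum((log_amps - (log_a0 + decay))**2)
            if cost < best_cost:
                best, best_cost = x, cost
        return best

    # Toy usage: 7 surface stations around a synthetic source 1 km deep
    rng = np.random.default_rng(7)
    stations = np.column_stack([rng.uniform(-2000, 2000, 7),
                                rng.uniform(-2000, 2000, 7),
                                np.zeros(7)])
    true_src = np.array([200.0, -300.0, -1000.0])
    r = np.linalg.norm(stations - true_src, axis=1)
    amps = 5.0 * r**-1.0 * np.exp(-np.pi * 2.0 * r / (22.0 * 1500.0))
    grid = np.array([[x, y, z] for x in range(-500, 501, 100)
                     for y in range(-500, 501, 100)
                     for z in range(-2000, 1, 200)])
    print(amplitude_locate(stations, amps, grid))   # recovers (200, -300, -1000)
    ```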

  6. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  7. The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations

    NASA Astrophysics Data System (ADS)

    Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.

    2010-10-01

    We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows by continuous gas infall. The gas infall rate is parameterized by a Gaussian formula with one free parameter: the infall-peak time tp. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars. The gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M*, a model adopting a late infall-peak time tp results in blue colors, low metallicity, a high specific star formation rate (SFR), and a high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that a low-mass galaxy has a later infall-peak time tp and a larger gas outflow rate than massive systems. It is shown that this model agrees not only with the local observations, but also with the observed correlation between specific SFR and galactic stellar mass (SFR/M* versus M*) at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented. It shows that the exponential-infall model predicts a higher SFR at early stages and a lower SFR at later times than the Gaussian-infall model. Our results suggest that the Gaussian infall rate may be more reasonable in describing the gas cooling process than the exponential infall rate, especially for low-mass systems.
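
    The backbone of such a model can be written in a few lines: a Gaussian infall rate feeding a gas reservoir, a Kennicutt-type power law converting gas into stars, and an outflow proportional to the SFR. The sketch below is only illustrative; the time units, efficiency eps, and outflow coefficient are assumptions, not the paper's calibrated values.

```python
# Minimal sketch of a Gaussian-infall disk model (illustrative parameter values).
import numpy as np

def evolve(t_p=5.0, sigma=2.0, eps=0.25, outflow=0.5, t_end=13.0, dt=0.001):
    """Integrate gas and stellar mass (arbitrary units, time in Gyr) with simple Euler steps."""
    gas, stars, sfr = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        infall = np.exp(-0.5 * ((t - t_p) / sigma) ** 2)   # Gaussian infall rate
        sfr = eps * gas ** 1.4                              # Kennicutt-type law
        gas += (infall - sfr - outflow * sfr) * dt          # outflow proportional to SFR
        stars += sfr * dt                                   # neglecting returned mass
    return gas, stars, sfr / max(stars, 1e-12)              # gas, M*, specific SFR

print(evolve(t_p=3.0), evolve(t_p=8.0))  # later infall peak -> higher sSFR and gas fraction
```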

  8. Surface plasmon resonances of arbitrarily shaped nanometallic structures in the small-screening-length limit

    PubMed Central

    Giannini, Vincenzo; Maier, Stefan A.; Craster, Richard V.

    2016-01-01

    According to the hydrodynamic Drude model, surface plasmon resonances of metallic nanostructures blueshift owing to the non-local response of the metal’s electron gas. The screening length characterizing the non-local effect is often small relative to the overall dimensions of the metallic structure, which enables us to derive a coarse-grained non-local description using matched asymptotic expansions; a perturbation theory for the blueshifts of arbitrarily shaped nanometallic structures is then developed. The effect of non-locality is not always a perturbation, and we present a detailed analysis of the ‘bonding’ modes of a dimer of nearly touching nanowires, where the leading-order eigenfrequencies and eigenmode distributions are shown to be a renormalization of those predicted assuming a local metal permittivity. PMID:27493575

  9. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606

  10. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  11. Theoretical study of gas hydrate decomposition kinetics--model development.

    PubMed

    Windmeier, Christoph; Oellrich, Lothar R

    2013-10-10

    In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi-chemical reaction steps. These are desorption of the guest molecule followed by local solid-body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All remaining required model parameters are directly linked to geometric considerations and a thermodynamic gas hydrate equilibrium model.

  12. Multi-spatial analysis of forest residue utilization for bioenergy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobson, Ryan A.; Keefe, Robert F.; Smith, Alistair M. S.

    2016-06-17

    The alternative energy sector has been expanding quickly in the USA since passage of the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007. Increased interest in wood-based bioenergy has led to the need for robust modeling methods to analyze woody biomass operations at landscape scales. However, analyzing woody biomass operations in regions like the US Inland Northwest is difficult due to highly variable terrain and wood characteristics. We developed the Forest Residue Economic Assessment Model (FREAM) to better integrate with Geographical Information Systems and overcome analytical modeling limitations. FREAM analyzes wood-based bioenergy logistics systems and provides a modeling platform that can be readily modified to analyze additional study locations. We evaluated three scenarios to test FREAM's utility: a local-scale scenario in which a catalytic pyrolysis process produces gasoline from 181 437 Mg yr-1 of forest residues, a regional-scale scenario that assumes a biochemical process to create aviation fuel from 725 748 Mg yr-1 of forest residues, and an international scenario that assumes a pellet mill producing pellets for international markets from 272 155 Mg yr-1 of forest residues. The local scenario produced gasoline for a modeled cost of $22.33 GJ-1, the regional scenario produced aviation fuel for a modeled cost of $35.83 GJ-1, and the international scenario produced pellets for a modeled cost of $10.51 GJ-1. Results show that incorporating input from knowledgeable stakeholders in the design of a model yields positive results.

  13. Students' Perceptions and Supervisors' Rating as Assessments of Interactive-Constructivist Science Teaching in Elementary School.

    ERIC Educational Resources Information Center

    Shymansky, James A.; Yore, Larry D.; Henriques, Laura; Dunkhase, John A.; Bancroft, Jean

    This study took place within the context of a four-year local systemic reform effort collaboratively undertaken by the Science Education Center at the University of Iowa and the Iowa City Community School District. The goal of the project was to move teachers towards an interactive-constructivist model of teaching and learning that assumes a…

  14. Local quantum measurement and no-signaling imply quantum correlations.

    PubMed

    Barnum, H; Beigi, S; Boixo, S; Elliott, M B; Wehner, S

    2010-04-09

    We show that, assuming that quantum mechanics holds locally, the finite speed of information is the principle that limits all possible correlations between distant parties to be quantum mechanical as well. Local quantum mechanics means that a Hilbert space is assigned to each party, and then all local positive-operator-valued measurements are (in principle) available; however, the joint system is not necessarily described by a Hilbert space. In particular, we do not assume the tensor product formalism between the joint systems. Our result shows that if any experiment would give nonlocal correlations beyond quantum mechanics, quantum theory would be invalidated even locally.

  15. Skin fluorescence model based on the Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of the collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  16. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  17. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares (GLS). To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed-form solutions to the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
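
    As a concrete illustration of weighting control points by their localization covariances, the sketch below estimates a 2-D affine transform by weighted (generalized) least squares. It is a simplified stand-in for the paper's errors-in-variables treatment; the interface and covariance handling are assumptions made for illustration.

```python
# Illustrative weighted (generalized) least-squares fit of a 2-D affine transform
# from control points with per-point covariances.
import numpy as np

def affine_gls(src, dst, covs):
    """src, dst: (N,2) control points; covs: (N,2,2) covariances of the dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    W = np.zeros((2 * n, 2 * n))
    for i, ((x, y), c) in enumerate(zip(src, covs)):
        A[2 * i] = [x, y, 1, 0, 0, 0]          # row predicting x'
        A[2 * i + 1] = [0, 0, 0, x, y, 1]      # row predicting y'
        W[2 * i:2 * i + 2, 2 * i:2 * i + 2] = np.linalg.inv(c)   # weight = inverse covariance
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)            # GLS normal equations
    return theta.reshape(2, 3)                 # [[a11, a12, tx], [a21, a22, ty]]
```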

  18. The Effects of Climate Model Similarity on Local, Risk-Based Adaptation Planning

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Brown, C. M.

    2014-12-01

    The climate science community has recently proposed techniques to develop probabilistic projections of climate change from ensemble climate model output. These methods provide a means to incorporate the formal concept of risk, i.e., the product of impact and probability, into long-term planning assessments for local systems under climate change. However, approaches for pdf development often assume that different climate models provide independent information for the estimation of probabilities, despite model similarities that stem from a common genealogy. Here we utilize an ensemble of projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to develop probabilistic climate information, with and without an accounting of inter-model correlations, and use it to estimate climate-related risks to a local water utility in Colorado, U.S. We show that the tail risk of extreme climate changes in both mean precipitation and temperature is underestimated if model correlations are ignored. When coupled with impact models of the hydrology and infrastructure of the water utility, the underestimation of extreme climate changes substantially alters the quantification of risk for water supply shortages by mid-century. We argue that progress in climate change adaptation for local systems requires the recognition that there is less information in multi-model climate ensembles than previously thought. Importantly, adaptation decisions cannot be limited to the spread in one generation of climate models.
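
    A toy calculation makes the point about model similarity concrete: if the ensemble members share a common correlation, the distribution of the multi-model mean has heavier tails than the independent-model assumption predicts. The ensemble size, spread, and correlation below are invented for illustration and are not the CMIP5 values used in the study.

```python
# Illustrative effect of inter-model correlation on the tails of an ensemble estimate.
import numpy as np

rng = np.random.default_rng(0)
M, sigma, rho = 30, 1.0, 0.5          # number of models, spread, assumed common correlation

cov_indep = sigma**2 * np.eye(M)
cov_corr = sigma**2 * (rho * np.ones((M, M)) + (1 - rho) * np.eye(M))

def ensemble_mean_samples(cov, n=100000):
    draws = rng.multivariate_normal(np.zeros(M), cov, size=n)
    return draws.mean(axis=1)         # distribution of the multi-model mean

for name, cov in [("independent", cov_indep), ("correlated", cov_corr)]:
    s = ensemble_mean_samples(cov)
    print(name, "95th percentile of ensemble mean:", np.quantile(s, 0.95))
# The correlated case has heavier tails: ignoring model similarity
# underestimates the probability of extreme climate changes.
```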

  19. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most of the applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU. Because this self-localization relies on a low-accuracy IMU, self-localization in indoor environments is still challenging. Self-localization methods using images have also been developed, and their accuracy is increasing. This paper develops a self-localization method without GPS in indoor environments by simultaneously integrating the sensors on mobile devices, such as the IMU and cameras. The proposed method consists of observations, forecasting, and filtering. The position and velocity of the mobile device are defined as a state vector. In this self-localization, observations correspond to the observation data from the IMU and camera (observation vector), forecasting to a mobile-device motion model (system model), and filtering to a tracking method based on inertial surveying, the coplanarity condition, and an inverse depth model (observation model). Positions of the mobile device being tracked are estimated by the system model (forecasting step), which is assumed to be a linear motion model. The estimated positions are then optimized with reference to the new observation data based on likelihood (filtering step). The optimization at the filtering step corresponds to estimation of the maximum a posteriori probability. A particle filter is utilized for the calculation through the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment. The experiments confirm the high performance of the method.
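
    A minimal particle-filter loop of the kind described (constant-velocity forecasting, likelihood-based filtering, resampling) might look like the sketch below. It uses a generic Gaussian position observation in place of the paper's inertial-surveying/coplanarity/inverse-depth observation model, so the observation model and all noise levels here are assumptions.

```python
# Minimal particle filter with a constant-velocity system model and a placeholder
# Gaussian position observation model.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=1000, dt=0.1,
                    process_std=0.5, obs_std=1.0):
    """observations: (T,2) noisy 2-D position measurements; returns (T,4) state estimates."""
    # state = [x, y, vx, vy]; initialize particles around the first observation
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = observations[0] + rng.normal(0, obs_std, (n_particles, 2))
    estimates = []
    for z in observations:
        # forecasting step: propagate with the linear motion model plus process noise
        particles[:, :2] += particles[:, 2:] * dt
        particles += rng.normal(0, process_std, particles.shape) * dt
        # filtering step: weight particles by the observation likelihood
        d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / obs_std**2) + 1e-300   # guard against degenerate weights
        w /= w.sum()
        estimates.append(w @ particles)               # posterior-mean estimate
        # resampling step (simple multinomial resampling)
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```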

  20. A model for food and stimulus changes that signal time-based contingency changes.

    PubMed

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2014-11-01

    When the availability of reinforcers depends on time since an event, time functions as a discriminative stimulus. Behavioral control by elapsed time is generally weak, but may be enhanced by added stimuli that act as additional time markers. The present paper assessed the effect of brief and continuous added stimuli on control by time-based changes in the reinforcer differential, using a procedure in which the local reinforcer ratio reversed at a fixed time after the most recent reinforcer delivery. Local choice was enhanced by the presentation of the brief stimuli, even when the stimulus change signalled only elapsed time, but not the local reinforcer ratio. The effect of the brief stimulus presentations on choice decreased as a function of time since the most recent stimulus change. We compared the ability of several versions of a model of local choice to describe these data. The data were best described by a model which assumed that error in discriminating the local reinforcer ratio arose from imprecise discrimination of reinforcers in both time and space, suggesting that timing behavior is controlled not only by discrimination of elapsed time, but also by discrimination of the reinforcer differential in time. © Society for the Experimental Analysis of Behavior.

  1. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
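
    The contrast between the two update rules can be stated in a few lines of code: for a two-cue compound paired with an outcome, a total-error (Rescorla-Wagner-style) rule divides the available associative strength among the cues, whereas a local-error rule lets each cue approach the outcome's maximum independently. The learning rate and trial count below are arbitrary illustration values, not the paper's fitted parameters.

```python
# Toy comparison of total-error-reduction (TER) and local-error-reduction (LER)
# updates for a two-cue compound paired with an outcome of magnitude lam.
def train(rule, n_trials=200, alpha=0.2, lam=1.0):
    v = [0.0, 0.0]                        # associative strengths of cues A and B
    for _ in range(n_trials):
        for i in range(2):
            if rule == "TER":             # error relative to the whole compound
                error = lam - sum(v)
            else:                         # "LER": error relative to this cue only
                error = lam - v[i]
            v[i] += alpha * error
    return v

print("TER:", train("TER"))   # cues share the outcome: strengths sum to ~1.0
print("LER:", train("LER"))   # each cue independently approaches ~1.0
```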

  2. How a Small Quantum Bath Can Thermalize Long Localized Chains

    NASA Astrophysics Data System (ADS)

    Luitz, David J.; Huveneers, François; De Roeck, Wojciech

    2017-10-01

    We investigate the stability of the many-body localized phase for a system in contact with a single ergodic grain modeling a Griffiths region with low disorder. Our numerical analysis provides evidence that even a small ergodic grain consisting of only three qubits can delocalize a localized chain as soon as the localization length exceeds a critical value separating localized and extended regimes of the whole system. We present a simple theory, consistent with De Roeck and Huveneers's arguments in [Phys. Rev. B 95, 155129 (2017), 10.1103/PhysRevB.95.155129] that assumes a system to be locally ergodic unless the local relaxation time determined by Fermi's golden rule is larger than the inverse level spacing. This theory predicts a critical value for the localization length that is perfectly consistent with our numerical calculations. We analyze in detail the behavior of local operators inside and outside the ergodic grain and find excellent agreement of numerics and theory.

  3. Multimodal Image Analysis in Alzheimer’s Disease via Statistical Modelling of Non-local Intensity Correlations

    NASA Astrophysics Data System (ADS)

    Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien

    2016-04-01

    The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.

  4. Energy balance in the solar transition region. I - Hydrostatic thermal models with ambipolar diffusion

    NASA Technical Reports Server (NTRS)

    Fontenla, J. M.; Avrett, E. H.; Loeser, R.

    1990-01-01

    The energy balance in the lower transition region is analyzed by constructing theoretical models which satisfy the energy balance constraint. The energy balance is achieved by balancing the radiative losses and the energy flowing downward from the corona. This energy flow is mainly in two forms: conductive heat flow and hydrogen ionization energy flow due to ambipolar diffusion. Hydrostatic equilibrium is assumed, and, in a first calculation, local mechanical heating and Joule heating are ignored. In a second model, some mechanical heating compatible with chromospheric energy-balance calculations is introduced. The models are computed for a partial non-LTE approach in which radiation departs strongly from LTE but particles depart from Maxwellian distributions only to first order. The results, which apply to cases where the magnetic field is either absent, or uniform and vertical, are compared with the observed Lyman lines and continuum from the average quiet sun. The approximate agreement suggests that this type of model can roughly explain the observed intensities in a physically meaningful way, assuming only a few free parameters specified as chromospheric boundary conditions.

  5. Analysis of Stress in Steel and Concrete in Cfst Push-Out Test Samples

    NASA Astrophysics Data System (ADS)

    Grzeszykowski, Bartosz; Szadkowska, Magdalena; Szmigiera, Elżbieta

    2017-09-01

    The paper presents an analysis of stress in steel and concrete in CFST composite elements subjected to push-out tests. Two analytical models of stress distribution are presented. The bond at the interface between steel and concrete in the initial phase of the push-out test is provided by adhesion. Until the force reaches a certain value, slip between the two materials does not occur or is negligibly small, which ensures full composite action of the specimen. In the first analytical model, full bond between the two materials was assumed. This model allows the value of the force at which local loss of adhesion begins in a given cross section to be estimated. In the second model it was assumed that the bond stress distribution is constant along the shear transfer length of the specimen. Based on this, formulas for a triangular distribution of stress in steel and concrete at the maximum push-out force were derived and compared with the experimental results. Both models can be used to better understand the mechanisms of interaction between steel and concrete in composite steel-concrete columns.

  6. Frequency-dependent local field factors in dielectric liquids by a polarizable force field and molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Davari, Nazanin; Haghdani, Shokouh; Åstrand, Per-Olof

    2015-12-01

    A force field model for calculating local field factors, i.e. the linear response of the local electric field, for example at a nucleus in a molecule, with respect to an applied electric field, is discussed. It is based on a combined charge-transfer and point-dipole interaction model for the polarizability, and thereby it includes two physically distinct terms for describing electronic polarization: changes in atomic charges arising from transfer of charge between the atoms, and atomic induced dipole moments. A time dependence is included both for the atomic charges and the atomic dipole moments, and if they are assumed to oscillate with the same frequency as the applied electric field, a model for frequency-dependent properties is obtained. Furthermore, if a lifetime of the excited states is included, a model for the complex frequency-dependent polarizability is obtained, which also includes information about excited states and the absorption spectrum. We thus present a model for the frequency-dependent local field factors through the first molecular excitation energy. It is combined with molecular dynamics simulations of liquids where a large set of configurations is sampled and for which local field factors are calculated. We are normally not interested in the average of the local field factor but rather in configurations where it is as high as possible. In electrical insulation, we would like to avoid high local field factors to reduce the risk of electrical breakdown, whereas, for example, in surface-enhanced Raman spectroscopy, high local field factors are desired to give dramatically increased intensities.

  7. 24 CFR 248.121 - Annual authorized return and aggregate preservation rents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for the project, assuming a market rate of interest and customary terms; (3) Debt service on the... rehabilitation loan for the project, assuming a market rate of interest and customary terms; (3) Debt service on... local governments and assuming market rate interest rates. ...

  8. Modeling epidemic spread with awareness and heterogeneous transmission rates in networks.

    PubMed

    Shang, Yilun

    2013-06-01

    During an epidemic outbreak in a human population, susceptibility to infection can be reduced by raising awareness of the disease. In this paper, we investigate the effects of three forms of awareness (i.e., contact, local, and global) on the spread of a disease in a random network. Connectivity-correlated transmission rates are assumed. By using mean-field theory and numerical simulation, we show that both local and contact awareness can raise the epidemic thresholds while global awareness cannot, which mirrors the recent results of Wu et al. The obtained results point out that individual behavior in the presence of an infectious disease has a great influence on the epidemic dynamics. Our method enriches mean-field analysis in epidemic models.
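
    The qualitative effect can be reproduced with a toy mean-field SIS iteration in which awareness terms scale down the effective transmission rate. The sketch below uses a homogeneous network of degree k rather than the paper's connectivity-correlated, degree-resolved formulation, and all parameter values are assumptions.

```python
# Toy mean-field SIS model with local and global awareness reducing susceptibility.
def epidemic_fraction(beta=0.3, gamma=0.2, k=6, a_local=0.5, a_global=0.5,
                      steps=5000, dt=0.01):
    """Return the endemic infected fraction rho for a homogeneous network of degree k."""
    rho = 0.01
    for _ in range(steps):
        # awareness rescales the transmission rate:
        #   local awareness  depends on infected neighbours (~ k * rho)
        #   global awareness depends on overall prevalence rho
        beta_eff = beta * (1 - a_local) ** (k * rho) * (1 - a_global * rho)
        drho = beta_eff * k * rho * (1 - rho) - gamma * rho
        rho = max(rho + drho * dt, 0.0)
    return rho

print(epidemic_fraction(a_local=0.0), epidemic_fraction(a_local=0.8))
# Stronger local awareness pushes the endemic level down markedly.
```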

  9. Revenue Prediction of a Local Event Using the Mathematical Model of Hit Phenomena

    NASA Astrophysics Data System (ADS)

    Ishii, A.; Matsumoto, T.; Miki, S.

    We propose a theoretical approach to investigate human-human interaction in society, which uses a many-body theory that incorporates human-human interaction. We treat advertisement as an external force, and include the word of mouth (WOM) effect as a two-body interaction between humans and the rumor effect as a three-body interaction among humans. The parameters defining the strength of the human interactions are assumed to be constant values. The calculated results explain well the two local events "Mizuki-Shigeru Road in Sakaiminato" and "the sculpture festival at Tottori" in Japan.

  10. A Test of the Interstellar Boundary EXplorer Ribbon Formation in the Outer Heliosheath

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamayunov, Konstantin V.; Rassoul, Hamid; Heerikhuisen, Jacob, E-mail: kgamayunov@fit.edu

    NASA’s Interstellar Boundary EXplorer (IBEX) mission is imaging energetic neutral atoms (ENAs) propagating to Earth from the outer heliosphere and local interstellar medium (LISM). A dominant feature in all ENA maps is a ribbon of enhanced fluxes that was not predicted before IBEX. While more than a dozen models of the ribbon formation have been proposed, consensus has gathered around the so-called secondary ENA model. Two classes of secondary ENA models have been proposed; the first class assumes weak scattering of the energetic pickup protons in the LISM, and the second class assumes strong but spatially localized scattering. Here we present a numerical test of the “weak scattering” version of the secondary ENA model using our gyro-averaged kinetic model for the evolution of the phase-space distribution of protons in the outer heliosheath. As input for our test, we use distributions of the primary ENAs from our MHD-plasma/kinetic-neutral model of the heliosphere-LISM interaction. The magnetic field spectrum for the large-scale interstellar turbulence and an upper limit for the amplitude of small-scale local turbulence (SSLT) generated by protons are taken from observations by Voyager 1 in the LISM. Hybrid simulations of energetic protons are also used to set the bounding wavenumbers for the spectrum of the SSLT. Our test supports the “weak scattering” version. This makes an additional solid step on the way to understanding the origin and formation of the IBEX ribbon and thus to improving our understanding of the interaction between the heliosphere and the LISM.

  11. A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime

    NASA Astrophysics Data System (ADS)

    Sciarretta, Antonio

    2018-01-01

    This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime in the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, a constant external force, the harmonic oscillator, the particle in a box, the Delta potential, the particle on a ring, and the particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.

  12. Propaganda, Public Information, and Prospecting: Explaining the Irrational Exuberance of Central Place Foragers During a Late Nineteenth Century Colorado Silver Rush.

    PubMed

    Glover, Susan M

    2009-10-01

    Traditionally, models of resource extraction assume individuals act as if they form strategies based on complete information. In reality, gathering information about environmental parameters may be costly. An efficient information gathering strategy is to observe the foraging behavior of others, termed public information. However, media can exploit this strategy by appearing to supply accurate information while actually shaping information to manipulate people to behave in ways that benefit the media or their clients. Here, I use Central Place Foraging (CPF) models to investigate how newspaper propaganda shaped ore foraging strategies of late nineteenth-century Colorado silver prospectors. Data show that optimistic values of silver ore published in local newspapers led prospectors to place mines at a much greater distance than was profitable. Models assuming perfect information neglect the possibility of misinformation among investors, and may underestimate the extent and degree of human impacts on areas of resource extraction.
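
    The mismatch the paper documents can be illustrated with a back-of-the-envelope central-place calculation: the radius at which a round trip to a claim stops being profitable grows linearly with the believed ore value, so inflated newspaper valuations inflate the perceived foraging radius. All numbers in the sketch below are invented for illustration.

```python
# Back-of-the-envelope central-place-foraging sketch: maximum profitable distance
# as a function of the believed value of a load of ore.
def max_profitable_distance(ore_value_per_load, travel_cost_per_km, fixed_cost=0.0):
    """Distance at which the net return per load drops to zero (round trip)."""
    return (ore_value_per_load - fixed_cost) / (2 * travel_cost_per_km)

true_value, advertised_value = 40.0, 120.0     # $/load (hypothetical)
cost = 1.5                                     # $/km travelled (hypothetical)
print("profitable radius, true value:       %.1f km" % max_profitable_distance(true_value, cost))
print("profitable radius, advertised value: %.1f km" % max_profitable_distance(advertised_value, cost))
# Inflated published valuations push the perceived profitable radius far beyond
# the radius that actual ore values would support.
```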

  13. Many-body localization proximity effects in platforms of coupled spins and bosons

    NASA Astrophysics Data System (ADS)

    Marino, J.; Nandkishore, R. M.

    2018-02-01

    We discuss the onset of many-body localization in a one-dimensional system composed of a XXZ quantum spin chain and a Bose-Hubbard model linearly coupled together. We consider two complementary setups, depending on whether spatial disorder is initially imprinted on the spins or on the bosons; in both cases, we explore the conditions for the disordered portion of the system to localize by proximity to the other, clean half. Assuming that the dynamics of one of the two parts develops on shorter time scales than that of the other, we can adiabatically eliminate the fast degrees of freedom and derive an effective Hamiltonian for the remainder of the system using projection operator techniques. Performing a locator expansion in the strength of the many-body interaction term or in the hopping amplitude of the effective Hamiltonian thus derived, we present results on the stability of the many-body localized phases induced by the proximity effect. We also briefly comment on the feasibility of the proposed model in modern quantum optics architectures, with the long-term perspective of experimentally realizing Anderson or many-body localization proximity effects in composite open systems.

  14. Effect of large magnetic islands on screening of external magnetic perturbation fields at slow plasma flow

    NASA Astrophysics Data System (ADS)

    Li, L.; Liu, Y. Q.; Huang, X.; Luan, Q.; Zhong, F. C.

    2017-02-01

    A toroidal resistive magneto-hydrodynamic plasma response model, involving large magnetic islands, is proposed and numerically investigated, based on local flattening of the equilibrium pressure profile near a rational surface. It is assumed that such islands can be generated near the edge of the tokamak plasma, due to the penetration of the resonant magnetic perturbations, used for the purpose of controlling the edge localized mode. Within this model, it is found that the local flattening of the equilibrium pressure helps to mitigate the toroidal curvature induced screening effect [Glasser et al., Phys. Fluids 7, 875 (1975)]—the so called Glasser-Greene-Johnson screening, when the local toroidal flow near the mode rational surface is very slow (for example, as a result of mode locking associated with the field penetration). The saturation level of the plasma response amplitude is computed, as the plasma rotation frequency approaches zero. The local modification of the plasma resistivity inside the magnetic island is found to also affect the saturation level of the plasma response at vanishing flow.

  15. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  16. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  17. A Complex Network Perspective on Clinical Science

    PubMed Central

    Hofmann, Stefan G.; Curtiss, Joshua; McNally, Richard J.

    2016-01-01

    Contemporary classification systems for mental disorders assume that abnormal behaviors are expressions of latent disease entities. An alternative to the latent disease model is the complex network approach. Instead of assuming that symptoms arise from an underlying disease entity, the complex network approach holds that disorders exist as systems of interrelated elements of a network. This approach also provides a framework for the understanding of therapeutic change. Depending on the structure of the network, change can occur abruptly once the network reaches a critical threshold (the tipping point). Homogeneous and highly connected networks often recover more slowly from local perturbations when the network approaches the tipping point, allowing for the possibility to predict treatment change, relapse, and recovery. In this article we discuss the complex network approach as an alternative to the latent disease model, and we discuss its implications for classification, therapy, relapse, and recovery. PMID:27694457

  18. Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes

    NASA Astrophysics Data System (ADS)

    Lavaux, Guilhem; Jasche, Jens

    2016-01-01

    This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations and matter density fields within a Gaussian statistic approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology. This second step allows for the reconstruction of both the final density field and the initial conditions at z = 1000 assuming a fixed bias model. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolation and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian based technique to classify structures.

  19. Model of a multiverse providing the dark energy of our universe

    NASA Astrophysics Data System (ADS)

    Rebhan, E.

    2017-09-01

    It is shown that the dark energy presently observed in our universe can be regarded as the energy of a scalar field driving an inflation-like expansion of a multiverse with ours being a subuniverse among other parallel universes. A simple model of this multiverse is elaborated: Assuming closed space geometry, the origin of the multiverse can be explained by quantum tunneling from nothing; subuniverses are supposed to emerge from local fluctuations of separate inflation fields. The standard concept of tunneling from nothing is extended to the effect that in addition to an inflationary scalar field, matter is also generated, and that the tunneling leads to an (unstable) equilibrium state. The cosmological principle is assumed to pertain from the origin of the multiverse until the first subuniverses emerge. With increasing age of the multiverse, its spatial curvature decays exponentially so fast that, due to sharing the same space, the flatness problem of our universe resolves by itself. The dark energy density imprinted by the multiverse on our universe is time-dependent, but such that the ratio w = ϱc²/p of its mass density (times c²) and pressure is time-independent and assumes a value -1 + 𝜖 with arbitrary 𝜖 > 0. 𝜖 can be chosen so small that the dark energy model of this paper can be fitted to the current observational data as well as the cosmological constant model.

  20. Effect of bipolar electric fatigue on polarization switching in lead-zirconate-titanate ceramics

    NASA Astrophysics Data System (ADS)

    Zhukov, Sergey; Fedosov, Sergey; Glaum, Julia; Granzow, Torsten; Genenko, Yuri A.; von Seggern, Heinz

    2010-07-01

    From a comparison of experimental results on polarization switching in fresh and electrically fatigued lead-zirconate-titanate (PZT) over a wide range of applied fields and switching times it is concluded that fatigue alters the local field distribution inside the sample due to the generation of discrete defects, such as voids and cracks. Such defects have a strong influence on the overall electric field distribution by their shape and dielectric permittivity. Based on this hypothesis, a new phenomenological model of polarization switching in fatigued PZT is proposed. The model assumes that the fatigued sample can be composed of different local regions which exhibit different field strengths but otherwise can be considered as unfatigued. Consequently, the temporal response of a fatigued sample is assumed to be the superposition of the field-dependent temporal responses of unfatigued samples weighted by their respective volume fractions. A certain part of the volume is excluded from the overall switching process due to domain pinning even at earlier stages of fatigue, which can be recovered by annealing. The suitability of the proposed model is demonstrated by a good correlation between experimental and calculated data for differently fatigued samples. A plausible cause of the formation of such regions is the generation of defects such as microcracks and the change in electrical properties at imperfections such as pores or voids.

  1. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

    The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and magnetic pitch angle is less than -8°. These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
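
    A stripped-down version of the backtracking step looks like the sketch below: a cosmic ray observed at Earth is propagated outwards with its charge reversed, with the bending at each step set by the gyroradius in the local model field. The toy azimuthal field, rigidity, starting point, and step size are all placeholder assumptions, not the dissertation's field models or analysis settings.

```python
# Toy backtracking of an ultra-high-energy cosmic ray through a model Galactic field.
import numpy as np

KPC_M = 3.086e19                  # metres per kiloparsec

def toy_field(pos_kpc):
    """Toy azimuthal disk field of ~2 microgauss (1 muG = 1e-10 T); stands in for a real model."""
    x, y, _ = pos_kpc
    r = np.hypot(x, y) + 1e-9
    return 2e-10 * np.array([-y / r, x / r, 0.0])          # tesla

def backtrack(arrival_dir, energy_eV=6e19, Z=1, start_kpc=(-8.5, 0.0, 0.0),
              step_kpc=0.01, n_steps=4000):
    """Trace the antiparticle (charge reversed) from Earth against the arrival direction."""
    u = -np.asarray(arrival_dir, float)
    u /= np.linalg.norm(u)
    pos = np.array(start_kpc, float)
    c = 2.998e8                                            # speed of light [m/s]
    for _ in range(n_steps):
        B = toy_field(pos)
        Bmag = np.linalg.norm(B) + 1e-30
        # gyroradius r_g = E / (Z e c B); the eV-to-joule factor cancels the charge e
        r_g_kpc = energy_eV / (Z * c * Bmag) / KPC_M
        # charge reversal for backtracking flips the sign of the deflection
        u -= np.cross(u, B / Bmag) * (step_kpc / r_g_kpc)
        u /= np.linalg.norm(u)
        pos += u * step_kpc
    return pos, u        # position [kpc] and direction after leaving the modeled region

# Example: a particle arriving from Galactic north, observed at the Sun's position (hypothetical)
print(backtrack([0.0, 0.0, -1.0]))
```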

  2. An assessment of adult risks of paresthesia due to mercury from coal combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipfert, F.; Moskowitz, P.; Fthenakis, V.

    1993-11-01

    This paper presents results from a probabilistic assessment of the mercury health risks associated with a hypothetical 1000 MW coal-fired power plant. The assessment draws on the extant knowledge in each of the important steps in the chain from emissions to health effects, based on methylmercury derived from seafood. For this assessment, we define three separate sources of dietary Hg: canned tuna (affected by global Hg), marine shellfish and finfish (affected by global Hg), and freshwater gamefish (affected by both global Hg and local deposition from nearby sources). We consider emissions of both reactive and elemental mercury from the hypothetical plant (assumed to burn coal with the US average Hg content) and estimate wet and dry deposition rates; atmospheric reactions are not considered. Mercury that is not deposited within 50 km is assumed to enter the global background pool. The incremental Hg in local fish is assumed to be proportional to the incremental total Hg deposition. Three alternative dose-response models were derived from published data on specific neurological responses, in this case, adult paresthesia (skin prickling or tingling of the extremities). Preliminary estimates show the upper 95th percentile of the baseline risk attributed to seafood consumption to be around 10^-4 (1 chance in 10,000). Based on a doubling of Hg deposition in the immediate vicinity of the hypothetical plant, the incremental local risk from seafood would be about a factor of 4 higher. These risks should be compared to the estimated background prevalence rate of paresthesia, which is about 7%.

  3. Multi-scale finite element modeling of strain localization in geomaterials with strong discontinuity

    NASA Astrophysics Data System (ADS)

    Lai, Timothy Yu

    2002-01-01

    Geomaterials such as soils and rocks undergo strain localization under various loading conditions. Strain localization manifests itself in the form of a shear band, a narrow zone of intense straining. It is now generally recognized that these localized deformations lead to an accelerated softening response and influence the response of structures at or near failure. In order to accurately predict the behavior of geotechnical structures, the effects of strain localization must be included in any model developed. In this thesis, a multi-scale Finite Element (FE) model has been developed that captures the macro- and micro-field deformation patterns present during strain localization. The FE model uses a strong discontinuity approach in which a jump in the displacement field is assumed. The onset of strain localization is detected using bifurcation theory, which checks when the governing equations lose ellipticity. Two types of bifurcation, continuous and discontinuous, are considered. Precise conditions for plane strain loading conditions are reported for each type of bifurcation. Post-localization behavior is governed by the traction relations on the band. Different plasticity models, such as Mohr-Coulomb, Drucker-Prager, and a modified Mohr-Coulomb yield criterion, were implemented together with cohesion softening and cutoff for the post-localization behavior. The FE model is implemented in the FORTRAN code SPIN2D-LOC using enhanced constant strain triangular (CST) elements. The model is formulated using the standard Galerkin finite element method and is applicable to problems under undrained conditions and small deformation theory. A band-tracing algorithm is implemented to track the propagation of the shear band. To validate the model, several simulations are performed, from a simple compression test of soft rock to the simulation of a full-scale geosynthetic-reinforced soil wall model undergoing strain localization. Results from both the standard and the enhanced FE method are included for comparison. The resulting load-displacement curves show that the model can represent the softening behavior of geomaterials once strain localization is detected. The orientation of the shear band is found to depend on both the friction and dilation angles of the geomaterial. For most practical problems, slight mesh dependency can be expected but is associated with the standard FE interpolation rather than the strong discontinuity enhancements.

  4. Modeling and Analysis of Ultrarelativistic Heavy Ion Collisions

    NASA Astrophysics Data System (ADS)

    McCormack, William; Pratt, Scott

    2014-09-01

    High-energy collisions of heavy ions, such as gold, copper, or uranium, serve as an important means of studying quantum chromodynamic matter. When relativistic nuclei collide, a hot, energetic fireball of dissociated partonic matter is created; this super-hadronic matter is believed to be the quark gluon plasma (QGP), which is theorized to have comprised the universe immediately following the big bang. As the fireball expands and cools, it reaches freeze-out temperatures, and quarks hadronize into baryons and mesons. To characterize this super-hadronic matter, one can use balance functions, a means of studying correlations due to local charge conservation. In particular, the simple model used in this research assumed two waves of localized charge-anticharge production, with an abrupt transition from the QGP stage to hadronization. Balance functions were constructed as the sum of these two charge production components, and four parameters were manipulated to match the model's output with experimental data taken from the STAR Collaboration at RHIC. Results show that the chemical composition of the super-hadronic matter is consistent with that of a thermally equilibrated QGP. An MSU REU Project.

  5. On the more accurate channel model and positioning based on time-of-arrival for visible light localization

    NASA Astrophysics Data System (ADS)

    Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed

    2017-01-01

    This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm using the relation between the time-of-arrival (TOA) and the distance, for two cases in which the vertical distance between transmitter and receiver is assumed to be either known or unknown. We propose two estimators, namely the maximum likelihood estimator and an estimator based on the method of moments. To provide an evaluation basis for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that, when the transmitter and receiver are perfectly synchronized, the proposed model and estimators yield superior positioning performance compared with the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents roughly a 20 dB reduction in the localization error bound in comparison with the previous model for some practical scenarios.
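
    For the synchronized, known-vertical-distance case, the core geometric step is converting each TOA into a horizontal range and solving a linearized lateration problem. The sketch below is a generic least-squares version of that step with an invented LED layout; it is not the paper's maximum likelihood or method-of-moments estimator.

```python
# TOA-based lateration with a known vertical distance h between LEDs and receiver.
import numpy as np

C = 3e8  # speed of light [m/s]

def locate_from_toa(anchors_xy, toas, h):
    """anchors_xy: (N,2) LED positions; toas: (N,) TOAs [s]; h: known vertical gap [m]."""
    anchors_xy = np.asarray(anchors_xy, float)
    d = C * np.asarray(toas, float)            # line-of-sight distances
    rho2 = d**2 - h**2                         # squared horizontal ranges
    x0, y0 = anchors_xy[0]
    A, b = [], []
    # Linearize by subtracting the first anchor's circle equation from the others
    for (xi, yi), r2 in zip(anchors_xy[1:], rho2[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(xi**2 - x0**2 + yi**2 - y0**2 - r2 + rho2[0])
    xy, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xy                                   # receiver (x, y); z follows from h

# Example with a hypothetical three-LED ceiling layout
leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_xy, h = np.array([1.0, 2.0]), 2.5
toas = [np.sqrt(np.sum((np.array(l) - true_xy)**2) + h**2) / C for l in leds]
print(locate_from_toa(leds, toas, h))           # ~[1.0, 2.0]
```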

  6. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  7. Civil Tiltrotor Feasibility Study for the New York and Washington Terminal Areas

    NASA Technical Reports Server (NTRS)

    Stouffer, Virginia; Johnson, Jesse; Gribko, Joana; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    NASA tasked LMI to assess the potential contributions of a yet-undeveloped Civil Tiltrotor aircraft (CTR) in improving capacity in the National Airspace System in all weather conditions. The CTRs studied have assumed operating parameters beyond current CTR capabilities. LMI analyzed CTRs three ways: in fast-time terminal area modeling simulations of New York and Washington to determine delay and throughput impacts; in the Integrated Noise Model, to determine local environmental impact; and with an economic model, to determine the price viability of a CTR. The fast-time models encompassed a 250 nmi range and included traffic interactions from local airports. Both the fast-time simulation and the noise model assessed impacts from traffic levels projected for 1999, 2007, and 2017. Results: CTRs can reduce terminal area delays due to concrete congestion in all time frames. For maximum effect, the ratio of CTRs to jets and turboprop aircraft at a subject airport should be optimized. The economic model considered US traffic only and forecasted CTR sales beginning in 2010.

  8. Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.

    PubMed

    Mohammed, Nazmi A; Elkarim, Mohammed Abd

    2015-08-10

    This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
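
    To make the RSSI-distance relation concrete, the sketch below implements only the directed line-of-sight Lambertian link that such systems typically assume, and inverts it for distance; the diffuse reflections whose impact the paper quantifies are deliberately ignored, and the detector area, Lambertian order, and geometry are illustrative assumptions.

      import numpy as np

      def lambertian_rss(p_tx, d, h, area=1e-4, m=1):
          """LOS received power for a ceiling LED pointing straight down at a
          horizontal photodiode: cos(irradiance angle) = cos(incidence angle) = h/d."""
          cos_ang = h / d
          return p_tx * (m + 1) * area / (2 * np.pi * d**2) * cos_ang**(m + 1)

      def distance_from_rss(p_rx, p_tx, h, area=1e-4, m=1):
          """Invert the LOS model: P_rx = k / d**(m+3) with k collecting constants."""
          k = p_tx * (m + 1) * area * h**(m + 1) / (2 * np.pi)
          return (k / p_rx) ** (1.0 / (m + 3))

      p = lambertian_rss(0.45, d=2.5, h=2.0)
      print(distance_from_rss(p, 0.45, h=2.0))  # recovers 2.5 m for the LOS-only link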

  9. Localization and characterization of X chromosome inversion breakpoints separating Drosophila mojavensis and Drosophila arizonae.

    PubMed

    Cirulli, Elizabeth T; Noor, Mohamed A F

    2007-01-01

    Ectopic exchange between transposable elements or other repetitive sequences along a chromosome can produce chromosomal inversions. As a result, genome sequence studies typically find sequence similarity between corresponding inversion breakpoint regions. Here, we identify and investigate the breakpoint regions of the X chromosome inversion distinguishing Drosophila mojavensis and Drosophila arizonae. We localize one inversion breakpoint to 13.7 kb and localize the other to a 1-Mb interval. Using this localization and assuming microsynteny between Drosophila melanogaster and D. arizonae, we pinpoint likely positions of the inversion breakpoints to windows of less than 3000 bp. These breakpoints define the size of the inversion to approximately 11 Mb. However, in contrast to many other studies, we fail to find significant sequence similarity between the 2 breakpoint regions. The localization of these inversion breakpoints will facilitate future genetic and molecular evolutionary studies in this species group, an emerging model system for ecological genetics.

  10. The effect of using genealogy-based haplotypes for genomic prediction

    PubMed Central

    2013-01-01

    Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971

  11. The effect of using genealogy-based haplotypes for genomic prediction.

    PubMed

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
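
    The GBLUP-type model (1), in which all covariates share one effect distribution, amounts to ridge regression on the covariate matrix, whether its columns are individual markers or haplotype covariates. The toy sketch below illustrates that idea on simulated data; the dimensions, shrinkage parameter, and simulated effects are hypothetical, and the Bayesian mixture method (2) is not shown.

      import numpy as np

      def ridge_blup(X, y, lam):
          """BLUP-style ridge solution: every covariate effect is shrunk with the
          same parameter, as in prediction model (1)."""
          p = X.shape[1]
          return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

      rng = np.random.default_rng(0)
      X = rng.integers(0, 3, size=(200, 50)).astype(float)  # covariate counts
      true_b = np.zeros(50)
      true_b[:5] = rng.normal(0.0, 1.0, 5)                  # only a few real effects
      y = X @ true_b + rng.normal(0.0, 1.0, 200)

      b_hat = ridge_blup(X, y, lam=10.0)
      gebv = X @ b_hat                                      # genomic estimated breeding values
      print(np.corrcoef(gebv, X @ true_b)[0, 1])            # prediction accuracy on the toy data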

  12. Local stresses in metal matrix composites subjected to thermal and mechanical loading

    NASA Technical Reports Server (NTRS)

    Highsmith, Alton L.; Shin, Donghee; Naik, Rajiv A.

    1990-01-01

    An elasticity solution has been used to analyze matrix stresses near the fiber/matrix interface in continuous fiber-reinforced metal-matrix composites, modeling the micromechanics in question in terms of a cylindrical fiber and cylindrical matrix sheath which is embedded in an orthotropic medium representing the composite. The model's predictions for lamina thermal and mechanical properties are applied to a laminate analysis determining ply-level stresses due to thermomechanical loading. A comparison is made between these results, which assume cylindrical symmetry, and the predictions yielded by a FEM model in which the fibers are arranged in a square array.

  13. Surface corrections for peridynamic models in elasticity and fracture

    NASA Astrophysics Data System (ADS)

    Le, Q. V.; Bobaru, F.

    2018-04-01

    Peridynamic models are derived by assuming that a material point is located in the bulk. Near a surface or boundary, material points do not have a full non-local neighborhood. This causes the effective material properties near the surface of a peridynamic model to be slightly different from those in the bulk. A number of methods/algorithms have been proposed recently for correcting this peridynamic surface effect. In this study, we investigate the efficacy and computational cost of peridynamic surface correction methods for elasticity and fracture. We provide practical suggestions for reducing the peridynamic surface effect.

  14. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers

    PubMed Central

    Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.

    2018-01-01

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency, the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are analyzed. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders because, due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. PMID:29549092

  15. Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers.

    PubMed

    Jiang, Yong; Schmidt, Renate H; Reif, Jochen C

    2018-05-04

    Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely-linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency, the HGBLUP model promises to be an interesting tool for studies in which ultra-high-density SNP data sets are analyzed. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders because, due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. Copyright © 2018 Jiang et al.
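
    A minimal sketch of the haplotype-based design step, assuming fully homozygous lines as in the derivation above: markers are grouped into blocks, every distinct allele combination within a block becomes its own one-hot column, and fitting those columns with a common shrinkage yields an HGBLUP-like model that implicitly carries within-block (local epistatic) interaction terms. Block size and the marker matrix are hypothetical.

      import numpy as np

      def haplotype_design(M, block_size):
          """Turn a 0/1 marker matrix of fully homozygous lines (n x p) into a
          haplotype-indicator matrix: each distinct allele combination within a
          block of consecutive markers gets its own one-hot column."""
          n, p = M.shape
          cols = []
          for start in range(0, p, block_size):
              block = M[:, start:start + block_size]
              haps, idx = np.unique(block, axis=0, return_inverse=True)
              idx = np.asarray(idx).ravel()
              ind = np.zeros((n, len(haps)))
              ind[np.arange(n), idx] = 1.0
              cols.append(ind)
          return np.hstack(cols)

      rng = np.random.default_rng(1)
      M = rng.integers(0, 2, size=(10, 12))        # 10 inbred lines, 12 markers
      H = haplotype_design(M, block_size=4)
      print(M.shape, '->', H.shape)                # more columns than markers, as in the paper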

  16. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    NASA Astrophysics Data System (ADS)

    Fraser, S. A.; Wood, N. J.; Johnston, D. M.; Leonard, G. S.; Greening, P. D.; Rossetto, T.

    2014-06-01

    Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate evacuation departure time or assumed a common departure time for all exposed population. In this paper, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The model is demonstrated for a case study of local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb-level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds can approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios.
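
    The sketch below is a stripped-down illustration of the two ingredients added here, distributed travel speeds and uncertain departure time, layered onto a precomputed least-cost path distance; the speed and departure-delay distributions and the arrival deadline are illustrative assumptions, not the study's values.

      import numpy as np

      def evacuation_times(path_dist_m, n_draws=10000, seed=0):
          """Monte Carlo evacuation times for one location, combining a fixed
          least-cost path distance with sampled speeds and departure delays."""
          rng = np.random.default_rng(seed)
          speed = rng.normal(1.4, 0.4, n_draws).clip(0.5, 3.0)  # walking speed (m/s)
          departure = rng.uniform(0.0, 600.0, n_draws)          # delay after warning (s)
          return departure + path_dist_m / speed

      t = evacuation_times(1200.0)                 # 1.2 km least-cost path distance
      arrival_deadline = 1800.0                    # assumed tsunami arrival time (s)
      print(np.mean(t > arrival_deadline))         # fraction of draws failing to evacuate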

  17. Discrete-state phasor neural networks

    NASA Astrophysics Data System (ADS)

    Noest, André J.

    1988-08-01

    An associative memory network with local variables assuming one of q equidistant positions on the unit circle (q-state phasors) is introduced, and its recall behavior is solved exactly for any q when the interactions are sparse and asymmetric. Such models can describe natural or artificial networks of (neuro-)biological, chemical, or electronic limit-cycle oscillators with q-fold instead of circular symmetry, or similar optical computing devices using a phase-encoded data representation.
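
    A toy recall dynamics for q-state phasor units is sketched below: each unit snaps to the q-state phasor nearest in phase to its local field. Unlike the sparse asymmetric couplings treated exactly in the paper, this illustration uses dense Hebbian-type couplings, and the network size, q, and pattern load are arbitrary.

      import numpy as np

      def phasor(k, q):
          """Map integer states k in {0, ..., q-1} to unit phasors."""
          return np.exp(2j * np.pi * k / q)

      def recall(J, k0, q, steps=20):
          """Synchronous recall: each unit snaps to the q-state phasor nearest in
          phase to its local field h_i = sum_j J_ij * s_j."""
          k = k0.copy()
          for _ in range(steps):
              h = J @ phasor(k, q)
              k = np.round(q * np.angle(h) / (2 * np.pi)).astype(int) % q
          return k

      q, n = 4, 100
      rng = np.random.default_rng(2)
      patterns = rng.integers(0, q, size=(3, n))          # stored q-state patterns
      S = phasor(patterns, q)
      J = (S.T @ S.conj()) / n                            # dense Hebbian-type couplings
      np.fill_diagonal(J, 0)

      probe = patterns[0].copy()
      flip = rng.choice(n, 15, replace=False)
      probe[flip] = rng.integers(0, q, 15)                # corrupt 15 of the 100 units
      print(np.mean(recall(J, probe, q) == patterns[0]))  # fraction of units recovered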

  18. Two Back Stress Hardening Models in Rate Independent Rigid Plastic Deformation

    NASA Astrophysics Data System (ADS)

    Yun, Su-Jin

    In the present work, the constitutive relations based on the combination of two back stresses are developed using the Armstrong-Frederick, Phillips and Ziegler’s type hardening rules. Various evolutions of the kinematic hardening parameter can be obtained by means of a simple combination of back stress rate using the rule of mixtures. Thus, a wide range of plastic deformation behavior can be depicted depending on the dominant back stress evolution. The ultimate back stress is also determined for the present combined kinematic hardening models. Since a kinematic hardening rule is assumed in the finite deformation regime, the stress rate is co-rotated with respect to the spin of substructure obtained by incorporating the plastic spin concept. A comparison of the various co-rotational rates is also included. Assuming rigid plasticity, the continuum body consists of the elastic deformation zone and the plastic deformation zone to form a hybrid finite element formulation. Then, the plastic deformation behavior is investigated under various loading conditions with an assumption of the J2 deformation theory. The plastic deformation localization turns out to be strongly dependent on the description of back stress evolution and its associated hardening parameters. The analysis for the shear deformation with fixed boundaries is carried out to examine the deformation localization behavior and the evolution of state variables.

  19. Limited potential for adaptation to climate change in a broadly distributed marine crustacean.

    PubMed

    Kelly, Morgan W; Sanford, Eric; Grosberg, Richard K

    2012-01-22

    The extent to which acclimation and genetic adaptation might buffer natural populations against climate change is largely unknown. Most models predicting biological responses to environmental change assume that species' climatic envelopes are homogeneous both in space and time. Although recent discussions have questioned this assumption, few empirical studies have characterized intraspecific patterns of genetic variation in traits directly related to environmental tolerance limits. We test the extent of such variation in the broadly distributed tidepool copepod Tigriopus californicus using laboratory rearing and selection experiments to quantify thermal tolerance and scope for adaptation in eight populations spanning more than 17° of latitude. Tigriopus californicus exhibit striking local adaptation to temperature, with less than 1 per cent of the total quantitative variance for thermal tolerance partitioned within populations. Moreover, heat-tolerant phenotypes observed in low-latitude populations cannot be achieved in high-latitude populations, either through acclimation or 10 generations of strong selection. Finally, in four populations there was no increase in thermal tolerance between generations 5 and 10 of selection, suggesting that standing variation had already been depleted. Thus, plasticity and adaptation appear to have limited capacity to buffer these isolated populations against further increases in temperature. Our results suggest that models assuming a uniform climatic envelope may greatly underestimate extinction risk in species with strong local adaptation.

  20. Learning non-local dependencies.

    PubMed

    Kuhn, Gustav; Dienes, Zoltán

    2008-01-01

    This paper addresses the nature of the temporary storage buffer used in implicit or statistical learning. Kuhn and Dienes [Kuhn, G., and Dienes, Z. (2005). Implicit learning of nonlocal musical rules: implicitly learning more than chunks. Journal of Experimental Psychology-Learning Memory and Cognition, 31(6) 1417-1432] showed that people could implicitly learn a musical rule that was solely based on non-local dependencies. These results seriously challenge models of implicit learning that assume knowledge merely takes the form of linking adjacent elements (chunking). We compare two models that use a buffer to allow learning of long distance dependencies, the Simple Recurrent Network (SRN) and the memory buffer model. We argue that these models - as models of the mind - should not be evaluated simply by fitting them to human data but by determining the characteristic behaviour of each model. Simulations showed for the first time that the SRN could rapidly learn non-local dependencies. However, the characteristic performance of the memory buffer model rather than the SRN more closely matched how people came to like different musical structures. We conclude that the SRN is more powerful than previous demonstrations have shown, but its flexible learned buffer does not explain people's implicit learning (at least, the affective learning of musical structures) as well as fixed memory buffer models do.

  1. Stimulated luminescence emission from localized recombination in randomly distributed defects.

    PubMed

    Jain, Mayank; Guralnik, Benny; Andersen, Martin Thalbitzer

    2012-09-26

    We present a new kinetic model describing localized electronic recombination through the excited state of the donor (d) to an acceptor (a) centre in luminescent materials. In contrast to the existing models based on the localized transition model (LTM) of Halperin and Braner (1960 Phys. Rev. 117 408-15) which assumes a fixed d → a tunnelling probability for the entire crystal, our model is based on nearest-neighbour recombination within randomly distributed centres. Such a random distribution can occur through the entire volume or within the defect complexes of the dosimeter, and implies that the tunnelling probability varies with the donor-acceptor (d-a) separation distance. We first develop an 'exact kinetic model' that incorporates this variation in tunnelling probabilities, and evolves both in spatial as well as temporal domains. We then develop a simplified one-dimensional, semi-analytical model that evolves only in the temporal domain. An excellent agreement is observed between thermally and optically stimulated luminescence (TL and OSL) results produced from the two models. In comparison to the first-order kinetic behaviour of the LTM of Halperin and Braner (1960 Phys. Rev. 117 408-15), our model results in a highly asymmetric TL peak; this peak can be understood to derive from a continuum of several first-order TL peaks. Our model also shows an extended power law behaviour for OSL (or prompt luminescence), which is expected from localized recombination mechanisms in materials with random distribution of centres.
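
    The 'continuum of first-order decays' picture can be illustrated numerically: draw the donor-acceptor distance from the nearest-neighbour distribution for randomly placed acceptors and let each distance decay at its own tunnelling rate. The sketch below does this for prompt luminescence; the functional forms are the standard ones for nearest-neighbour tunnelling and the parameter values are illustrative, not taken from the paper.

      import numpy as np

      def decay_curve(t, rho=1e24, a=0.2e-9, s=3e15, rmax=20e-9, nr=4000):
          """Prompt luminescence from nearest-neighbour tunnelling recombination.

          Each donor recombines with its nearest acceptor at rate A(r) = s*exp(-r/a);
          r follows the nearest-neighbour distribution for randomly placed acceptors
          of density rho, P(r) = 4*pi*rho*r^2*exp(-(4/3)*pi*rho*r^3).  The total
          signal is a continuum of first-order decays, not a single exponential.
          """
          r = np.linspace(1e-12, rmax, nr)
          dr = r[1] - r[0]
          P = 4 * np.pi * rho * r**2 * np.exp(-(4.0 / 3.0) * np.pi * rho * r**3)
          A = s * np.exp(-r / a)
          # L(t) = integral over r of P(r) * A(r) * exp(-A(r) * t)
          return np.sum(P * A * np.exp(-A[None, :] * t[:, None]), axis=1) * dr

      t = np.logspace(-6, 4, 200)   # seconds
      L = decay_curve(t)
      print(L[0], L[100], L[-1])    # falls off roughly as a power law over many decades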

  2. A Spalart-Allmaras local correlation-based transition model for Thermo-fluid dynamics

    NASA Astrophysics Data System (ADS)

    D'Alessandro, V.; Garbuglia, F.; Montelpare, S.; Zoppi, A.

    2017-11-01

    The study of innovative energy systems often involves complex fluid flow problems, and Computational Fluid Dynamics (CFD) is one of the main tools of analysis. It is worth emphasizing that in several energy systems the flow field experiences laminar-to-turbulent transition. Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) can predict the flow transition, but they are still inapplicable to the study of real problems due to their significant computational resource requirements. Conversely, standard Reynolds Averaged Navier-Stokes (RANS) approaches are not always reliable, since they assume a fully turbulent regime. To overcome this drawback, in recent years some locally formulated transition RANS models have been developed. In this work, we present a local correlation-based transition approach that adds two equations controlling the laminar-to-turbulent transition process, γ and Re_θ,t, to the well-known Spalart-Allmaras (SA) turbulence model. The new model was implemented within the OpenFOAM code. The energy equation is also implemented in order to evaluate the model performance in thermal-fluid dynamics applications. In all the considered cases a very good agreement between numerical and experimental data was observed.

  3. Electron spin resonance in YbRh2Si2: local-moment, unlike-spin and quasiparticle descriptions.

    PubMed

    Huber, D L

    2012-06-06

    Electron spin resonance (ESR) in the Kondo lattice compound YbRh(2)Si(2) has stimulated discussion as to whether the low-field resonance outside the Fermi liquid regime in this material is more appropriately characterized as a local-moment phenomenon or one that requires a Landau quasiparticle interpretation. In earlier work, we outlined a collective mode approach to the ESR that involves only the local 4f moments. In this paper, we extend the collective mode approach to a situation where there are two subsystems of unlike spins: the pseudospins of the ground multiplet of the Yb ions and the spins of the itinerant conduction electrons. We assume a weakly anisotropic exchange interaction between the two subsystems. With suitable approximations our expression for the g-factor also reproduces that found in recent unlike-spin quasiparticle calculations. It is pointed out that the success of the local-moment approach in describing the resonance is due to the fact that the susceptibility of the Yb subsystem dominates that of the conduction electrons with the consequence that the relative shift in the resonance frequency predicted by the unlike-spin models (and absent in the local-moment models) is ≪ 1. The connection with theoretical studies of a two-component model with like spins is also discussed.

  4. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    PubMed

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.

  5. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). DNSM is formed by disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.

  6. Numerical Estimation of the Curvature of Biological Surfaces

    NASA Technical Reports Server (NTRS)

    Todd, P. H.

    1985-01-01

    Many biological systems may profitably be studied as surface phenomena. A model consisting of isotropic growth of a curved surface from a flat sheet is assumed. With such a model, the Gaussian curvature of the final surface determines whether growth rate of the surface is subharmonic or superharmonic. These properties correspond to notions of convexity and concavity, and thus to local excess growth and local deficiency of growth. In biological models where the major factors controlling surface growth are intrinsic to the surface, researchers thus gained from geometrical study information on the differential growth undergone by the surface. These ideas were applied to an analysis of the folding of the cerebral cortex, a geometrically rather complex surface growth. A numerical surface curvature technique based on an approximation to the Dupin indicatrix of the surface was developed. A metric for comparing curvature estimates is introduced, and considerable numerical testing indicated the reliability of this technique.
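
    A closely related (though not identical) numerical approach to the Dupin-indicatrix technique described above is to fit a local quadratic (Monge) patch to a neighbourhood of surface points and evaluate the Gaussian curvature from the fitted second-order coefficients, as in the sketch below; the local frame is assumed to be roughly aligned with the surface normal.

      import numpy as np

      def gaussian_curvature(points):
          """Estimate the Gaussian curvature at the first point of a neighbourhood.

          A quadratic Monge patch z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f is fitted
          in a frame centred on the point, and K is computed from the fitted first
          and second derivatives."""
          p = points - points[0]            # local frame; z assumed roughly normal
          x, y, z = p[:, 0], p[:, 1], p[:, 2]
          A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
          a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
          fxx, fxy, fyy, fx, fy = 2 * a, b, 2 * c, d, e
          return (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2) ** 2

      # sanity check on a sphere of radius R: K should be close to 1/R^2
      R = 2.0
      u = np.linspace(-0.3, 0.3, 7)
      X, Y = np.meshgrid(u, u)
      Z = np.sqrt(R**2 - X**2 - Y**2) - R
      pts = np.vstack([[0.0, 0.0, 0.0],
                       np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])])
      print(gaussian_curvature(pts), 1 / R**2)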

  7. Cosmic bulk flow and the local motion from Cosmicflows-2

    NASA Astrophysics Data System (ADS)

    Hoffman, Yehuda; Courtois, Hélène M.; Tully, R. Brent

    2015-06-01

    Full sky surveys of peculiar velocity are arguably the best way to map the large-scale structure (LSS) out to distances of a few × 100 h⁻¹ Mpc. Using the largest and most accurate ever catalogue of galaxy peculiar velocities Cosmicflows-2, the LSS has been reconstructed by means of the Wiener filter (WF) and constrained realizations (CRs) assuming as a Bayesian prior model the Λ cold dark matter model with the WMAP inferred cosmological parameters. This paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R = 500 h⁻¹ Mpc. The estimated LSS, in general, and the bulk flow, in particular, are determined by the tension between the observational data and the assumed prior model. A pre-requisite for such an analysis is the requirement that the estimated bulk flow is consistent with the prior model. Such a consistency is found here. At R = 50 (150) h⁻¹ Mpc, the estimated bulk velocity is 250 ± 21 (239 ± 38) km s⁻¹. The corresponding cosmic variance at these radii is 126 (60) km s⁻¹, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ≈ 200 h⁻¹ Mpc, where the cosmic variance on the individual supergalactic Cartesian components (of the rms values) exceeds the variance of the CRs by at least a factor of 2. The SGX and SGY components of the cosmic microwave background dipole velocity are recovered by the WF velocity field down to a very few km s⁻¹. The SGZ component of the estimated velocity, the one that is most affected by the zone of avoidance, is off by 126 km s⁻¹ (an almost 2σ discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias and very similar results are obtained for the data with and without the bias correction.
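
    The top-hat bulk-flow statistic itself is simple to state: the mean peculiar velocity of everything inside a sphere of radius R centred on the observer. The sketch below computes it for a toy tracer field with an imposed 250 km s⁻¹ coherent flow; the volume weighting and the WF/CR reconstruction machinery of the paper are not reproduced.

      import numpy as np

      def bulk_flow(positions, velocities, radius):
          """Mean peculiar velocity of all tracers inside a top-hat sphere of the
          given radius centred on the observer at the origin."""
          inside = np.linalg.norm(positions, axis=1) <= radius
          return velocities[inside].mean(axis=0)

      rng = np.random.default_rng(3)
      pos = rng.uniform(-500.0, 500.0, size=(200000, 3))            # h^-1 Mpc
      vel = np.array([250.0, 0.0, 0.0]) + rng.normal(0.0, 300.0, size=(200000, 3))
      for R in (50.0, 150.0, 500.0):
          print(R, bulk_flow(pos, vel, R))       # recovers the imposed ~250 km/s flow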

  8. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
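
    A minimal version of the inverse problem, with a homogeneous 1-D advection-dispersion model standing in for the paper's heterogeneous-aquifer simulations and machine-optimization workflow, is sketched below: synthetic breakthrough curves at a few wells are fit for the space-time release point by minimizing the misfit. Velocity, dispersion coefficient, and well layout are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      def ade_conc(x, t, x0, t0, v=1.0, D=0.5, mass=1.0):
          """1-D advection-dispersion solution for an instantaneous point release
          of mass `mass` (per unit cross-section) at location x0 and time t0."""
          x = np.asarray(x, float)
          t = np.asarray(t, float)
          tau = np.maximum(t - t0, 1e-12)       # guard against division by zero
          c = mass / np.sqrt(4 * np.pi * D * tau) * np.exp(
              -(x - x0 - v * tau) ** 2 / (4 * D * tau))
          return np.where(t > t0, c, 0.0)

      # synthetic breakthrough data at three wells; true source at x0 = -3, t0 = 2
      wells = np.array([0.0, 5.0, 10.0])
      times = np.linspace(0.1, 30.0, 60)
      X, T = np.meshgrid(wells, times)
      rng = np.random.default_rng(4)
      obs = ade_conc(X, T, x0=-3.0, t0=2.0) + rng.normal(0.0, 0.002, X.shape)

      def misfit(p):
          return np.sum((ade_conc(X, T, p[0], p[1]) - obs) ** 2)

      best = minimize(misfit, x0=np.array([0.0, 0.0]), method='Nelder-Mead')
      print(best.x)                              # estimated (x0, t0) of the release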

  9. Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution

    NASA Astrophysics Data System (ADS)

    Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi

    2018-05-01

    In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
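
    The dynamics described above are easy to simulate: each generation the 2x2 payoff matrix is perturbed by noise, fitness is the exponential of the expected payoff, and the phenotype frequency is updated by discrete-generation selection. The sketch below does exactly that for a payoff matrix with an interior equilibrium; it illustrates the setting only and does not reproduce the paper's analytical stochastic-stability conditions.

      import numpy as np

      def simulate(x0, mean_payoff, noise_sd, generations=2000, seed=5):
          """Frequency of phenotype 1 over discrete, nonoverlapping generations when
          fitness is exp(expected payoff) and the payoff matrix fluctuates randomly."""
          rng = np.random.default_rng(seed)
          x = np.empty(generations + 1)
          x[0] = x0
          for t in range(generations):
              A = mean_payoff + rng.normal(0.0, noise_sd, size=(2, 2))  # random payoffs
              f1 = A[0, 0] * x[t] + A[0, 1] * (1 - x[t])   # expected payoff, phenotype 1
              f2 = A[1, 0] * x[t] + A[1, 1] * (1 - x[t])   # expected payoff, phenotype 2
              w1, w2 = np.exp(f1), np.exp(f2)              # exponential fitness
              x[t + 1] = x[t] * w1 / (x[t] * w1 + (1 - x[t]) * w2)
          return x

      mean_payoff = np.array([[0.0, 1.0], [1.0, 0.0]])     # interior equilibrium at 0.5
      traj = simulate(0.9, mean_payoff, noise_sd=0.3)
      print(traj[-5:])                                     # hovers around x* = 0.5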

  10. Dark localized structures in a cavity filled with a left-handed material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tlidi, Mustapha; Kockaert, Pascal; Gelens, Lendert

    2011-07-15

    We consider a nonlinear passive optical cavity filled with left-handed and right-handed materials and driven by a coherent injected beam. We assume that both left-handed and right-handed materials possess a Kerr focusing type of nonlinearity. We show that close to the zero-diffraction regime, high-order diffraction allows us to stabilize dark localized structures in this device. These structures consist of dips in the transverse profile of the intracavity field and do not exist without high-order diffraction. We analyze the snaking bifurcation diagram associated with these structures. Finally, a realistic estimation of the model parameters is provided.

  11. Structural Analysis of the Redesigned Ice/Frost Ramp Bracket

    NASA Technical Reports Server (NTRS)

    Phillips, D. R.; Dawicke, D. S.; Gentz, S. J.; Roberts, P. W.; Raju, I. S.

    2007-01-01

    This paper describes the interim structural analysis of a redesigned Ice/Frost Ramp bracket for the Space Shuttle External Tank (ET). The proposed redesigned bracket consists of mounts for attachment to the ET wall, supports for the electronic/instrument cables and propellant repressurization lines that run along the ET, an upper plate, a lower plate, and complex bolted connections. The eight nominal bolted connections are considered critical in the summarized structural analysis. Each bolted connection contains a bolt, a nut, four washers, and a non-metallic spacer and block that are designed for thermal insulation. A three-dimensional (3D) finite element model of the bracket is developed using solid 10-node tetrahedral elements. The loading provided by the ET Project is used in the analysis. Because of the complexities associated with accurately modeling the bolted connections in the bracket, the analysis is performed using a global/local analysis procedure. The finite element analysis of the bracket identifies one of the eight bolted connections as having high stress concentrations. A local area of the bracket surrounding this bolted connection is extracted from the global model and used as a local model. Within the local model, the various components of the bolted connection are refined, and contact is introduced along the appropriate interfaces determined by the analysts. The deformations from the global model are applied as boundary conditions to the local model. The results from the global/local analysis show that while the stresses in the bolts are well within yield, the spacers fail due to compression. The primary objective of the interim structural analysis is to show concept viability for static thermal testing. The proposed design concept would undergo continued design optimization to address the identified analytical assumptions and concept shortcomings, assuming successful thermal testing.

  12. Detecting local diversity-dependence in diversification.

    PubMed

    Xu, Liang; Etienne, Rampal S

    2018-04-06

    Whether there are ecological limits to species diversification is a hotly debated topic. Molecular phylogenies show slowdowns in lineage accumulation, suggesting that speciation rates decline with increasing diversity. A maximum-likelihood (ML) method to detect diversity-dependent (DD) diversification from phylogenetic branching times exists, but it assumes that diversity-dependence is a global phenomenon and therefore ignores that the underlying species interactions are mostly local, and not all species in the phylogeny co-occur locally. Here, we explore whether this ML method based on the nonspatial diversity-dependence model can detect local diversity-dependence, by applying it to phylogenies, simulated with a spatial stochastic model of local DD speciation, extinction, and dispersal between two local communities. We find that type I errors (falsely detecting diversity-dependence) are low, and the power to detect diversity-dependence is high when dispersal rates are not too low. Interestingly, when dispersal is high the power to detect diversity-dependence is even higher than in the nonspatial model. Moreover, estimates of intrinsic speciation rate, extinction rate, and ecological limit strongly depend on dispersal rate. We conclude that the nonspatial DD approach can be used to detect diversity-dependence in clades of species that live in not too disconnected areas, but parameter estimates must be interpreted cautiously. © 2018 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  13. Accounting for spatial variation of trabecular anisotropy with subject-specific finite element modeling moderately improves predictions of local subchondral bone stiffness at the proximal tibia.

    PubMed

    Nazemi, S Majid; Kalajahi, S Mehrdad Hosseini; Cooper, David M L; Kontulainen, Saija A; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-07-05

    Previously, a finite element (FE) model of the proximal tibia was developed and validated against experimentally measured local subchondral stiffness. This model indicated modest predictions of stiffness (R² = 0.77, normalized root mean squared error (RMSE%) = 16.6%). Trabecular bone though was modeled with isotropic material properties despite its orthotropic anisotropy. The objective of this study was to identify the anisotropic FE modeling approach which best predicted (with largest explained variance and least amount of error) local subchondral bone stiffness at the proximal tibia. Local stiffness was measured at the subchondral surface of 13 medial/lateral tibial compartments using in situ macro indentation testing. An FE model of each specimen was generated assuming uniform anisotropy with 14 different combinations of cortical- and tibial-specific density-modulus relationships taken from the literature. Two FE models of each specimen were also generated which accounted for the spatial variation of trabecular bone anisotropy directly from clinical CT images using grey-level structure tensor and Cowin's fabric-elasticity equations. Stiffness was calculated using FE and compared to measured stiffness in terms of R² and RMSE%. The uniform anisotropic FE model explained 53-74% of the measured stiffness variance, with RMSE% ranging from 12.4 to 245.3%. The models which accounted for spatial variation of trabecular bone anisotropy predicted 76-79% of the variance in stiffness with RMSE% being 11.2-11.5%. Of the 16 evaluated finite element models in this study, the combination of Synder and Schneider (for cortical bone) and Cowin's fabric-elasticity equations (for trabecular bone) best predicted local subchondral bone stiffness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A methodology to predict damage initiation, damage growth and residual strength in titanium matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G., Jr.; Johnson, W. S.

    1994-01-01

    In this research, a methodology to predict damage initiation, damage growth, fatigue life, and residual strength in titanium matrix composites (TMC) is outlined. Emphasis was placed on micromechanics-based engineering approaches. Damage initiation was predicted using a local effective strain approach. A finite element analysis verified the prevailing assumptions made in the formulation of this model. Damage growth, namely, fiber-bridged matrix crack growth, was evaluated using a fiber bridging (FB) model which accounts for thermal residual stresses. This model combines continuum fracture mechanics and micromechanics analyses yielding stress-intensity factor solutions for fiber-bridged matrix cracks. It is assumed in the FB model that fibers in the wake of the matrix crack are idealized as a closure pressure, and an unknown constant frictional shear stress is assumed to act along the debond length of the bridging fibers. This frictional shear stress was used as a curve fitting parameter to the available experimental data. Fatigue life and post-fatigue residual strength were predicted based on the axial stress in the first intact 0 degree fiber calculated using the FB model and a three-dimensional finite element analysis.

  15. Mathematical Model for the Mineralization of Bone

    NASA Technical Reports Server (NTRS)

    Martin, Bruce

    1994-01-01

    A mathematical model is presented for the transport and precipitation of mineral in refilling osteons. One goal of this model was to explain calcification 'halos,' in which the bone near the haversian canal is more highly mineralized than the more peripheral lamellae, which have been mineralizing longer. It was assumed that the precipitation rate of mineral is proportional to the difference between the local concentration of calcium ions and an equilibrium concentration and that the transport of ions is by either diffusion or some other concentration gradient-dependent process. Transport of ions was assumed to be slowed by the accumulation of mineral in the matrix along the transport path. The model also mimics bone apposition, slowing of apposition during refilling, and mineralization lag time. It was found that simple diffusion cannot account for the transport of calcium ions into mineralizing bone, because the diffusion coefficient is two orders of magnitude too low. If a more rapid concentration gradient-driven means of transport exists, the model demonstrates that osteonal geometry and variable rate of refilling work together to produce calcification halos, as well as the primary and secondary calcification effect reported in the literature.
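
    A schematic of the mechanism described above, assuming illustrative parameter values and simple functional forms rather than the paper's calibrated model: ions enter at the canal, move down the concentration gradient with a transport coefficient that drops as mineral accumulates along the path, and precipitate in proportion to the excess over an equilibrium concentration, so mineral piles up preferentially near the canal, the qualitative 'halo' effect.

      import numpy as np

      def mineralize(nx=100, nt=20000, dx=1.0, dt=0.01,
                     D0=1.0, k=0.02, c_eq=0.2, c_canal=1.0, alpha=5.0):
          """Explicit finite-difference sketch: c is the ion concentration, m the
          accumulated mineral.  Ions enter at x = 0 (the canal), spread down the
          concentration gradient with a coefficient that drops as mineral builds up,
          and precipitate at rate k * (c - c_eq) wherever c exceeds c_eq."""
          c = np.zeros(nx)
          m = np.zeros(nx)
          for _ in range(nt):
              c[0] = c_canal                              # canal keeps supplying ions
              D = D0 / (1.0 + alpha * m)                  # transport slowed by mineral
              flux = -0.5 * (D[1:] + D[:-1]) * np.diff(c) / dx
              dc = np.zeros(nx)
              dc[1:-1] = -np.diff(flux) / dx
              precip = k * np.clip(c - c_eq, 0.0, None)   # precipitation sink
              c = c + dt * (dc - precip)
              m = m + dt * precip
          return m

      m = mineralize()
      print(m[:5])     # mineral accumulates fastest near the canal (the 'halo')
      print(m[-5:])    # and is still negligible at the periphery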

  16. Goal oriented soil mapping: applying modern methods supported by local knowledge: A review

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr

    2017-04-01

    In recent years the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, which are important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement the new advances in soil analysis. In addition, farmers have the greatest interest in the participation and incorporation of their knowledge in the models, since they are the end-users of the studies that soil scientists produce. Integration of local communities' vision and understanding of nature is assumed to be an important step towards the implementation of decision makers' policies. Despite this, many challenges arise regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil methods have incorporated local knowledge in their models. References: Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006

  17. The 60 GHz radiometric local vertical sensor experiment

    NASA Technical Reports Server (NTRS)

    Grauling, C. H., Jr.

    1973-01-01

    The experiment concept involves the use of millimeter wave radiation from atmospheric oxygen to provide vertical sensing information to a satellite-borne radiometer. The radiance profile studies require the calculation of ray brightness temperature as a function of tangential altitude and atmosphere model, and the computer program developed for this purpose is discussed. Detailed calculations have been made for a total of 12 atmosphere models, including some showing severe warming conditions. The experiment system analysis investigates the effect of various design choices on system behavior. Calculated temperature profiles are presented for a wide variety of frequencies, bandwidths, and atmosphere models. System performance is determined by the convolution of the brightness temperature and an assumed antenna pattern. A compensation scheme to account for different plateau temperatures is developed and demonstrated. The millimeter wave components developed for the local vertical sensor are discussed, with emphasis on the antenna, low noise mixer, and solid state local oscillator. It was concluded that a viable sensing technique exists, useful over a wide range of altitudes with an accuracy generally on the order of 0.01 degree or better.

  18. Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics

    NASA Technical Reports Server (NTRS)

    Roe, P. L.

    1984-01-01

    A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.
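
    For reference, the one-dimensional building block that such schemes generalize is the classic first-order upwind update, sketched below for linear advection with positive wave speed; the multidimensional plane-wave superposition of the paper is not reproduced here.

      import numpy as np

      def upwind_advection(u0, a, dx, dt, steps):
          """First-order upwind update for u_t + a*u_x = 0 with a > 0 and periodic
          boundaries: each cell takes its information from the upwind neighbour."""
          u = u0.copy()
          cfl = a * dt / dx                    # stable for 0 <= cfl <= 1
          for _ in range(steps):
              u = u - cfl * (u - np.roll(u, 1))
          return u

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.exp(-200.0 * (x - 0.3) ** 2)     # Gaussian pulse
      u = upwind_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
      print(x[np.argmax(u)])                   # pulse has advected about 0.4 to the right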

  19. Observability-based Local Path Planning and Collision Avoidance Using Bearing-only Measurements

    DTIC Science & Technology

    2012-01-20

    Clark N. Taylor (Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah, 84602; Sensors Directorate, Air Force Research...). v_it is the measurement noise, which is assumed to be a zero-mean Gaussian random variable. Based on the state transition model expressed by Eqs. (1

  20. Modelling Metrics for Mine Counter Measure Operations

    DTIC Science & Technology

    2014-08-01

    A random search derived by Koopman is widely used, yet it assumes no angular dependence (Ref [10]). In a series of publications considering tactics... Node Placement in Sensor Localization by Optimization of Subspace Principal Angles, in Proceedings of the IEEE International Conference on Acoustics

  1. Leakage and spillover effects of forest management on carbon storage: theoretical insights from a simple model

    NASA Astrophysics Data System (ADS)

    Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco

    2009-04-01

    Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.

  2. Models of recurrent strike-slip earthquake cycles and the state of crustal stress

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.

    1991-01-01

    Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.

  3. Petrological Geodynamics of Mantle Melting II. AlphaMELTS + Multiphase Flow: Dynamic Fractional Melting

    NASA Astrophysics Data System (ADS)

    Tirone, Massimiliano

    2018-03-01

    In this second installment of a series that aims to investigate the dynamic interaction between the composition and abundance of the solid mantle and its melt products, the classic interpretation of fractional melting is extended to account for the dynamic nature of the process. A multiphase numerical flow model is coupled with the program AlphaMELTS, which currently provides perhaps the most accurate petrological description of melting based on thermodynamic principles. The conceptual idea of this study is a description of the melting process taking place along an idealized 1-D vertical column where chemical equilibrium is assumed to apply separately in two local sub-systems on some spatial and temporal scale. The solid mantle belongs to a local sub-system (ss1) that does not interact chemically with the melt reservoir, which forms a second sub-system (ss2). The local melt products are transferred into the melt sub-system ss2, where the melt phase can eventually also crystallize into a different solid assemblage and will evolve dynamically. The main difference from the usual interpretation of fractional melting is that melt is not arbitrarily and instantaneously extracted from the mantle, but instead remains a dynamic component of the model; hence the process is named dynamic fractional melting (DFM). Some of the conditions that may affect the DFM model are investigated in this study, in particular the effects of temperature and of the mantle velocity at the boundary of the mantle column. A comparison is made with the dynamic equilibrium melting (DEM) model discussed in the first installment. The implications of assuming passive flow or active flow are also considered to some extent. Complete data files of most of the DFM simulations, four animations, and two new DEM simulations (passive/active flow) are available following the instructions in the supplementary material.

  4. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  5. Coral reef degradation is not correlated with local human population density

    NASA Astrophysics Data System (ADS)

    Bruno, John F.; Valdivia, Abel

    2016-07-01

    The global decline of reef-building corals is understood to be due to a combination of local and global stressors. However, many reef scientists assume that local factors predominate and that isolated reefs, far from human activities, are generally healthier and more resilient. Here we show that coral reef degradation is not correlated with human population density. This suggests that local factors such as fishing and pollution are having minimal effects or that their impacts are masked by global drivers such as ocean warming. Our results also suggest that the effects of local and global stressors are antagonistic, rather than synergistic as widely assumed. These findings indicate that local management alone cannot restore coral populations or increase the resilience of reefs to large-scale impacts. They also highlight the truly global reach of anthropogenic warming and the immediate need for drastic and sustained cuts in carbon emissions.

  6. Coral reef degradation is not correlated with local human population density.

    PubMed

    Bruno, John F; Valdivia, Abel

    2016-07-20

    The global decline of reef-building corals is understood to be due to a combination of local and global stressors. However, many reef scientists assume that local factors predominate and that isolated reefs, far from human activities, are generally healthier and more resilient. Here we show that coral reef degradation is not correlated with human population density. This suggests that local factors such as fishing and pollution are having minimal effects or that their impacts are masked by global drivers such as ocean warming. Our results also suggest that the effects of local and global stressors are antagonistic, rather than synergistic as widely assumed. These findings indicate that local management alone cannot restore coral populations or increase the resilience of reefs to large-scale impacts. They also highlight the truly global reach of anthropogenic warming and the immediate need for drastic and sustained cuts in carbon emissions.

  7. Coral reef degradation is not correlated with local human population density

    PubMed Central

    Bruno, John F.; Valdivia, Abel

    2016-01-01

    The global decline of reef-building corals is understood to be due to a combination of local and global stressors. However, many reef scientists assume that local factors predominate and that isolated reefs, far from human activities, are generally healthier and more resilient. Here we show that coral reef degradation is not correlated with human population density. This suggests that local factors such as fishing and pollution are having minimal effects or that their impacts are masked by global drivers such as ocean warming. Our results also suggest that the effects of local and global stressors are antagonistic, rather than synergistic as widely assumed. These findings indicate that local management alone cannot restore coral populations or increase the resilience of reefs to large-scale impacts. They also highlight the truly global reach of anthropogenic warming and the immediate need for drastic and sustained cuts in carbon emissions. PMID:27435659

  8. Potential of pressure solution for strain localization in the Baccu Locci Shear Zone (Sardinia, Italy)

    NASA Astrophysics Data System (ADS)

    Casini, Leonardo; Funedda, Antonio

    2014-09-01

    The mylonites of the Baccu Locci Shear Zone (BLSZ), Sardinia (Italy), were deformed during thrusting along a bottom-to-top strain gradient in lower greenschist facies. The microstructure of metavolcanic protoliths shows evidence for composite deformation accommodated by dislocation creep within strong quartz porphyroclasts, and pressure solution in the finer grained matrix. The evolution of mylonite is simulated in two sets of numerical experiments, assuming either a constant width of the deforming zone (model 1) or a narrowing shear zone (model 2). A 2–5 mm y⁻¹ constant-external-velocity boundary condition is applied on the basis of geologic constraints. Inputs to the models are provided by inverting paleostress values obtained from quartz recrystallized grain-size paleopiezometry. Both models predict a significant stress drop across the shear zone. However, model 1 involves a dramatic decrease in strain rate towards the zone of apparent strain localization. In contrast, model 2 predicts an increase in strain rate with time (from 10⁻¹⁴ to 10⁻¹² s⁻¹), which is consistent with stabilization of the shear zone profile and localization of deformation near the hanging wall. Extrapolating these results to the general context of crust strength suggests that pressure-solution creep may be a critical process for strain softening and for the stabilization of deformation within shear zones.

  9. Local structure controls the nonaffine shear and bulk moduli of disordered solids

    NASA Astrophysics Data System (ADS)

    Schlegel, M.; Brujic, J.; Terentjev, E. M.; Zaccone, A.

    2016-01-01

    Paradigmatic model systems, which are used to study the mechanical response of matter, are random networks of point-atoms, random sphere packings, or simple crystal lattices; all of these models assume central-force interactions between particles/atoms. Each of these models differs in the spatial arrangement and the correlations among particles. In turn, this is reflected in the widely different behaviours of the shear (G) and compression (K) elastic moduli. The relation between the macroscopic elasticity as encoded in G, K and their ratio, and the microscopic lattice structure/order, is not understood. We provide a quantitative analytical connection between the local orientational order and the elasticity in model amorphous solids with different internal microstructure, focusing on the two opposite limits of packings (strong excluded-volume) and networks (no excluded-volume). The theory predicts that, in packings, the local orientational order due to excluded-volume causes less nonaffinity (less softness or larger stiffness) under compression than under shear. This leads to lower values of G/K, a well-documented phenomenon which was lacking a microscopic explanation. The theory also provides an excellent one-parameter description of the elasticity of compressed emulsions in comparison with experimental data over a broad range of packing fractions.

  10. Field Extension of Real Values of Physical Observables in Classical Theory can Help Attain Quantum Results

    NASA Astrophysics Data System (ADS)

    Wang, Hai; Kumar, Asutosh; Cho, Minhyung; Wu, Junde

    2018-04-01

    Physical quantities are assumed to take real values, which stems from the fact that an ordinary measuring instrument that measures a physical observable always yields a real number. Here we consider the question of what would happen if physical observables were allowed to assume complex values. In this paper, we show that by allowing observables in the Bell inequality to take complex values, a classical physical theory can actually attain the same upper bound of the Bell expression as quantum theory. Also, by extending the real field to the quaternionic field, we can resolve the GHZ problem using a local hidden-variable model. Furthermore, we try to build a new type of hidden-variable theory of a single qubit based on this result.

  11. Analysis of the M-shell spectra emitted by a short-pulse laser-created tantalum plasma

    PubMed

    Busquet; Jiang; Côté CY; Kieffer; Klapisch; Bar-Shalom; Bauche-Arnoult; Bachelier

    2000-01-01

    The spectrum of tantalum emitted by a subpicosecond laser-created plasma was recorded in the regions of the 3d-5f, 3d-4f, and 3d-4p transitions. The main difference from a nanosecond laser-created plasma spectrum is a broad understructure appearing under the 3d-5f transitions. An interpretation of this feature as a density effect is proposed. The supertransition array model is used for interpreting the spectrum, assuming local thermodynamic equilibrium (LTE) at some effective temperature. An interpretation of the 3d-4f spectrum using the more detailed unresolved transition array formalism, which does not assume LTE, is also proposed. Fitted contributions of the different ionic species differ slightly from the LTE-predicted values.

  12. Multilevel selection in a resource-based model.

    PubMed

    Ferreira, Fernando Fagundes; Campos, Paulo R A

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics are driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish itself in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results are consistent with the view that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  13. Relativistic Nonlocality and the EPR Paradox

    NASA Astrophysics Data System (ADS)

    Chamberlain, Thomas

    2014-03-01

    The exact violation of Bell's Inequalities is obtained with a local realistic model for spin. The model treats one particle that comprises a quantum ensemble and simulates the EPR data one coincidence at a time as a product state. Such a spin is represented by the operators σx, iσy, σz in its body frame rather than the usual set σX, σY, σZ in the laboratory frame. This model, assumed valid in the absence of a measuring probe, contains both quantum polarizations and coherences. Each carries half the EPR correlation, but only half can be measured using coincidence techniques. The model further predicts the filter angles that maximize the spin correlation in EPR experiments.

  14. Running of featureful primordial power spectra

    NASA Astrophysics Data System (ADS)

    Gariazzo, Stefano; Mena, Olga; Miralles, Victor; Ramírez, Héctor; Boubekeur, Lotfi

    2017-06-01

    Current measurements of the temperature and polarization anisotropy power spectra of the cosmic microwave background (CMB) seem to indicate that the naive expectation for the slow-roll hierarchy within the most simple inflationary paradigm may not be respected in nature. We show that a primordial power spectrum with localized features could in principle give rise to the observed slow-roll anarchy when fitted to a featureless power spectrum. From a model comparison perspective, and assuming that nature has chosen a featureless primordial power spectrum, we find that, while with mock Planck data there is only weak evidence against a model with localized features, upcoming CMB missions may provide compelling evidence against such a nonstandard primordial power spectrum. This evidence could be reinforced if a featureless primordial power spectrum is independently confirmed from bispectrum and/or galaxy clustering measurements.

  15. Supervised self-organization of homogeneous swarms using ergodic projections of Markov chains.

    PubMed

    Chattopadhyay, Ishanu; Ray, Asok

    2009-12-01

    This paper formulates a self-organization algorithm to address the problem of global behavior supervision in engineered swarms of arbitrarily large population sizes. The swarms considered in this paper are assumed to be homogeneous collections of independent identical finite-state agents, each of which is modeled by an irreducible finite Markov chain. The proposed algorithm computes the necessary perturbations in the local agents' behavior, which guarantees convergence to the desired observed state of the swarm. The ergodicity property of the swarm, which is induced as a result of the irreducibility of the agent models, implies that while the local behavior of the agents converges to the desired behavior only in the time average, the overall swarm behavior converges to the specification and stays there at all times. A simulation example illustrates the underlying concept.
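
    The key ingredients of this formulation, the stationary (ergodic) distribution of an irreducible agent chain and a perturbation of local agent behavior toward a desired observed swarm distribution, can be illustrated with a small sketch. The 3-state chain, target distribution and simple convex blend below are illustrative stand-ins, not the algorithm of the paper.

```python
import numpy as np

# Illustrative sketch of ergodic-projection-style swarm supervision: each agent is an
# irreducible finite Markov chain, so its long-run state occupancy is the chain's
# stationary vector; a supervisor can nudge the local transition probabilities so that
# the swarm's observed distribution approaches a desired target.

def stationary_distribution(P):
    """Stationary distribution of an irreducible row-stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi P = pi and sum(pi) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# A hypothetical 3-state agent model (row-stochastic, irreducible).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

target = np.array([0.2, 0.3, 0.5])       # desired observed swarm distribution
P_target = np.tile(target, (3, 1))       # a chain whose stationary vector is exactly `target`

# Blend the nominal agent dynamics with the target chain; epsilon sets the strength
# of the supervisory perturbation (a simple stand-in for the paper's scheme).
epsilon = 0.5
P_perturbed = (1 - epsilon) * P + epsilon * P_target

print("nominal   :", stationary_distribution(P))
print("perturbed :", stationary_distribution(P_perturbed))
```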

  16. On Local Ionization Equilibrium and Disk Winds in QSOs

    NASA Astrophysics Data System (ADS)

    Pereyra, Nicolas A.

    2014-11-01

    We present theoretical C IV λλ1548,1550 absorption line profiles for QSOs calculated assuming the accretion disk wind (ADW) scenario. The results suggest that the multiple absorption troughs seen in many QSOs may be due to the discontinuities in the ion balance of the wind (caused by X-rays), rather than discontinuities in the density/velocity structure. The profiles are calculated from a 2.5-dimensional time-dependent hydrodynamic simulation of a line-driven disk wind for a typical QSO black hole mass, a typical QSO luminosity, and for a standard Shakura-Sunyaev disk. We include the effects of ionizing X-rays originating from within the inner disk radius by assuming that the wind is shielded from the X-rays from a certain viewing angle up to 90° ("edge on"). In the shielded region, we assume constant ionization equilibrium, and thus constant line-force parameters. In the non-shielded region, we assume that both the line-force and the C IV populations are nonexistent. The model can account for P-Cygni absorption troughs (produced at edge on viewing angles), multiple absorption troughs (produced at viewing angles close to the angle that separates the shielded region and the non-shielded region), and for detached absorption troughs (produced at an angle in between the first two absorption line types); that is, the model can account for the general types of broad absorption lines seen in QSOs as a viewing angle effect. The steady nature of ADWs, in turn, may account for the steady nature of the absorption structure observed in multiple-trough broad absorption line QSOs. The model parameters are M_bh = 10⁹ M_⊙ and L_disk = 10⁴⁷ erg s⁻¹.

  17. Local Equilibrium and Retardation Revisited.

    PubMed

    Hansen, Scott K; Vesselinov, Velimir V

    2018-01-01

    In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
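
    The point can be illustrated with a minimal particle-tracking sketch, assuming made-up exchange rates and velocity: with mobile-immobile mass transfer the mean plume position matches the retarded-ADE prediction, but the exchange itself produces spreading that a retarded ADE without hydrodynamic dispersion cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mobile-immobile (MIMT) random walk vs. the retarded-ADE prediction.
# All parameter values are made up for the sketch.
v = 1.0              # pore-water velocity while mobile
lam = 0.5            # immobilization rate (1/time) while mobile
mu = 0.5             # remobilization rate (1/time) while immobile
R = 1.0 + lam / mu   # retardation factor implied by the rate ratio
T, dt = 200.0, 0.01
n_particles = 5000

x = np.zeros(n_particles)
mobile = np.ones(n_particles, dtype=bool)

for _ in range(int(T / dt)):
    x[mobile] += v * dt                                       # advection while mobile
    to_immobile = mobile & (rng.random(n_particles) < lam * dt)
    to_mobile = ~mobile & (rng.random(n_particles) < mu * dt)
    mobile[to_immobile] = False
    mobile[to_mobile] = True

print("mean position (MIMT)        :", x.mean())
print("mean position (retarded ADE):", v * T / R)
print("std  position (MIMT)        :", x.std(), " <- spreading the retarded ADE misses")
```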

  18. Numerical investigation on thermal behaviors of two-dimensional latent thermal energy storage with PCM and aluminum foam

    NASA Astrophysics Data System (ADS)

    Buonomo, B.; Ercole, D.; Manca, O.; Nardini, S.

    2017-01-01

    A numerical investigation of a Latent Heat Thermal Energy Storage System (LHTESS) based on a phase change material (PCM) is accomplished. The PCM is a pure paraffin wax with a low thermal conductivity. An aluminum metal foam is employed to enhance the PCM thermal behavior. The geometry is a vertical shell-and-tube LHTESS made with two concentric aluminum tubes. The internal surface of the hollow cylinder is assumed to be at a constant temperature above the melting temperature of the PCM, to simulate the heat transfer from a hot fluid. The external surface is assumed adiabatic. The phase change of the PCM is modelled with the enthalpy-porosity theory, while the metal foam is treated as a porous medium under the Darcy-Forchheimer assumption and the Boussinesq approximation is employed. A local thermal non-equilibrium (LTNE) model is assumed. The results are compared in terms of melting time and temperature fields as a function of time for the charging and discharging phases, for different porosities and an assigned number of pores per inch. Results show that the metal foam significantly improves the heat transfer in the LHTESS, giving a faster phase-change process than pure PCM and reducing the melting time by more than one order of magnitude.

  19. Local understandings of conservation in southeastern Mexico and their implications for community-based conservation as an alternative paradigm.

    PubMed

    Reyes-Garcia, Victoria; Ruiz-Mallen, Isabel; Porter-Bolland, Luciana; Garcia-Frapolli, Eduardo; Ellis, Edward A; Mendez, Maria-Elena; Pritchard, Diana J; Sanchez-Gonzalez, María-Consuelo

    2013-08-01

    Since the 1990s national and international programs have aimed to legitimize local conservation initiatives that might provide an alternative to the formal systems of state-managed or otherwise externally driven protected areas. We used discourse analysis (130 semistructured interviews with key informants) and descriptive statistics (679 surveys) to compare local perceptions of and experiences with state-driven versus community-driven conservation initiatives. We conducted our research in 6 communities in southeastern Mexico. Formalization of local conservation initiatives did not seem to be based on local knowledge and practices. Although interviewees thought community-based initiatives generated less conflict than state-managed conservation initiatives, the community-based initiatives conformed to the biodiversity conservation paradigm that emphasizes restricted use of and access to resources. This restrictive approach to community-based conservation in Mexico, promoted through state and international conservation organizations, increased the area of protected land and had local support but was not built on locally relevant and multifunctional landscapes, a model that community-based conservation is assumed to advance. © 2013 Society for Conservation Biology.

  20. Flow studies in canine artery bifurcations using a numerical simulation method.

    PubMed

    Xu, X Y; Collins, M W; Jones, C J

    1992-11-01

    Three-dimensional flows through canine femoral bifurcation models were predicted under physiological flow conditions by solving numerically the time-dependent three-dimensional Navier-Stokes equations. In the calculations, two models were assumed for the blood: (a) a Newtonian fluid, and (b) a non-Newtonian fluid obeying the power law. The blood vessel wall was assumed to be rigid, this being the only approximation in the prediction model. The numerical procedure utilized a finite volume approach on a finite element mesh to discretize the equations, and the code used (ASTEC) incorporated the SIMPLE velocity-pressure algorithm in performing the calculations. The predicted velocity profiles were in good qualitative agreement with the in vivo measurements recently obtained by Jones et al. The non-Newtonian effects on the bifurcation flow field were also investigated, and no great differences in velocity profiles were observed. This indicated that the non-Newtonian characteristics of the blood might not be an important factor in determining the general flow patterns for these bifurcations, but could have local significance. Current work involves modeling wall distensibility in an empirically valid manner. Predictions accommodating these effects will permit a true quantitative comparison with experiment.

  1. A spatio-temporal model of the human observer for use in display design

    NASA Astrophysics Data System (ADS)

    Bosman, Dick

    1989-08-01

    A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.

  2. Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)

    DTIC Science & Technology

    1990-03-01

    [Abstract not available; the record contains only fragments of the report's table of contents and figure list: sections on frame generation, frame delivery, model artifices, model variables, simulation results, and external procedures used in the simulation; figures on the Token Ring frame generation and delivery processes and on mean transfer delay vs. mean throughput; and a note that delay parameters assumed to be zero were replaced by the maximum values specified in the ANSI 802.3 standard.]

  3. Capturing the Elite in Marine Conservation in Northeast Kalimantan.

    PubMed

    Kusumawati, Rini; Visser, Leontine

    This article takes the existence of power networks of local elites as a social fact of fundamental importance and the starting point for the study of patronage in the governance of the coastal waters of East Kalimantan. We address the question of how to capture the elites for project implementation, rather than assuming the inevitability of elite capture of project funds. We analyze the multiple-scale networks of local power holders (punggawa) and the collaboration and friction between the political-economic interests and historical values of local actors and the scientific motivations of international environmental organizations. We describe how collaboration and friction between members of the elite challenge models that categorically exclude or co-opt local elites in foreign projects. In-depth ethnographic study of these networks shows their resilience through flows of knowledge and power in a highly volatile coastal environment. Results indicate the need for inclusion in decision making of local entrepreneurs and, indirectly, their dependents in decentralized coastal governance.

  4. Type Ia supernovae, standardizable candles, and gravity

    NASA Astrophysics Data System (ADS)

    Wright, Bill S.; Li, Baojiu

    2018-04-01

    Type Ia supernovae (SNe Ia) are generally accepted to act as standardizable candles, and their use in cosmology led to the first confirmation of the as yet unexplained accelerated cosmic expansion. Many of the theoretical models to explain the cosmic acceleration assume modifications to Einsteinian general relativity which accelerate the expansion, but the question of whether such modifications also affect the ability of SNe Ia to be standardizable candles has rarely been addressed. This paper is an attempt to answer this question. For this we adopt a semianalytical model to calculate SNe Ia light curves in non-standard gravity. We use this model to show that the average rescaled intrinsic peak luminosity—a quantity that is assumed to be constant with redshift in standard analyses of Type Ia supernova (SN Ia) cosmology data—depends on the strength of gravity in the supernova's local environment because the latter determines the Chandrasekhar mass—the mass of the SN Ia's white dwarf progenitor right before the explosion. This means that SNe Ia are no longer standardizable candles in scenarios where the strength of gravity evolves over time, and therefore the cosmology implied by the existing SN Ia data will be different when analysed in the context of such models. As an example, we show that the observational SN Ia cosmology data can be fitted both with a model where (Ω_M, Ω_Λ) = (0.62, 0.38) and Newton's constant G varies as G(z) = G₀(1+z)^(-1/4), and with the standard model where (Ω_M, Ω_Λ) = (0.3, 0.7) and G is constant, when the Universe is assumed to be flat.

  5. Long-term implications of sustained wind power growth in the United States: Potential benefits and secondary impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiser, Ryan; Bolinger, Mark; Heath, Garvin

    We model scenarios of the U.S. electric sector in which wind generation reaches 10% of end-use electricity demand in 2020, 20% in 2030, and 35% in 2050. As shown in a companion paper, achieving these penetration levels would have significant implications for the wind industry and the broader electric sector. Compared to a baseline that assumes no new wind deployment, under the primary scenario modeled, achieving these penetrations imposes an incremental cost to electricity consumers of less than 1% through 2030. These cost implications, however, should be balanced against the variety of environmental and social implications of such a scenario. Relative to a baseline that assumes no new wind deployment, our analysis shows that the high-penetration wind scenario yields potential greenhouse-gas benefits of $85-$1,230 billion in present-value terms, with a central estimate of $400 billion. Air-pollution-related health benefits are estimated at $52-$272 billion, while annual electric-sector water withdrawals and consumption are lower by 15% and 23% in 2050, respectively. We also find that a high-wind-energy future would have implications for the diversity and risk of energy supply, local economic development, and land use and related local impacts on communities and ecosystems; however, these additional impacts may not greatly affect aggregate social welfare owing to their nature, in part, as resource transfers.

  6. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  7. Optimizing Land and Water Use at the Local Level to Enhance Global Food Security through Virtual Resources Trade in the World

    NASA Astrophysics Data System (ADS)

    Cai, X.; Zhang, X.; Zhu, T.

    2014-12-01

    Global food security is constrained by local and regional land and water availability, as well as other agricultural input limitations and inappropriate national and global regulations. In a theoretical context, this study assumes that optimal water and land uses in local food production to maximize food security and social welfare at the global level can be driven by global trade. It follows the context of "virtual resources trade", i.e., utilizing international trade of agricultural commodities to reduce dependency on local resources and achieve land and water savings in the world. An optimization model based on the partial equilibrium of agriculture is developed for the analysis, including local commodity production and land and water resources constraints, demand by country, and the global food market. Through the model, the marginal values (MVs) of social welfare for water and land at the level of so-called food production units (i.e., sub-basins with similar agricultural production conditions) are derived and mapped in the world. In this presentation, we will introduce the model structure, explain the meaning of MVs at the local level and their distribution around the world, and discuss the policy implications for global communities to enhance global food security. In particular, we will examine the economic values of water and land under different world targets of food security (e.g., number of malnourished population or children in a future year). In addition, we will also discuss the data opportunities for improving such global modeling exercises.
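
    The notion of a marginal value (MV) of land or water can be sketched with a toy linear program standing in for the full partial-equilibrium model: maximize the value of two crops subject to land and water limits, then read the constraint duals (shadow prices) as the MVs. Crop values, input coefficients and resource limits are invented, and SciPy's HiGHS-based linprog interface (with duals in res.ineqlin.marginals) is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for marginal values (MVs) of land and water: maximize the value of two
# crops subject to land and water limits, then read the constraint duals (shadow
# prices) as the MVs. All numbers are purely illustrative.

value = np.array([300.0, 500.0])      # $ per ha of crop 1 and crop 2
land_use = np.array([1.0, 1.0])       # ha of land per ha planted
water_use = np.array([4.0, 9.0])      # 1000 m^3 of water per ha planted
land_avail, water_avail = 100.0, 600.0

# linprog minimizes, so negate the objective to maximize total value.
res = linprog(c=-value,
              A_ub=np.vstack([land_use, water_use]),
              b_ub=[land_avail, water_avail],
              bounds=[(0, None), (0, None)],
              method="highs")

print("planted areas (ha):", res.x)
print("total value ($)   :", -res.fun)
# Shadow prices of the land and water constraints = marginal value of one more
# unit of each resource in the original maximization problem.
print("MV of land, water :", -res.ineqlin.marginals)
```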

  8. The nonlinear effect of resistive inhomogeneities on van der Pauw measurements

    NASA Astrophysics Data System (ADS)

    Koon, Daniel W.

    2005-03-01

    The resistive weighting function [D. W. Koon and C. J. Knickerbocker, Rev. Sci. Instrum. 63, 207 (1992)] quantifies the effect of small local inhomogeneities on van der Pauw resistivity measurements, but assumes such effects to be linear. This talk will describe deviations from linearity for a square van der Pauw geometry, modeled using a 5 x 5 grid network of discrete resistors and introducing both positive and negative perturbations to local resistors, covering nearly two orders of magnitude in -δρ/ρ or -δσ/σ. While there is a relatively modest quadratic nonlinearity for inhomogeneities of decreasing conductivity, the nonlinear term for inhomogeneities of decreasing resistivity is approximately cubic and can exceed the linear term.
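
    A sketch of the kind of discrete-resistor calculation involved: solve Kirchhoff's equations on a small square grid, read off a van der Pauw-style four-point resistance, and perturb a single interior resistor to mimic a local inhomogeneity. The grid size, contact placement and perturbation below are illustrative choices, not the exact 5 x 5 network of the talk.

```python
import numpy as np

# Discrete-resistor van der Pauw sketch: inject current at one corner, extract at an
# adjacent corner, and read the voltage across the other two corners. Perturbing one
# interior resistor mimics a local inhomogeneity. Values are illustrative only.

N = 6                                  # N x N grid of nodes, unit resistors on edges
def node(i, j):
    return i * N + j

def four_point_resistance(perturb=None, delta=0.0):
    """R_{AB,CD}, optionally changing one edge resistor by a factor (1 + delta)."""
    G = np.zeros((N * N, N * N))       # conductance (Laplacian) matrix
    def add_edge(a, b, g):
        G[a, a] += g; G[b, b] += g
        G[a, b] -= g; G[b, a] -= g
    for i in range(N):
        for j in range(N):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < N and j + dj < N:
                    g = 1.0
                    if perturb == (i, j, di, dj):
                        g = 1.0 / (1.0 + delta)      # resistor increased by delta
                    add_edge(node(i, j), node(i + di, j + dj), g)
    I = np.zeros(N * N)
    A, B = node(0, 0), node(0, N - 1)                # current contacts (top corners)
    C, D = node(N - 1, 0), node(N - 1, N - 1)        # voltage contacts (bottom corners)
    I[A], I[B] = 1.0, -1.0
    G[D, :] = 0.0; G[D, D] = 1.0; I[D] = 0.0         # ground node D to fix the potential
    V = np.linalg.solve(G, I)
    return V[C] - V[D]

R0 = four_point_resistance()
R1 = four_point_resistance(perturb=(2, 2, 0, 1), delta=0.5)   # +50% local resistance
print("unperturbed R_AB,CD:", R0)
print("perturbed   R_AB,CD:", R1, " relative change:", (R1 - R0) / R0)
```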

  9. Analysis of the Localization of Michelson Interferometer Fringes Using Fourier Optics and Temporal Coherence

    ERIC Educational Resources Information Center

    Narayanamurthy, C. S.

    2009-01-01

    Fringes formed in a Michelson interferometer never localize in any plane, neither in the detector plane nor in the localization plane. Instead, the fringes are assumed to localize at infinity. Except for some explanation in "Principles of Optics" by Born and Wolf (1964 (New York: Macmillan)), the fringe localization phenomena of Michelson's interferometer…

  10. Remote Estimation of River Discharge and Bathymetry: Sensitivity to Turbulent Dissipation and Bottom Friction

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Holland, K. T.

    2016-12-01

    We investigated the fidelity of a hierarchy of inverse models that estimate river bathymetry and discharge using measurements of surface currents and water surface elevation. Our most comprehensive depth inversion was based on the Shiono and Knight (1991) model that considers the depth-averaged along-channel momentum balance between the downstream pressure gradient due to gravity, the bottom drag and the lateral stresses induced by turbulence. The discharge was determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The bottom friction coefficient was assumed to be known or determined by alternative means. We also considered simplifications of the comprehensive inversion model that exclude the lateral mixing term from the momentum balance and assessed the effect of neglecting this term on the depth and discharge estimates for idealized in-bank flow in symmetric trapezoidal channels with width/depth ratio of 40 and different side-wall slopes. For these simple gravity-friction models, we used two different bottom friction parameterizations - a constant Darcy-Weisbach local friction and a depth-dependent friction related to the local depth and a constant Manning (roughness) coefficient. Our results indicated that the Manning gravity-friction model provides accurate estimates of the depth and the discharge that are within 1% of the assumed values for channels with side-wall slopes between 1/2 and 1/17. On the other hand, the constant Darcy-Weisbach friction model underpredicted the true depth and discharge by 7% and 9%, respectively, for the channel with side-wall slope of 1/17. These idealized modeling results suggest that a depth-dependent parameterization of the bottom friction is important for accurate inversion of depth and discharge and that the lateral turbulent mixing is not important. We also tested the comprehensive and the simplified inversion models for the Kootenai River near Bonners Ferry (Idaho) using in situ and remote sensing measurements of surface currents and water surface elevation obtained during a 2010 field experiment.
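
    In its simplest wide-channel, uniform-flow form, the Manning gravity-friction model reduces to u = (1/n) h^(2/3) S^(1/2), so the local depth follows from the measured velocity and water-surface slope, and the discharge from lateral integration, as in the sketch below. The roughness coefficient, slope and velocity samples are invented, and the surface velocity is treated as a proxy for the depth-averaged velocity.

```python
import numpy as np

# Minimal Manning gravity-friction depth/discharge inversion for a wide channel:
#   u = (1/n) * h**(2/3) * S**(1/2)   =>   h = (n * u / sqrt(S))**(3/2)
# Inputs below are illustrative stand-ins for remotely sensed surface data.

n_manning = 0.03          # assumed Manning roughness coefficient
S = 2.0e-4                # water-surface slope from elevation measurements

# Along-channel velocity at several cross-stream positions (m/s), treated as
# a proxy for the depth-averaged velocity.
u = np.array([0.4, 0.9, 1.2, 1.3, 1.1, 0.7, 0.3])
dy = 10.0                 # lateral spacing of the velocity samples (m)

h = (n_manning * u / np.sqrt(S)) ** 1.5     # inverted local depth (m)
Q = np.sum(u * h * dy)                      # discharge by lateral integration (m^3/s)

print("inverted depths (m):", np.round(h, 2))
print("discharge (m^3/s)  :", round(Q, 1))
```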

  11. Effect of wave localization on plasma instabilities

    NASA Astrophysics Data System (ADS)

    Levedahl, William Kirk

    1987-10-01

    The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves excited by solar flare electron streams into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels because of rapid wave convection of the transverse electromagnetic daughter wave in the case where the solar wind is assumed homogeneous. Langmuir and transverse waves near critical density satisfy the Ioffe-Regel criteria for wave localization in the solar wind with observed density fluctuations of ~1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of non-linear wave processes when waves are localized. A theory of resonant particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.

  12. Transfer Kinetics at the Aqueous/Non-Aqueous Phase Liquid Interface. A Statistical Mechanic Approach

    NASA Astrophysics Data System (ADS)

    Doss, S. K.; Ezzedine, S.; Ezzedine, S.; Ziagos, J. P.; Hoffman, F.; Gelinas, R. J.

    2001-05-01

    Many modeling efforts in the literature use a first-order, linear-driving-force model to represent the chemical dissolution process at the non-aqueous/aqueous phase liquid (NAPL/APL) interface. In other words, the NAPL to APL phase flux is assumed to be equal to the difference between the solubility limit and the "bulk aqueous solution" concentrations times a mass transfer coefficient. Under such assumptions, a few questions are raised: where, in relation to a region of pure NAPL, does the "bulk aqueous solution" regime begin and how does it behave? The answers are assumed to be associated with an arbitrary, predetermined boundary layer, which separates the NAPL from the surrounding solution. The mass transfer rate is considered to be primarily limited by diffusion of the component through the boundary layer. In fact, compositional models of interphase mass transfer usually assume that a local equilibrium is reached between phases. Representing mass flux as a rate-limiting process is equivalent to assuming diffusion through a stationary boundary layer with an instantaneous local equilibrium and a linear concentration profile. Some environmental researchers have enjoyed success explaining their data using chemical engineering-based correlations. Correlations are strongly dependent on the experimental conditions employed. A universally applicable theory for NAPL dissolution in natural systems does not exist. These correlations are usually expressed in terms of the modified Sherwood number as a function of the Reynolds, Peclet, and Schmidt numbers. The Sherwood number may be interpreted as the ratio between the grain size and the thickness of the Nernst stagnant film. In the present study, we show that transfer kinetics at the NAPL/APL interface under equilibrium conditions disagree with approaches based on the Nernst stagnant film concept. It is unclear whether the local equilibrium assumptions used in current models are suitable for all situations. A statistical mechanical framework has been chosen to study the transfer kinetic processes at the microscale level. The rationale for our approach is based on both the activation energy of transfer of an ion and its velocity across the NAPL/APL interface. There are four major energies controlling the interfacial NAPL dissolution kinetics: (de)solvation energy, interfacial tension energy, electrostatic energy, and thermal fluctuation energy. Transfer of an ion across the NAPL/APL interface is accelerated by the viscous forces, which can be described using the averaged Langevin master equation. The resulting energies and viscous forces were combined using the Boltzmann probability distribution. Asymptotic time limits of the resulting kinetics lead to instantaneous local equilibrium conditions that contradict the Nernst equilibrium equation. The NAPL/APL interface is not an ideal one: it does not conserve energy and heat. In our case the interface is treated as a thin film or slush zone that alters the thermodynamic variables. Such an added zone between the two phases is itself a phase, and, therefore, the equilibrium does not occur between two phases but rather three. All these findings led us to develop a new non-linearly coupled flow and transport system of equations which is able to account for specific chemical dissolution processes and precludes the need for empirical mass-transfer parameters. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
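
    For reference, the first-order, linear-driving-force model that the abstract questions reduces to a single rate equation, dC/dt = k_la (C_s - C); the sketch below integrates it with assumed, illustrative parameter values.

```python
import numpy as np

# First-order, linear-driving-force dissolution: flux from NAPL to APL proportional
# to (solubility limit - bulk concentration),  dC/dt = k_la * (C_s - C).
# Parameter values are illustrative assumptions only.

k_la = 0.05          # lumped mass-transfer rate coefficient (1/h)
C_s = 1100.0         # assumed solubility limit of the NAPL component (mg/L)
dt, t_end = 0.1, 120.0

t = np.arange(0.0, t_end + dt, dt)
C = np.zeros_like(t)
for i in range(1, t.size):
    C[i] = C[i - 1] + dt * k_la * (C_s - C[i - 1])   # explicit Euler step

print("C at t_end:", C[-1], " analytic:", C_s * (1 - np.exp(-k_la * t_end)))
```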

  13. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
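
    A minimal sketch of the global-appearance matching step, assuming scikit-image is available: each image is reduced to a single descriptor derived from its Radon transform, and a query image is assigned to the nearest map image by descriptor distance. The random test images, the particular descriptor normalization and the Euclidean matching are simplifications introduced here for illustration.

```python
import numpy as np
from skimage.transform import radon

# Global-appearance localization sketch: describe every image with one vector derived
# from its Radon transform, then match a query image to the nearest stored map image.
# The images here are random stand-ins for real omnidirectional panoramas.

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 180.0, 60, endpoint=False)

def descriptor(img):
    """Global appearance descriptor: column-wise energy of the Radon sinogram."""
    sinogram = radon(img, theta=theta, circle=False)
    d = np.sqrt((sinogram ** 2).sum(axis=0))        # one value per projection angle
    return d / np.linalg.norm(d)

# 'Map': images captured at known positions; 'query': image from an unknown position.
map_images = [rng.random((64, 64)) for _ in range(10)]
map_descriptors = np.array([descriptor(im) for im in map_images])

query = map_images[3] + 0.05 * rng.random((64, 64))  # noisy revisit of position 3
q = descriptor(query)

distances = np.linalg.norm(map_descriptors - q, axis=1)
print("estimated nearest map position:", int(np.argmin(distances)))
```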

  14. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.

  15. Hydrodynamic Models of Line-Driven Accretion Disk Winds III: Local Ionization Equilibrium

    NASA Technical Reports Server (NTRS)

    Pereyra, Nicolas Antonio; Kallman, Timothy R.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present time-dependent numerical hydrodynamic models of line-driven accretion disk winds in cataclysmic variable systems and calculate wind mass-loss rates and terminal velocities. The models are 2.5-dimensional, include an energy balance condition with radiative heating and cooling processes, and include local ionization equilibrium, introducing time dependence and spatial dependence in the line radiation force parameters. The radiation field is assumed to originate in an optically thick accretion disk. Wind ion populations are calculated under the assumption that local ionization equilibrium is determined by photoionization and radiative recombination, similar to a photoionized nebula. We find a steady wind flowing from the accretion disk. Radiative heating tends to maintain the temperature in the higher density wind regions near the disk surface, rather than allowing the wind to cool adiabatically. For a disk luminosity L_disk = L_⊙, white dwarf mass M_wd = 0.6 M_⊙, and white dwarf radius R_wd = 0.01 R_⊙, we obtain a wind mass-loss rate of M_wind = 4 × 10⁻¹² M_⊙ yr⁻¹ and a terminal velocity of approximately 3000 km s⁻¹. These results confirm the general velocity and density structures found in our earlier constant-ionization-equilibrium adiabatic CV wind models. Furthermore, we establish here 2.5D numerical models that can be extended to QSO/AGN winds, where the local ionization equilibrium will play a crucial role in the overall dynamics.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X; Sisniega, A; Zbijewski, W

    Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from the elimination of coronary artery motion (CAM) artifacts. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e. simulated coronary artery and explanted human heart) by ∼8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion estimation objective and was minimized with the Covariance Matrix Adaptation Evolution Strategy (CMAES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and Root Mean Square Error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: The results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for the application of such estimation in CCTA. Further work will involve motion estimation of complex motion-corrupted patient data acquired from a clinical CT scanner.
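
    A much simplified stand-in for the estimation step, using synthetic 2-D frames in place of CT projection data: a candidate rigid linear velocity is used to re-align the frames before averaging, and the velocity that minimizes the negative image-gradient magnitude of the result is taken as the motion estimate. A generic Nelder-Mead search replaces CMAES here, and every parameter is an assumption made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift
from scipy.optimize import minimize

# Toy motion estimation by sharpness maximization: frames acquired while the ROI
# translates linearly are re-aligned with a candidate velocity and averaged; the
# candidate that maximizes the gradient magnitude (minimizes its negative) of the
# average is the motion estimate. All data and parameters are synthetic assumptions.

rng = np.random.default_rng(2)
roi = gaussian_filter(rng.random((64, 64)), 1.0)         # synthetic ROI with structure
true_velocity = np.array([0.6, -0.4])                    # pixels per frame (rigid, linear)
times = np.arange(-5, 6)                                 # 11 frames over the acquisition window
frames = [shift(roi, t * true_velocity, order=3) for t in times]

def neg_sharpness(v):
    """Negative gradient magnitude of the motion-compensated average image."""
    comp = np.mean([shift(f, -t * np.asarray(v), order=3) for f, t in zip(frames, times)],
                   axis=0)
    gy, gx = np.gradient(comp[8:-8, 8:-8])               # drop interpolation borders
    return -np.sqrt(gx ** 2 + gy ** 2).sum()

res = minimize(neg_sharpness, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated velocity:", np.round(res.x, 2), " true velocity:", true_velocity)
```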

  17. Numerical analysis of the effect of turbulence transition on the hemodynamic parameters in human coronary arteries.

    PubMed

    Mahalingam, Arun; Gawandalkar, Udhav Ulhas; Kini, Girish; Buradi, Abdulrajak; Araki, Tadashi; Ikeda, Nobutaka; Nicolaides, Andrew; Laird, John R; Saba, Luca; Suri, Jasjit S

    2016-06-01

    Local hemodynamics plays an important role in atherogenesis and the progression of coronary atherosclerosis disease (CAD). The primary biological effect due to blood turbulence is the change in wall shear stress (WSS) on the endothelial cell membrane, while the local oscillatory nature of the blood flow affects the physiological changes in the coronary artery. In coronary arteries, the blood flow Reynolds number ranges from a few tens to several hundreds, and hence the flow is generally assumed to be laminar when calculating WSS. However, pulsatile blood flow through coronary arteries under stenotic conditions can result in a transition from laminar to turbulent flow. In the present work, the onset of turbulent transition during pulsatile flow through coronary arteries for varying degrees of stenosis (i.e., 0%, 30%, 50% and 70%) is quantitatively analyzed by calculating the turbulent parameters distal to the stenosis. Also, the effect of turbulence transition on hemodynamic parameters such as WSS and the oscillatory shear index (OSI) for varying degrees of stenosis is quantified. The validated transitional shear stress transport (SST) k-ω model used in the present investigation is the Reynolds-averaged Navier-Stokes turbulence model best suited to capture the turbulent transition. The arterial wall is assumed to be rigid, and the dynamic curvature effect due to myocardial contraction on the blood flow has been neglected. Our observations show that for stenoses of 50% and above, the WSSavg, WSSmax and OSI calculated using the turbulence model deviate from the laminar results by more than 10%, and the flow disturbances seem to increase significantly only after 70% stenosis. The model employed proved reliable and is fully validated. Blood flow through stenosed coronary arteries appears to be turbulent in nature for area stenosis above 70%, and the transition to turbulent flow begins from 50% stenosis.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, J; Deasy, J O

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regimen, several different starting times and intervals were simulated with a conventional RT regime (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, in order to find the optimal chemotherapy schedule. Results: Assuming the typical slope of the dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started at about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that the concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.
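
    A crude scheduling toy can illustrate why the chemotherapy start time matters in a model of this kind: weekday 2 Gy fractions kill cells according to the linear-quadratic model, each chemotherapy cycle adds log cell-kill, and the chemo-sensitive fraction is assumed, purely for illustration, to grow during treatment as a stand-in for reoxygenation and proliferation. The α/β values, time scale and cycle logic below are not those of the published model.

```python
import numpy as np

# Crude scheduling toy for concurrent chemo-radiation: weekday 2 Gy RT fractions with
# linear-quadratic cell kill; each chemo cycle multiplies survival by 10**(-log_kill)
# applied to a chemo-sensitive fraction that is assumed to grow during treatment.
# Cycles falling after the 7-week RT course are simply not delivered.
# All parameters and the sensitivity curve are illustrative assumptions.

alpha, beta, dose, n_days = 0.3, 0.03, 2.0, 49
log_kill = 0.35                 # chemo log cell-kill per cycle (value quoted above)
tau = 14.0                      # assumed growth time scale of the sensitive fraction (days)

def surviving_fraction(first_cycle_day, cycle_interval=21):
    chemo_days = {first_cycle_day, first_cycle_day + cycle_interval}
    s = 1.0
    for day in range(n_days):
        if day % 7 < 5:                                   # weekday RT fraction
            s *= np.exp(-(alpha * dose + beta * dose ** 2))
        if day in chemo_days:                             # concurrent chemo cycle
            sensitive = 1.0 - np.exp(-day / tau)          # crude reoxygenation proxy
            s *= 10.0 ** (-log_kill * sensitive)
    return s

for week in range(6):
    print(f"chemo starting week {week}: surviving fraction = {surviving_fraction(7 * week):.2e}")
```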

  19. Cosmic Bulk Flow and the Local Motion from Cosmicflows-2

    NASA Astrophysics Data System (ADS)

    Courtois, Helene M.; Hoffman, Yehuda; Tully, R. Brent

    2015-08-01

    Full sky surveys of peculiar velocity are arguably the best way to map the large scale structure out to distances of a few times 100 Mpc/h. Using Cosmicflows-2, the largest and most accurate catalog of galaxy peculiar velocities to date, the large scale structure has been reconstructed by means of the Wiener filter and constrained realizations, assuming as a Bayesian prior model the LCDM standard model of cosmology. The present paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R=500 Mpc/h. Our main result is that the estimated bulk flow is consistent with the LCDM model with the WMAP-inferred cosmological parameters. At R=50 (150) Mpc/h the estimated bulk velocity is 250 +/- 21 (239 +/- 38) km/s. The corresponding cosmic variance at these radii is 126 (60) km/s, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ~ 200 Mpc/h, where the cosmic variance of the individual Supergalactic Cartesian components (of the r.m.s. values) exceeds the variance of the constrained realizations by at least a factor of 2. The SGX and SGY components of the CMB dipole velocity are recovered by the Wiener filter velocity field down to a very few km/s. The SGZ component of the estimated velocity, the one that is most affected by the Zone of Avoidance, is off by 126 km/s (an almost 2 sigma discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias, and very similar results are obtained for the data with and without the bias correction.

  20. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    PubMed Central

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID:27761304

  1. 1/ f noise from the laws of thermodynamics for finite-size fluctuations.

    PubMed

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone; whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  2. Mesohysteresis model for ferromagnetic materials by minimization of the micromagnetic free energy

    NASA Astrophysics Data System (ADS)

    van den Berg, A.; Dupré, L.; Van de Wiele, B.; Crevecoeur, G.

    2009-04-01

    To study the connection between macroscopic hysteretic behavior and the microstructural properties, this paper presents and validates a new material-dependent three-dimensional mesoscopic magnetic hysteresis model. In the presented mesoscopic description, the different micromagnetic energy terms are reformulated on the space scale of the magnetic domains. The sample is discretized in cubic cells, each with a local stress state, local bcc crystallographic axes, etc. The magnetization is assumed to align with one of the three crystallographic axes, in positive or negative sense, defining six volume fractions within each cell. The micromagnetic Gibbs free energy is described in terms of these volume fractions. Hysteresis loops are computed by minimizing the mesoscopic Gibbs free energy using a modified gradient search for a sequence of externally applied fields. To validate the mesohysteresis model, we studied the magnetic memory properties. Numerical experiments reveal that (1) minor hysteresis loops are indeed closed and (2) the closed minor loops are erased from the memory.

  3. Observational effects of varying speed of light in quadratic gravity cosmological models

    NASA Astrophysics Data System (ADS)

    Izadi, Azam; Shacker, Shadi Sajedi; Olmo, Gonzalo J.; Banerjee, Robi

    We study different manifestations of the speed of light in theories of gravity where metric and connection are regarded as independent fields. We find that for a generic gravity theory in a frame with locally vanishing affine connection, the usual degeneracy between different manifestations of the speed of light is broken. In particular, the space-time causal structure constant (cST) may become variable in that local frame. For theories of the form f(ℛ, ℛ_μν ℛ^μν), this variation in cST has an impact on the definition of the luminosity distance (and distance modulus), which can be used to confront the predictions of particular models against Supernovae type Ia (SN Ia) data. We carry out this test for a quadratic gravity model without cosmological constant assuming (i) a constant speed of light and (ii) a varying speed of light (VSL), and find that the latter scenario is favored by the data.

  4. Dielectric and thermal modeling of Vesta's surface

    NASA Astrophysics Data System (ADS)

    Palmer, E. M.; Heggy, E.; Capria, M. T.; Tosi, F.; Russell, C. T.

    2013-09-01

We generate a dielectric model for the surface of Vesta from thermal observations by Dawn's Visible and Infrared (VIR) mapping spectrometer. After retrieving surface temperatures from VIR data, we model thermal inertia and derive a theoretical temperature map of Vesta's surface at a given UTC. To calculate the real part of the dielectric constant (ɛ') and the loss tangent (tg δ), we use the dielectric properties of basaltic lunar regolith as a first-order analog, assuming surface density and composition consistent with fine basaltic lunar dust. First results indicate that, for the majority of the surface, ɛ' ranges from 2.0 on the night side to 2.1 on the day side, and tg δ ranges from 1.05E-2 to 1.40E-2. While these regions are consistent with a basaltic, desiccated, ~55% porous surface, we also find anomalies in the thermal inertia that may correspond to a variation in local surface density relative to the global average, and a consequent variation in the local dielectric properties.

  5. On the effects of nonlinear boundary conditions in diffusive logistic equations on bounded domains

    NASA Astrophysics Data System (ADS)

    Cantrell, Robert Stephen; Cosner, Chris

    We study a diffusive logistic equation with nonlinear boundary conditions. The equation arises as a model for a population that grows logistically inside a patch and crosses the patch boundary at a rate that depends on the population density. Specifically, the rate at which the population crosses the boundary is assumed to decrease as the density of the population increases. The model is motivated by empirical work on the Glanville fritillary butterfly. We derive local and global bifurcation results which show that the model can have multiple equilibria and in some parameter ranges can support Allee effects. The analysis leads to eigenvalue problems with nonstandard boundary conditions.
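
    In illustrative notation (not necessarily the authors' exact formulation), the model described above can be written as

      u_t \;=\; d\,\Delta u \;+\; u\,\bigl(m(x) - u\bigr) \quad \text{in } \Omega,
      \qquad
      d\,\frac{\partial u}{\partial n} \;=\; -\,\alpha(u)\,u \quad \text{on } \partial\Omega,

    where u is the population density, Ω the patch, n the outward normal, and α(u) a decreasing function of u, so that the rate at which individuals cross the boundary falls as the local density rises.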

  6. Viscous dissipation effects on MHD slip flow and heat transfer in porous micro duct with LTNE assumptions using modified lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Rabhi, R.; Amami, B.; Dhahri, H.; Mhimid, A.

    2017-11-01

This paper deals with heat transfer and fluid flow in a porous micro duct under local thermal non-equilibrium conditions subjected to an external oriented magnetic field. The considered sample is a micro duct filled with a porous medium assumed to be homogeneous, isotropic and saturated. A slip velocity and a temperature jump were uniformly imposed at the wall. In modeling the flow, the Brinkman-Forchheimer extended Darcy model was incorporated into the momentum equations. In the energy equation, local thermal non-equilibrium between the two phases was adopted. A modified axisymmetric lattice Boltzmann method was used to solve the resulting system of governing equations. Throughout the paper, attention is focused on the influence of the emerging parameters, namely the Knudsen number Kn, the Hartmann number Ha, the Eckert number Ec, the Biot number Bi and the magnetic field inclination γ, on flow and heat transfer.

  7. Clean Floquet Time Crystals: Models and Realizations in Cold Atoms

    NASA Astrophysics Data System (ADS)

    Huang, Biao; Wu, Ying-Hai; Liu, W. Vincent

    2018-03-01

Time crystals, a phase that spontaneously breaks time-translation symmetry, have been an intriguing subject for systems far from equilibrium. Recent experiments found such a phase in both the presence and the absence of localization, while in theories localization by disorder is usually assumed a priori. In this work, we point out that time crystals can generally exist in systems without disorder. A series of clean quasi-one-dimensional models under Floquet driving are proposed to demonstrate this unexpected result in principle. Robust time crystalline orders are found in the strongly interacting regime, along with emergent integrals of motion in the dynamical system, which can be characterized by level statistics and by out-of-time-ordered correlators. We propose two cold atom experimental schemes to realize the clean Floquet time crystals, one making use of dipolar gases and another using synthetic dimensions.

  8. Mott Time Crystal: Models and Realizations in Cold Atoms

    NASA Astrophysics Data System (ADS)

    Huang, Biao; Wu, Ying-Hai; Liu, W. Vincent

    2017-04-01

Time crystals, a phase that spontaneously breaks time-translation symmetry, have been an intriguing subject for systems far from equilibrium. Recent experiments found such a phase both in the presence and in the absence of localization, while in theories localization is usually assumed a priori. In this work, we point out that time crystals can generally exist in systems without disorder and are not in a pre-thermal state. A series of driven interacting ladder models are proposed to demonstrate this unexpected result in principle. Robust time crystalline orders are found in the Mott regime due to emergent integrals of motion in the dynamical system, which can be characterized by out-of-time-order correlators (OTOC). We propose two cold atom experimental schemes to realize the Mott time crystals, one making use of dipolar gases and another using synthetic dimensions. U.S. ARO (W911NF-11-1-0230), AFOSR (FA9550-16-1-0006).

  9. Full circumpolar migration ensures evolutionary unity in the Emperor penguin.

    PubMed

    Cristofari, Robin; Bertorelle, Giorgio; Ancel, André; Benazzo, Andrea; Le Maho, Yvon; Ponganis, Paul J; Stenseth, Nils Chr; Trathan, Phil N; Whittington, Jason D; Zanetti, Enrico; Zitterbart, Daniel P; Le Bohec, Céline; Trucchi, Emiliano

    2016-06-14

    Defining reliable demographic models is essential to understand the threats of ongoing environmental change. Yet, in the most remote and threatened areas, models are often based on the survey of a single population, assuming stationarity and independence in population responses. This is the case for the Emperor penguin Aptenodytes forsteri, a flagship Antarctic species that may be at high risk continent-wide before 2100. Here, using genome-wide data from the whole Antarctic continent, we reveal that this top-predator is organized as one single global population with a shared demography since the late Quaternary. We refute the view of the local population as a relevant demographic unit, and highlight that (i) robust extinction risk estimations are only possible by including dispersal rates and (ii) colony-scaled population size is rather indicative of local stochastic events, whereas the species' response to global environmental change is likely to follow a shared evolutionary trajectory.

  10. Full circumpolar migration ensures evolutionary unity in the Emperor penguin

    PubMed Central

    Cristofari, Robin; Bertorelle, Giorgio; Ancel, André; Benazzo, Andrea; Le Maho, Yvon; Ponganis, Paul J.; Stenseth, Nils Chr; Trathan, Phil N.; Whittington, Jason D.; Zanetti, Enrico; Zitterbart, Daniel P.; Le Bohec, Céline; Trucchi, Emiliano

    2016-01-01

    Defining reliable demographic models is essential to understand the threats of ongoing environmental change. Yet, in the most remote and threatened areas, models are often based on the survey of a single population, assuming stationarity and independence in population responses. This is the case for the Emperor penguin Aptenodytes forsteri, a flagship Antarctic species that may be at high risk continent-wide before 2100. Here, using genome-wide data from the whole Antarctic continent, we reveal that this top-predator is organized as one single global population with a shared demography since the late Quaternary. We refute the view of the local population as a relevant demographic unit, and highlight that (i) robust extinction risk estimations are only possible by including dispersal rates and (ii) colony-scaled population size is rather indicative of local stochastic events, whereas the species' response to global environmental change is likely to follow a shared evolutionary trajectory. PMID:27296726

  11. Experimental generation and computational modeling of intracellular pH gradients in cardiac myocytes.

    PubMed

    Swietach, Pawel; Leem, Chae-Hun; Spitzer, Kenneth W; Vaughan-Jones, Richard D

    2005-04-01

It is often assumed that intracellular pH (pHi) is spatially uniform within cells. A double-barreled microperfusion system was used to apply solutions of weak acid (acetic acid, CO2) or base (ammonia) to localized regions of an isolated ventricular myocyte (guinea pig). A stable, longitudinal pHi gradient (up to 1 pH unit) was observed (using confocal imaging of SNARF-1 fluorescence). Changing the fractional exposure of the cell to weak acid/base altered the gradient, as did changing the concentration and type of weak acid/base applied. A diffusion-reaction computational model accurately simulated this behavior of pHi. The model assumes that intracellular H+ movement occurs via diffusive shuttling on mobile buffers, with little free H+ diffusion. The average diffusion constant for mobile buffer was estimated as 33 × 10⁻⁷ cm²/s, consistent with an apparent intracellular H+ diffusion coefficient, D_H(app), of 14.4 × 10⁻⁷ cm²/s (at pHi 7.07), a value two orders of magnitude lower than for H+ ions in water but similar to that estimated recently from local acid injection via a cell-attached glass micropipette. We conclude that, because intracellular H+ mobility is so low, an extracellular concentration gradient of permeant weak acid readily induces pHi nonuniformity. Similar concentration gradients for weak acid (e.g., CO2) occur across border zones during regional myocardial ischemia, raising the possibility of steep pHi gradients within the heart under some pathophysiological conditions.

  12. Ages of Massive Galaxies at 0.5 > z > 2.0 from 3D-HST Rest-frame Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Fumagalli, Mattia; Franx, Marijn; van Dokkum, Pieter; Whitaker, Katherine E.; Skelton, Rosalind E.; Brammer, Gabriel; Nelson, Erica; Maseda, Michael; Momcheva, Ivelina; Kriek, Mariska; Labbé, Ivo; Lundgren, Britt; Rix, Hans-Walter

    2016-05-01

We present low-resolution near-infrared stacked spectra from the 3D-HST survey up to z = 2.0 and fit them with commonly used stellar population synthesis models: BC03, FSPS10 (Flexible Stellar Population Synthesis), and FSPS-C3K. The accuracy of the grism redshifts allows the unambiguous detection of many emission and absorption features and thus a first systematic exploration of the rest-frame optical spectra of galaxies up to z = 2. We select massive galaxies (log(M*/M⊙) > 10.8), we divide them into quiescent and star-forming via a rest-frame color-color technique, and we median-stack the samples in three redshift bins between z = 0.5 and z = 2.0. We find that stellar population models fit the observations well at wavelengths below the 6500 Å rest frame, but show systematic residuals at redder wavelengths. The FSPS-C3K model generally provides the best fits (evaluated with reduced χ² statistics) for quiescent galaxies, while BC03 performs the best for star-forming galaxies. The stellar ages of quiescent galaxies implied by the models, assuming solar metallicity, vary from 4 Gyr at z ~ 0.75 to 1.5 Gyr at z ~ 1.75, with an uncertainty of a factor of two caused by the unknown metallicity. On average, the stellar ages are half the age of the universe at these redshifts. We show that the inferred evolution of ages of quiescent galaxies is in agreement with fundamental plane measurements, assuming an 8 Gyr age for local galaxies. For star-forming galaxies, the inferred ages depend strongly on the stellar population model and the shape of the assumed star-formation history.

  13. On the methanol emission detection in the TW Hya disc: the role of grain surface chemistry and non-LTE excitation

    NASA Astrophysics Data System (ADS)

    Parfenov, S. Yu.; Semenov, D. A.; Henning, Th.; Shapovalova, A. S.; Sobolev, A. M.; Teague, R.

    2017-06-01

The recent detection of gas-phase methanol (CH3OH) lines in the disc of TW Hya by Walsh et al. provided the first observational constraints on the complex O-bearing organic content in protoplanetary discs. The emission has a ring-like morphology, with a peak at ~30-50 au and an inferred column density of ~3-6 × 10¹² cm⁻². A low CH3OH fractional abundance of ~0.3-4 × 10⁻¹¹ (with respect to H2) is derived, depending on the assumed vertical location of the CH3OH molecular layer. In this study, we use a thermochemical model of the TW Hya disc, coupled with the alchemic gas-grain chemical model, assuming laboratory-motivated, fast diffusivities of the surface molecules to interpret the CH3OH detection. Based on this disc model, we performed radiative transfer calculations with the lime code and simulations of the observations with the casa simulator. We found that our model allows us to reproduce the observations well. The CH3OH emission in our model appears as a ring with radius of ~60 au. Synthetic and observed line flux densities are equal within the rms noise level of observations. The synthetic CH3OH spectra calculated assuming local thermodynamic equilibrium (LTE) can differ by up to a factor of 3.5 from the non-LTE spectra. For the strongest lines, the differences between LTE and non-LTE flux densities are very small and practically negligible. Variations in the diffusivity of the surface molecules can lead to variations of the CH3OH abundance and, therefore, line flux densities by an order of magnitude.

  14. Lumped Model Generation and Evaluation: Sensitivity and Lie Algebraic Techniques with Applications to Combustion

    DTIC Science & Technology

    1987-10-01

literature for their predictive capabilities. STATUS OF RESEARCH: During the past year research on several interrelated activities was pursued in the...identifiability in nonlinear systems. In addition to its analyticity we assume that system (1) is locally reduced at z_0(p) for all p ∈ l, i.e., it...rearranging the lhs of (19) and continuing for Ci-2,* '..0 yields From (20) and (24) * Since the O.R.C. is satisfied at 0, by analyticity of (1) there

  15. Architecture Aware Partitioning Algorithms

    DTIC Science & Technology

    2006-01-19

follows: Given a graph G = (V, E), where V is the set of vertices, n = |V| is the number of vertices, and E is the set of edges in the graph, partition the...communication link l(pi, pj) is associated with a graph edge weight e*(pi, pj) that represents the communication cost per unit of communication between...one that is local for each one. For our model we assume that communication in either direction across a given link is the same, therefore e*(pi, pj

  16. Classical and Non-Classical Regimes of the Limited-Fetch Wave Growth and Localized Structures on the Surface of Water

    DTIC Science & Technology

    2013-09-30

specifying the wave-maker driving signal. The short intense envelope solitons possess vertical asymmetry similar to regular Stokes waves with the same...presented in [P1], [P2]. 2. Physical model of sea wave period from altimeter data We use the asymptotic theory of wind wave growth proposed in [R6...relationship can be used for processing altimeter data assuming the wave field to be stationary and spatially inhomogeneous. It is consistent with

  17. Numerical Field Model Simulation of Fire and Heat Transfer in a Rectangular Compartment

    DTIC Science & Technology

    1992-09-01

zero. However, due to the approximation inherent in the numerical scheme, we will be satisfied if S,, tends toward zero as determined by comparison... zero, the appropriate coefficient (A) corresponding to that boundary is also set equal to zero. After the local pressure correction (P') is determined...chamber just prior to starting the fire. It is assumed that the air is uniformly at rest, thus all components of velocity are set equal to zero

  18. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-free signal values using the observed MR data samples within local neighborhoods. This is not an efficient way to address the problem, since 3D MR data intrinsically include many similar samples that can be used to improve the estimation results. To overcome this limitation, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To follow the similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are automatically chosen from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing the noise and preserving the anatomical structures of MR images.
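
    For orientation, the sketch below shows a commonly used, purely local closed-form LMMSE estimator for Rician-distributed magnitude data of the kind the conventional approach described above relies on; the window size, the σ handling and the synthetic example are illustrative, and the nonlocal sample selection, automatic parameter choice from the local SNR and recursive refinement introduced by the authors are not reproduced.

      # Minimal local LMMSE sketch for Rician-distributed MR magnitude data.
      # Operates on the squared magnitude; sigma is the (assumed known) noise level.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def lmmse_rician(mag, sigma, size=5):
          m2 = mag.astype(float) ** 2
          mean_m2 = uniform_filter(m2, size)                  # local E[M^2]
          mean_m4 = uniform_filter(m2 ** 2, size)             # local E[M^4]
          var_m2 = np.maximum(mean_m4 - mean_m2 ** 2, 1e-12)  # local Var[M^2]
          # Closed-form LMMSE gain for the squared-magnitude signal A^2
          k = 1.0 - (4.0 * sigma ** 2 * (mean_m2 - sigma ** 2)) / var_m2
          k = np.clip(k, 0.0, 1.0)
          a2 = mean_m2 - 2.0 * sigma ** 2 + k * (m2 - mean_m2)
          return np.sqrt(np.maximum(a2, 0.0))

      # Example on synthetic data: constant signal A = 100 corrupted by Rician noise.
      rng = np.random.default_rng(1)
      A, sigma = 100.0, 10.0
      clean = np.full((64, 64, 16), A)
      noisy = np.sqrt((clean + sigma * rng.standard_normal(clean.shape)) ** 2
                      + (sigma * rng.standard_normal(clean.shape)) ** 2)
      denoised = lmmse_rician(noisy, sigma)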

  19. Modelling of thick composites using a layerwise laminate theory

    NASA Technical Reports Server (NTRS)

    Robbins, D. H., Jr.; Reddy, J. N.

    1993-01-01

The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g. simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.

  20. 46 CFR 172.205 - Local damage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... BULK CARGOES Special Rules Pertaining to a Ship That Carries a Bulk Liquefied Gas Regulated Under... location in the cargo length: (b) The vessel is presumed to survive assumed local damage if it does not...

  1. 46 CFR 172.205 - Local damage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... BULK CARGOES Special Rules Pertaining to a Ship That Carries a Bulk Liquefied Gas Regulated Under... location in the cargo length: (b) The vessel is presumed to survive assumed local damage if it does not...

  2. 46 CFR 172.205 - Local damage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... BULK CARGOES Special Rules Pertaining to a Ship That Carries a Bulk Liquefied Gas Regulated Under... location in the cargo length: (b) The vessel is presumed to survive assumed local damage if it does not...

  3. 46 CFR 172.205 - Local damage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... BULK CARGOES Special Rules Pertaining to a Ship That Carries a Bulk Liquefied Gas Regulated Under... location in the cargo length: (b) The vessel is presumed to survive assumed local damage if it does not...

  4. 46 CFR 172.205 - Local damage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... BULK CARGOES Special Rules Pertaining to a Ship That Carries a Bulk Liquefied Gas Regulated Under... location in the cargo length: (b) The vessel is presumed to survive assumed local damage if it does not...

  5. Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne

    2017-07-01

    In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions, the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modelling energy dissipation in realistic turbulent plasmas.

  6. Investigation on the electron flux to the wall in the VENUS ion source

    NASA Astrophysics Data System (ADS)

    Thuillier, T.; Angot, J.; Benitez, J. Y.; Hodgkinson, A.; Lyneis, C. M.; Todd, D. S.; Xie, D. Z.

    2016-02-01

The long-term operation of high charge state electron cyclotron resonance ion sources fed with high microwave power has caused damage to the plasma chamber wall in several laboratories. Porosity, or a small hole, can be progressively created in the chamber wall, which can destroy the plasma chamber over a time scale of a few years. A burnout of the VENUS plasma chamber is investigated, focusing on the hole formation in relation to the local hot electron power density. First, the results of a simple model assuming that hot electrons are fully magnetized and strictly follow magnetic field lines are presented. The model qualitatively reproduces the experimental traces left by the plasma on the wall. However, it is too crude to reproduce the localized electron power density needed to create a hole in the chamber wall. Second, the results of a Monte Carlo simulation, following a population of scattering hot electrons, indicate a localized high power deposited on the chamber wall consistent with the hole formation process. Finally, a hypervapotron cooling scheme is proposed to mitigate hole formation in electron cyclotron resonance plasma chamber walls.

  7. Delay-induced depinning of localized structures in a spatially inhomogeneous Swift-Hohenberg model

    NASA Astrophysics Data System (ADS)

    Tabbert, Felix; Schelte, Christian; Tlidi, Mustapha; Gurevich, Svetlana V.

    2017-03-01

We report on the dynamics of localized structures in an inhomogeneous Swift-Hohenberg model describing pattern formation in the transverse plane of an optical cavity. This real order parameter equation is valid close to the second-order critical point associated with bistability. The optical cavity is illuminated by an inhomogeneous spatial Gaussian pumping beam and subjected to time-delayed feedback. The Gaussian injection beam breaks the translational symmetry of the system by exerting an attracting force on the localized structure. We show that the localized structure can be pinned to the center of the inhomogeneity, suppressing the delay-induced drift bifurcation that has been reported in the particular case where the injection is homogeneous, assuming a continuous wave operation. Under an inhomogeneous spatial pumping beam, we perform the stability analysis of localized solutions to identify different instability regimes induced by time-delayed feedback. In particular, we predict the formation of two-arm spirals, as well as oscillating and depinning dynamics caused by the interplay of an attracting inhomogeneity and destabilizing time-delayed feedback. The transition from oscillating to depinning solutions is investigated by means of numerical continuation techniques. Analytically, we use an order parameter approach to derive a normal form of the delay-induced Hopf bifurcation leading to an oscillating solution. Additionally, we model the interplay of an attracting inhomogeneity and destabilizing time delay by describing the localized solution as an overdamped particle in a potential well generated by the inhomogeneity. In this case, the time-delayed feedback acts as a driving force. Comparing results from the latter approach with the full Swift-Hohenberg model, we show that the approach not only provides an instructive description of the depinning dynamics, but also is numerically accurate throughout most of the parameter regime.
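
    The overdamped-particle picture invoked in the last sentences can be written schematically as follows; the specific linear form of the delayed force is an assumption of this sketch, not necessarily the exact normal form derived in the paper:

      \dot{x}(t) \;=\; -\,\frac{\mathrm{d}V}{\mathrm{d}x}\Big|_{x(t)} \;+\; \eta\,\bigl[x(t) - x(t-\tau)\bigr],

    where x is the position of the localized structure, V(x) the attracting potential generated by the Gaussian inhomogeneity, η the feedback strength and τ the delay; depinning occurs when the delayed driving term overcomes the confining force.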

  8. Multilevel selection in a resource-based model

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.
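
    As a point of reference, a prisoner's dilemma payoff matrix for the within-group interactions can be written in standard notation (the resource-dependent entries of the paper are not reproduced here):

      \Pi \;=\;
      \begin{pmatrix}
        R & S \\
        T & P
      \end{pmatrix},
      \qquad T > R > P > S,

    where rows correspond to the focal strategy (cooperate, defect) and columns to the partner's strategy; when resource availability shifts the entries so that this ordering no longer holds, the interaction ceases to be a prisoner's dilemma.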

  9. Modelling of the Saturnian Kilometric Radiation (SKR)

    NASA Astrophysics Data System (ADS)

    Cecconi, B.; Lamy, L.; Prangé, R.; Zarka, P.; Hess, S.; Clarke, J. T.; Nichols, J.

    2008-12-01

The Saturnian Kilometric Radiation (SKR), discovered by the Voyager spacecraft in the 1980s, has been observed quasi-continuously by Cassini since 2003. Study of 3 years of SKR observations by RPWS (Radio and Plasma Wave Science) revealed three recurrent features of SKR dynamic spectra: (i) discrete arcs, presumably caused by the anisotropy of the radio emission pattern combined with the observer's motion, (ii) an equatorial shadow zone around the planet (observed near perikrones) and (iii) signal extinctions at high northern latitudes. We model these features using the code PRES (Planetary Radio Emission Simulator), which assumes radio emissions to be generated via the Cyclotron Maser Instability, to simulate the observed dynamic spectra. We show that the observed arc-like structures imply radio sources in partial (~90%) corotation, located on magnetic field lines of invariant latitude 70° to 75°, and emitting at an oblique angle from the local magnetic field with a cone angle that varies with frequency. Then, based on the previously demonstrated conjugacy between UV and SKR sources, we successfully model the equatorial shadow zone as well as the northern-latitude SKR extinctions assuming time-variable radio sources distributed along field lines with footprints along the daily UV oval measured from HST images.

  10. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.

  11. Impulse-induced localized control of chaos in starlike networks.

    PubMed

    Chacón, Ricardo; Palmero, Faustino; Cuevas-Maraver, Jesús

    2016-06-01

    Locally decreasing the impulse transmitted by periodic pulses is shown to be a reliable method of taming chaos in starlike networks of dissipative nonlinear oscillators, leading to both synchronous periodic states and equilibria (oscillation death). Specifically, the paradigmatic model of damped kicked rotators is studied in which it is assumed that when the rotators are driven synchronously, i.e., all driving pulses transmit the same impulse, the networks display chaotic dynamics. It is found that the taming effect of decreasing the impulse transmitted by the pulses acting on particular nodes strongly depends on their number and degree of connectivity. A theoretical analysis is given explaining the basic physical mechanism as well as the main features of the chaos-control scenario.
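
    A minimal single-node sketch of a damped kicked rotator (a dissipative standard map) is given below; locally "decreasing the impulse" corresponds to lowering the kick amplitude on selected nodes. The star-network coupling of the paper is not included, and the parameter values are illustrative.

      # Minimal sketch: a single damped kicked rotator (dissipative standard map).
      import numpy as np

      def kicked_rotator(K, gamma=0.1, n_kicks=5000, theta0=0.5, p0=0.0):
          theta, p = theta0, p0
          traj = np.empty((n_kicks, 2))
          for n in range(n_kicks):
              p = (1.0 - gamma) * p + K * np.sin(theta)   # damped momentum + kick impulse
              theta = (theta + p) % (2.0 * np.pi)
              traj[n] = theta, p
          return traj

      chaotic = kicked_rotator(K=6.0)    # larger transmitted impulse: typically chaotic
      tamed   = kicked_rotator(K=2.0)    # reduced impulse: typically regular motion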

  12. 76 FR 53681 - Information Collection Being Reviewed by the Federal Communications Commission

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-29

    ... this form to local franchising authorities or the Commission, in situations where the FCC has assumed.... Cable operators submit FCC Form 1240 to their respective local franchising authorities (``LFAs'') to...

  13. Estimating the effects of Cry1F Bt-maize pollen on non-target Lepidoptera using a mathematical model of exposure

    PubMed Central

    Perry, Joe N; Devos, Yann; Arpaia, Salvatore; Bartsch, Detlef; Ehlert, Christina; Gathmann, Achim; Hails, Rosemary S; Hendriksen, Niels B; Kiss, Jozsef; Messéan, Antoine; Mestdagh, Sylvie; Neemann, Gerd; Nuti, Marco; Sweet, Jeremy B; Tebbe, Christoph C

    2012-01-01

    In farmland biodiversity, a potential risk to the larvae of non-target Lepidoptera from genetically modified (GM) Bt-maize expressing insecticidal Cry1 proteins is the ingestion of harmful amounts of pollen deposited on their host plants. A previous mathematical model of exposure quantified this risk for Cry1Ab protein. We extend this model to quantify the risk for sensitive species exposed to pollen containing Cry1F protein from maize event 1507 and to provide recommendations for management to mitigate this risk. A 14-parameter mathematical model integrating small- and large-scale exposure was used to estimate the larval mortality of hypothetical species with a range of sensitivities, and under a range of simulated mitigation measures consisting of non-Bt maize strips of different widths placed around the field edge. The greatest source of variability in estimated mortality was species sensitivity. Before allowance for effects of large-scale exposure, with moderate within-crop host-plant density and with no mitigation, estimated mortality locally was <10% for species of average sensitivity. For the worst-case extreme sensitivity considered, estimated mortality locally was 99·6% with no mitigation, although this estimate was reduced to below 40% with mitigation of 24-m-wide strips of non-Bt maize. For highly sensitive species, a 12-m-wide strip reduced estimated local mortality under 1·5%, when within-crop host-plant density was zero. Allowance for large-scale exposure effects would reduce these estimates of local mortality by a highly variable amount, but typically of the order of 50-fold. Mitigation efficacy depended critically on assumed within-crop host-plant density; if this could be assumed negligible, then the estimated effect of mitigation would reduce local mortality below 1% even for very highly sensitive species. Synthesis and applications. Mitigation measures of risks of Bt-maize to sensitive larvae of non-target lepidopteran species can be effective, but depend on host-plant densities which are in turn affected by weed-management regimes. We discuss the relevance for management of maize events where cry1F is combined (stacked) with a herbicide-tolerance trait. This exemplifies how interactions between biota may occur when different traits are stacked irrespective of interactions between the proteins themselves and highlights the importance of accounting for crop management in the assessment of the ecological impact of GM plants. PMID:22496596

  14. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of the object.
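
    A schematic Metropolis step for this kind of polygon-based inversion is sketched below. The forward model is a crude stand-in (the 2D body is replaced by an infinite horizontal line mass at its centroid); a real implementation would use a Talwani-type polygon formula, and the prior checks on vertex count, perimeter²/area bound and the polygon container are only noted in a comment.

      # Schematic Metropolis sketch for a polygon-based 2D gravity inversion.
      import numpy as np

      G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
      rng = np.random.default_rng(2)

      def polygon_area_centroid(v):
          x, z = v[:, 0], v[:, 1]          # z measured positive downward
          xs, zs = np.roll(x, -1), np.roll(z, -1)
          cross = x * zs - xs * z
          area = 0.5 * np.sum(cross)
          cx = np.sum((x + xs) * cross) / (6.0 * area)
          cz = np.sum((z + zs) * cross) / (6.0 * area)
          return abs(area), cx, cz

      def forward_gravity(v, stations, drho):
          """Crude stand-in: g_z of an infinite line mass at the polygon centroid."""
          area, cx, cz = polygon_area_centroid(v)
          dx = stations - cx
          return 2.0 * G * drho * area * cz / (dx ** 2 + cz ** 2)

      def log_likelihood(v, stations, data, drho, noise_sd):
          resid = data - forward_gravity(v, stations, drho)
          return -0.5 * np.sum((resid / noise_sd) ** 2)

      def metropolis_step(v, stations, data, drho, noise_sd, step=2.0):
          prop = v + step * rng.standard_normal(v.shape)
          # A full implementation would also reject proposals violating the priors
          # (vertex count, perimeter^2/area bound, polygon container).
          dlogp = (log_likelihood(prop, stations, data, drho, noise_sd)
                   - log_likelihood(v, stations, data, drho, noise_sd))
          return prop if np.log(rng.random()) < dlogp else v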

  15. Site-specific to local-scale shallow landslides triggering zones assessment using TRIGRS

    NASA Astrophysics Data System (ADS)

    Bordoni, M.; Meisina, C.; Valentino, R.; Bittelli, M.; Chersich, S.

    2015-05-01

    Rainfall-induced shallow landslides are common phenomena in many parts of the world, affecting cultivation and infrastructure and sometimes causing human losses. Assessing the triggering zones of shallow landslides is fundamental for land planning at different scales. This work defines a reliable methodology to extend a slope stability analysis from the site-specific to local scale by using a well-established physically based model (TRIGRS-unsaturated). The model is initially applied to a sample slope and then to the surrounding 13.4 km2 area in Oltrepo Pavese (northern Italy). To obtain more reliable input data for the model, long-term hydro-meteorological monitoring has been carried out at the sample slope, which has been assumed to be representative of the study area. Field measurements identified the triggering mechanism of shallow failures and were used to verify the reliability of the model to obtain pore water pressure trends consistent with those measured during the monitoring activity. In this way, more reliable trends have been modelled for past landslide events, such as the April 2009 event that was assumed as a benchmark. The assessment of shallow landslide triggering zones obtained using TRIGRS-unsaturated for the benchmark event appears good for both the monitored slope and the whole study area, with better results when a pedological instead of geological zoning is considered at the regional scale. The sensitivity analyses of the influence of the soil input data show that the mean values of the soil properties give the best results in terms of the ratio between the true positive and false positive rates. The scheme followed in this work allows us to obtain better results in the assessment of shallow landslide triggering areas in terms of the reduction in the overestimation of unstable zones with respect to other distributed models applied in the past.

  16. Carryover effects from natal habitat type upon competitive ability lead to trait divergence or source-sink dynamics.

    PubMed

    Kristensen, Nadiah Pardede; Johansson, Jacob; Chisholm, Ryan A; Smith, Henrik G; Kokko, Hanna

    2018-06-25

Local adaptation to rare habitats is difficult due to gene flow, but can occur if the habitat has higher productivity. Differences in offspring phenotypes have attracted little attention in this context. We model a scenario where the rarer habitat improves offspring's later competitive ability - a carryover effect that operates on top of local adaptation to one or the other habitat type. Assuming localised dispersal, so the offspring tend to settle in similar habitat to the natal type, the superior competitive ability of offspring remaining in the rarer habitat hampers immigration from the majority habitat. This initiates a positive feedback between local adaptation and trait divergence, which can thereafter be reinforced by coevolution with dispersal traits that match ecotype to habitat type. Rarity strengthens selection on dispersal traits and promotes linkage disequilibrium between locally adapted traits and ecotype-habitat matching dispersal. We propose that carryover effects may initiate isolation by ecology.

  17. The two-pore channel TPC1 is required for efficient protein processing through early and recycling endosomes.

    PubMed

    Castonguay, Jan; Orth, Joachim H C; Müller, Thomas; Sleman, Faten; Grimm, Christian; Wahl-Schott, Christian; Biel, Martin; Mallmann, Robert Theodor; Bildl, Wolfgang; Schulte, Uwe; Klugbauer, Norbert

    2017-08-30

Two-pore channels (TPCs) are localized in endo-lysosomal compartments and assumed to play an important role in vesicular fusion and endosomal trafficking. Recently, it has been shown that both TPC1 and TPC2 are required for host cell entry and pathogenicity of Ebola viruses. Here, we investigate the cellular function of TPC1 using protein toxins as model substrates for distinct endosomal processing routes. Toxin uptake and activation through early endosomes, but not processing through other compartments, were reduced in TPC1 knockout cells. Detailed co-localization studies with subcellular markers confirmed predominant localization of TPC1 to early and recycling endosomes. Proteomic analysis of native TPC1 channels finally identified direct interaction with a distinct set of syntaxins involved in fusion of intracellular vesicles. Together, our results demonstrate a general role of TPC1 for uptake and processing of proteins in early and recycling endosomes, likely by providing the high local Ca²⁺ concentrations required for SNARE-mediated vesicle fusion.

  18. Wide variation of prostate-specific antigen doubling time of untreated, clinically localized, low-to-intermediate grade, prostate carcinoma.

    PubMed

    Choo, Richard; Klotz, Laurence; Deboer, Gerrit; Danjoux, Cyril; Morton, Gerard C

    2004-08-01

    To assess the prostate specific antigen (PSA) doubling time of untreated, clinically localized, low-to-intermediate grade prostate carcinoma. A prospective single-arm cohort study has been in progress since November 1995 to assess the feasibility of a watchful-observation protocol with selective delayed intervention for clinically localized, low-to-intermediate grade prostate adenocarcinoma. The PSA doubling time was estimated from a linear regression of ln(PSA) against time, assuming a simple exponential growth model. As of March 2003, 231 patients had at least 6 months of follow-up (median 45) and at least three PSA measurements (median 8, range 3-21). The distribution of the doubling time was: < 2 years, 26 patients; 2-5 years, 65; 5-10 years, 42; 10-20 years, 26; 20-50 years, 16; >50 years, 56. The median doubling time was 7.0 years; 42% of men had a doubling time of >10 years. The doubling time of untreated clinically localized, low-to-intermediate grade prostate cancer varies widely.
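
    The stated estimation procedure, a linear regression of ln(PSA) against time under an exponential growth model, reduces to doubling time = ln(2)/slope. The sketch below uses hypothetical PSA values purely for illustration.

      # Sketch of the stated estimation: fit ln(PSA) linearly against time under an
      # exponential growth model; doubling time = ln(2) / slope. Values are hypothetical.
      import numpy as np

      t_years = np.array([0.0, 0.5, 1.1, 1.6, 2.3, 3.0])   # time of each PSA draw (years)
      psa = np.array([4.1, 4.3, 4.8, 5.0, 5.6, 6.1])        # PSA in ng/mL (hypothetical)

      slope, intercept = np.polyfit(t_years, np.log(psa), 1)   # ln(PSA) = slope*t + intercept
      doubling_time = np.log(2) / slope                         # in years
      print(f"PSA doubling time: {doubling_time:.1f} years")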

  19. Many-body localization transition: Schmidt gap, entanglement length, and scaling

    NASA Astrophysics Data System (ADS)

    Gray, Johnnie; Bose, Sougato; Bayat, Abolfazl

    2018-05-01

Many-body localization has become an important phenomenon for illuminating a potential rift between nonequilibrium quantum systems and statistical mechanics. However, the nature of the transition between ergodic and localized phases in models displaying many-body localization is not yet well understood. Assuming that this is a continuous transition, analytic results show that the length scale should diverge with a critical exponent ν ≥ 2 in one-dimensional systems. Interestingly, this is in stark contrast with all exact numerical studies which find ν ~ 1. We introduce the Schmidt gap, new in this context, which scales near the transition with an exponent ν > 2 compatible with the analytical bound. We attribute this to an insensitivity to certain finite-size fluctuations, which remain significant in other quantities at the sizes accessible to exact numerical methods. Additionally, we find that a physical manifestation of the diverging length scale is apparent in the entanglement length computed using the logarithmic negativity between disjoint blocks.

  20. The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

    PubMed Central

    Gow, David W.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing. PMID:22498237

  1. Decentralized Grid Scheduling with Evolutionary Fuzzy Systems

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address the problem of finding workload exchange policies for decentralized Computational Grids using an Evolutionary Fuzzy System. To this end, we establish a non-invasive collaboration model on the Grid layer which requires minimal information about the participating High Performance and High Throughput Computing (HPC/HTC) centers and which leaves the local resource managers completely untouched. In this environment of fully autonomous sites, independent users are assumed to submit their jobs to the Grid middleware layer of their local site, which in turn decides on the delegation and execution either on the local system or on remote sites in a situation-dependent, adaptive way. We find for different scenarios that the exchange policies show good performance characteristics not only with respect to traditional metrics such as average weighted response time and utilization, but also in terms of robustness and stability in changing environments.

  2. A multistate dynamic site occupancy model for spatially aggregated sessile communities

    USGS Publications Warehouse

    Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi

    2017-01-01

Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.

  3. Out of the net: An agent-based model to study human movements influence on local-scale malaria transmission.

    PubMed

    Pizzitutti, Francesco; Pan, William; Feingold, Beth; Zaitchik, Ben; Álvarez, Carlos A; Mena, Carlos F

    2018-01-01

Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local-scale human movements on local-scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that the overall malaria incidence is essentially not influenced by local-scale human movements. In contrast, the locations of malaria high-risk spatial hotspots depend heavily on human movements, because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local-scale malaria incidence by targeting hotspots. The first control scenario consists of treating against mosquito bites all people who, during the simulation, enter at least once a hotspot identified from the actual sites where individuals were infected. The second scenario involves treating people entering hotspots calculated under the assumption that the infection site of every infected individual is the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.

  4. Effect of wave localization on plasma instabilities. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Levedahl, William Kirk

    1987-01-01

The Anderson model of wave localization in random media is invoked to study the effect of solar wind density turbulence on plasma processes associated with the solar type III radio burst. ISEE-3 satellite data indicate that a possible model for the type III process is the parametric decay of Langmuir waves excited by solar flare electron streams into daughter electromagnetic and ion acoustic waves. The threshold for this instability, however, is much higher than observed Langmuir wave levels because of rapid wave convection of the transverse electromagnetic daughter wave in the case where the solar wind is assumed homogeneous. Langmuir and transverse waves near critical density satisfy the Ioffe-Regel criteria for wave localization in the solar wind with observed density fluctuations of ~1 percent. Numerical simulations of wave propagation in random media confirm the localization length predictions of Escande and Souillard for stationary density fluctuations. For mobile density fluctuations, localized wave packets spread at the propagation velocity of the density fluctuations rather than the group velocity of the waves. Computer simulations using a linearized hybrid code show that an electron beam will excite localized Langmuir waves in a plasma with density turbulence. An action principle approach is used to develop a theory of nonlinear wave processes when waves are localized. A theory of resonant particle diffusion by localized waves is developed to explain the saturation of the beam-plasma instability. It is argued that localization of electromagnetic waves will allow the instability threshold to be exceeded for the parametric decay discussed above.

  5. An integral wall model for Large Eddy Simulation (iWMLES) and applications to developing boundary layers over smooth and rough plates

    NASA Astrophysics Data System (ADS)

    Yang, Xiang; Sadique, Jasim; Mittal, Rajat; Meneveau, Charles

    2014-11-01

    A new wall model for Large-Eddy-Simulations is proposed. It is based on an integral boundary layer method that assumes a functional form for the local mean velocity profile. The method, iWMLES, evaluates required unsteady and advective terms in the vertically integrated boundary layer equations analytically. The assumed profile contains a viscous or roughness sublayer, and a logarithmic layer with an additional linear term accounting for inertial and pressure gradient effects. The iWMLES method is tested in the context of a finite difference LES code. Test cases include developing turbulent boundary layers on a smooth flat plate at various Reynolds numbers, over flat plates with unresolved roughness, and a sample application to boundary layer flow over a plate that includes resolved roughness elements. The elements are truncated cones acting as idealized barnacle-like roughness elements that often occur in biofouling of marine surfaces. Comparisons with data show that iWMLES provides accurate predictions of near-wall velocity profiles in LES while, similarly to equilibrium wall models, its cost remains independent of Reynolds number and is thus significantly lower compared to standard zonal or hybrid wall models. This work is funded by ONR Grant N00014-12-1-0582 (Dr. R. Joslin, program manager).
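
    Schematically, and only as a hedged illustration rather than the paper's exact parameterization, the assumed profile above the viscous or roughness sublayer can be pictured as

      \frac{U(y)}{u_\tau} \;\approx\; \frac{1}{\kappa}\,\ln\!\left(\frac{y}{y_0}\right) \;+\; B \;+\; A\,\frac{y}{\delta}, \qquad y_0 \lesssim y \le \delta,

    where y_0 is set by the viscous or roughness sublayer, κ is the von Kármán constant, and the linear coefficient A carries the unsteady, inertial and pressure-gradient contributions that the integral method evaluates analytically.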

  6. Modeling missing data in knowledge space theory.

    PubMed

    de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio

    2015-12-01

Missing data are a well-known issue in statistical inference, because some responses may be missing even when data are collected carefully. The problem that arises in these cases is how to deal with the missing data. In this article, missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: the former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: if the missing-data-generating process is missing completely at random, then either IMBLIM or MissBLIM provides an adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data.

  7. Evaluation of the cold weather plan for England: modelling of cost-effectiveness.

    PubMed

    Chalabi, Z; Hajat, S; Wilkinson, P; Erens, B; Jones, L; Mays, N

    2016-08-01

To determine the conditions under which the Cold Weather Plan (CWP) for England is likely to prove cost-effective, in order to inform the development of the CWP in the short term before direct data on costs and benefits can be collected. Mathematical modelling study undertaken in the absence of direct epidemiological evidence on the effect of the CWP in reducing cold-related mortality and morbidity, and with limited data on its costs. The model comprised a simulated temperature time series based on historical data; epidemiologically derived relationships between temperature, and mortality and morbidity; and information on baseline unit costs of contacts with health care and community care services. Cost-effectiveness was assessed assuming varying levels of protection against cold-related burdens, coverage of the vulnerable population, and willingness-to-pay criteria. Simulations showed that the CWP is likely to be cost-effective under some scenarios at the high end of the willingness-to-pay threshold used by the National Institute for Health and Care Excellence (NICE) in England, but these results are sensitive to assumptions about the extent of implementation of the CWP at local level and its assumed effectiveness when implemented. The incremental cost-effectiveness ratio varied from £29,754 to £75,875 per Quality Adjusted Life Year (QALY) gained. Conventional cost-effectiveness (<£30,000/QALY) was reached only when effective targeting of at-risk groups was assumed (i.e. the need for low coverage (∼5%) of the population for targeted actions) and relatively high assumed effectiveness (>15%) in avoiding deaths and hospital admissions. Although the modelling relied on a large number of assumptions, this type of modelling is useful for understanding whether, and in what circumstances, untested plans are likely to be cost-effective before they are implemented and in the early period of implementation before direct data on cost-effectiveness have accrued. Steps can then be taken to optimize the relevant parameters as far as practicable during the early implementation period.
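
    For reference, the incremental cost-effectiveness ratio quoted above is defined as (the numbers in the worked example are hypothetical):

      \mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
      \;=\; \frac{C_{\mathrm{CWP}} - C_{\mathrm{no\,CWP}}}{\mathrm{QALY}_{\mathrm{CWP}} - \mathrm{QALY}_{\mathrm{no\,CWP}}},

    so, for example, a hypothetical incremental cost of £3.0 million yielding 100 additional QALYs gives £30,000 per QALY, at the upper end of the conventional NICE threshold.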

  8. The impact of the Fermi-Dirac distribution on charge injection at metal/organic interfaces.

    PubMed

    Wang, Z B; Helander, M G; Greiner, M T; Lu, Z H

    2010-05-07

    The Fermi level has historically been assumed to be the only energy level from which carriers are injected at metal/semiconductor interfaces. In traditional semiconductor device physics, this approximation is reasonable, as the thermal distribution of delocalized states in the semiconductor tends to dominate device characteristics. However, in the case of organic semiconductors the weak intermolecular interactions result in highly localized electronic states, such that the thermal distribution of carriers in the metal may also influence device characteristics. In this work we demonstrate that the Fermi-Dirac distribution of carriers in the metal has a much more significant impact on charge injection at metal/organic interfaces than has previously been assumed. An injection model that includes the effect of the Fermi-Dirac electron distribution is proposed. This model has been tested against experimental data and was found to provide a better physical description of charge injection. This finding indicates that the thermal distribution of electronic states in the metal should, in general, be considered in the study of metal/organic interfaces.
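
    The following sketch only illustrates the general point of the abstract: with Fermi-Dirac statistics a small but non-zero fraction of metal carriers occupies states above an injection barrier, whereas assuming all carriers sit exactly at the Fermi level would give none. The constant density of states, the 0.3 eV barrier and the temperature are assumptions for illustration, not the authors' injection model.

```python
import numpy as np

kB = 8.617e-5        # Boltzmann constant, eV/K
T = 300.0            # temperature, K
E_F = 0.0            # Fermi level taken as the energy reference, eV
barrier = 0.3        # hypothetical injection barrier above E_F, eV

E = np.linspace(-1.0, 1.5, 20001)                 # energy grid, eV
f = 1.0 / (np.exp((E - E_F) / (kB * T)) + 1.0)    # Fermi-Dirac occupancy

# With a constant density of states, the occupied population per unit energy ~ f(E).
above = np.trapz(f[E >= barrier], E[E >= barrier])
total = np.trapz(f, E)        # finite because the grid is bounded below

print(f"fraction of the (grid-bounded) carrier population above the barrier: {above / total:.2e}")
# If every carrier were assumed to sit exactly at E_F, this fraction would be zero
# and thermally assisted injection over the barrier would be missed entirely.
```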

  9. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE PAGES

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    2018-03-27

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  10. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  11. The School Elections: A Critique of the 1969 New York City School Decentralization.

    ERIC Educational Resources Information Center

    Demas, Boulton H.

    When local school board members in New York City assumed office on 31 local school boards in 1969, this should have resulted in more responsive local boards with sufficient power to control local policy; but this was not the actual result. Specific examination of the decentralization bill, the politics of the election, and the election procedures…

  12. Incentives for Optimal Multi-level Allocation of HIV Prevention Resources

    PubMed Central

    Malvankar, Monali M.; Zaric, Gregory S.

    2013-01-01

    HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker, who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
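
    A minimal sketch of the allocation comparison discussed above: each local program averts infections according to a hypothetical concave funding response, and a proportional split of the budget is compared with a brute-force optimized split. The effectiveness values and proportional shares are made up; this is not the authors' incentive model or the California data.

```python
import numpy as np

def averted(x, scale):
    """Hypothetical concave dose-response: infections averted for funding x."""
    return scale * np.sqrt(x)

BUDGET = 1_000_000.0
scales = np.array([0.30, 0.10, 0.05])     # program effectiveness (made up)
need_shares = np.array([0.2, 0.3, 0.5])   # shares used by the proportional heuristic

prop_total = sum(averted(x, s) for x, s in zip(BUDGET * need_shares, scales))

# Coarse grid search for the split of the same budget that maximizes infections averted.
best_total, best_shares = -1.0, None
grid = np.linspace(0.0, 1.0, 101)
for a in grid:
    for b in grid:
        if a + b > 1.0:
            continue
        shares = np.array([a, b, 1.0 - a - b])
        total = sum(averted(x, s) for x, s in zip(BUDGET * shares, scales))
        if total > best_total:
            best_total, best_shares = total, shares

print(f"proportional allocation averts {prop_total:7.1f} infections (hypothetical units)")
print(f"optimized allocation averts    {best_total:7.1f} infections")
print("optimized budget shares:", np.round(best_shares, 2))
```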

  13. An integral turbulent kinetic energy analysis of free shear flows

    NASA Technical Reports Server (NTRS)

    Peters, C. E.; Phares, W. J.

    1973-01-01

    Mixing of coaxial streams is analyzed by application of integral techniques. An integrated turbulent kinetic energy (TKE) equation is solved simultaneously with the integral equations for the mean flow. Normalized TKE profile shapes are obtained from incompressible jet and shear layer experiments and are assumed to be applicable to all free turbulent flows. The shear stress at the midpoint of the mixing zone is assumed to be directly proportional to the local TKE, and dissipation is treated with a generalization of the model developed for isotropic turbulence. Although the analysis was developed for ducted flows, constant-pressure flows were approximated with the duct much larger than the jet. The axisymmetric flows under consideration were predicted with reasonable accuracy. Fairly good results were also obtained for the fully developed two-dimensional shear layers, which were computed as thin layers at the boundary of a large circular jet.

  14. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  15. Analysis of economics of a TV broadcasting satellite for additional nationwide TV programs

    NASA Technical Reports Server (NTRS)

    Becker, D.; Mertens, G.; Rappold, A.; Seith, W.

    1977-01-01

    The influence of a TV broadcasting satellite transmitting four additional TV networks was analyzed. It is assumed that the cost of the satellite systems will be financed by the cable TV system operators. The additional TV programs increase income by attracting additional subscribers. Two economic models were established: (1) each local network is regarded as an independent economic unit with individual fees (cost price model), and (2) all networks are part of one public cable TV company with uniform fees (uniform price model). Assumptions are made for penetration as a function of subscription rates. The main results of the study are: the installation of a TV broadcasting satellite improves the economics of CTV networks in both models, and the overall coverage achievable by the uniform price model is significantly higher than that achievable by the cost price model.

  16. Hydrodynamic models for slurry bubble column reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gidaspow, D.

    1995-12-31

    The objective of this investigation is to convert a "learning gas-solid-liquid" fluidization model into a predictive design model. This model is capable of predicting local gas, liquid and solids hold-ups and the basic flow regimes: the uniform bubbling, the industrially practical churn-turbulent (bubble coalescence) and the slugging regimes. Current reactor models incorrectly assume that the gas and the particle hold-ups (volume fractions) are uniform in the reactor. They must be given in terms of empirical correlations determined under conditions that radically differ from reactor operation. In the proposed hydrodynamic approach these hold-ups are computed from separate phase momentum balances. Furthermore, the kinetic theory approach computes the high slurry viscosities from collisions of the catalyst particles. Thus particle rheology is not an input into the model.

  17. The stability and slow dynamics of spot patterns in the 2D Brusselator model: The effect of open systems and heterogeneities

    NASA Astrophysics Data System (ADS)

    Tzou, J. C.; Ward, M. J.

    2018-06-01

    Spot patterns, whereby the activator field becomes spatially localized near certain dynamically evolving discrete spatial locations in a bounded multi-dimensional domain, are a common occurrence for two-component reaction-diffusion (RD) systems in the singular limit of a large diffusivity ratio. In previous studies of 2-D localized spot patterns for various specific well-known RD systems, the domain boundary was assumed to be impermeable to both the activator and inhibitor, and the reaction kinetics were assumed to be spatially uniform. As an extension of this previous theory, we use formal asymptotic methods to study the existence, stability, and slow dynamics of localized spot patterns for the singularly perturbed 2-D Brusselator RD model when the domain boundary is only partially impermeable, as modeled by an inhomogeneous Robin boundary condition, or when there is an influx of inhibitor across the domain boundary. In our analysis, we will also allow for the effect of a spatially variable bulk feed term in the reaction kinetics. By applying our extended theory to the special case of one-spot patterns and ring patterns of spots inside the unit disk, we provide a detailed analysis of the effect on spot patterns of these three different sources of heterogeneity. In particular, when there is an influx of inhibitor across the boundary of the unit disk, a ring pattern of spots can become pinned to a ring radius closer to the domain boundary. Under a Robin condition, a quasi-equilibrium ring pattern of spots is shown to exhibit a novel saddle-node bifurcation behavior in terms of either the inhibitor diffusivity, the Robin constant, or the ambient background concentration. A spatially variable bulk feed term, with a concentrated source of "fuel" inside the domain, is shown to yield a saddle-node bifurcation structure of spot equilibria, which leads to qualitatively new spot-pinning behavior. Results from our asymptotic theory are validated by full numerical simulations of the Brusselator model.

  18. Semiautomatic approaches to account for 3-D distortion of the electric field from local, near-surface structures in 3-D resistivity inversions of 3-D regional magnetotelluric data

    USGS Publications Warehouse

    Rodriguez, Brian D.

    2017-03-31

    This report summarizes the results of three-dimensional (3-D) resistivity inversion simulations that were performed to account for local 3-D distortion of the electric field in the presence of 3-D regional structure, without any a priori information on the actual 3-D distribution of the known subsurface geology. The methodology used a 3-D geologic model to create a 3-D resistivity forward (“known”) model that depicted the subsurface resistivity structure expected for the input geologic configuration. The calculated magnetotelluric response of the modeled resistivity structure was assumed to represent observed magnetotelluric data and was subsequently used as input into a 3-D resistivity inverse model that used an iterative 3-D algorithm to estimate 3-D distortions without any a priori geologic information. A publicly available inversion code, WSINV3DMT, was used for all of the simulated inversions, initially using the default parameters, and subsequently using adjusted inversion parameters. A semiautomatic approach of accounting for the static shift using various selections of the highest frequencies and initial models was also tested. The resulting 3-D resistivity inversion simulation was compared to the “known” model and the results evaluated. The inversion approach that produced the lowest misfit to the various local 3-D distortions was an inversion that employed an initial model volume resistivity that was nearest to the maximum resistivities in the near-surface layer.

  19. New constraints on all flavor Galactic diffuse neutrino emission with the ANTARES telescope

    NASA Astrophysics Data System (ADS)

    Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Martí, J.; Basa, S.; Belhorma, B.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M. C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Cherkaoui El Moursli, R.; Chiarusi, T.; Circella, M.; Coelho, J. A. B.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Díaz, A. F.; Deschamps, A.; de Bonis, G.; Distefano, C.; di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; El Khayati, N.; Elsässer, D.; Enzenhöfer, A.; Ettahiri, A.; Fassi, F.; Felis, I.; Fusco, L. A.; Galatà, S.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C. W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefèvre, D.; Leonora, E.; Lotze, M.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martínez-Mora, J. A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Navas, S.; Nezri, E.; Organokov, M.; Pǎvǎlaş, G. E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sánchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D. F. E.; Sanguineti, M.; Sapienza, P.; Schüssler, F.; Sieger, C.; Spurio, M.; Stolarczyk, Th.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzoca, A.; Wilms, J.; Zornoza, J. D.; Zúñiga, J.; Gaggero, D.; Grasso, D.; ANTARES Collaboration

    2017-09-01

    The flux of very high-energy neutrinos produced in our Galaxy by the interaction of accelerated cosmic rays with the interstellar medium is not yet determined. The characterization of this flux will shed light on Galactic accelerator features, gas distribution morphology and Galactic cosmic ray transport. The central Galactic plane can be the site of an enhanced neutrino production, thus leading to anisotropies in the extraterrestrial neutrino signal as measured by the IceCube Collaboration. The ANTARES neutrino telescope, located in the Mediterranean Sea, offers a favorable view of this part of the sky, thereby allowing for a contribution to the determination of this flux. The expected diffuse Galactic neutrino emission can be obtained, linking a model of generation and propagation of cosmic rays with the morphology of the gas distribution in the Milky Way. In this paper, the so-called "gamma model" introduced recently to explain the high-energy gamma-ray diffuse Galactic emission is assumed as reference. The neutrino flux predicted by the "gamma model" depends on the assumed primary cosmic ray spectrum cutoff. Considering a radially dependent diffusion coefficient, this proposed scenario is able to account for the local cosmic ray measurements, as well as for the Galactic gamma-ray observations. Nine years of ANTARES data are used in this work to search for a possible Galactic contribution according to this scenario. All flavor neutrino interactions are considered. No excess of events is observed, and an upper limit is set on the neutrino flux of 1.1 (1.2) times the prediction of the "gamma model," assuming the primary cosmic ray spectrum cutoff at 5 (50) PeV. This limit excludes the diffuse Galactic neutrino emission as the major cause of the "spectral anomaly" between the two hemispheres measured by IceCube.

  20. The Local Food Environment and Fruit and Vegetable Intake: A Geographically Weighted Regression Approach in the ORiEL Study.

    PubMed

    Clary, Christelle; Lewis, Daniel J; Flint, Ellen; Smith, Neil R; Kestens, Yan; Cummins, Steven

    2016-12-01

    Studies that explore associations between the local food environment and diet routinely use global regression models, which assume that relationships are invariant across space, yet such stationarity assumptions have been little tested. We used global and geographically weighted regression models to explore associations between the residential food environment and fruit and vegetable intake. Analyses were performed in 4 boroughs of London, United Kingdom, using data collected between April 2012 and July 2012 from 969 adults in the Olympic Regeneration in East London Study. Exposures were assessed both as absolute densities of healthy and unhealthy outlets, taken separately, and as a relative measure (the proportion of total outlets classified as healthy). Overall, local models performed better than global models (lower Akaike information criterion). Locally estimated coefficients varied across space, regardless of the type of exposure measure, although changes of sign were observed only when absolute measures were used. Although global models showed significant associations with fruit and vegetable intake only for the relative measure (β = 0.022; P < 0.01), geographically weighted regression models using absolute measures outperformed models using relative measures. This study suggests that greater attention should be given to nonstationary relationships between the food environment and diet. It further challenges the idea that a single measure of exposure, whether relative or absolute, can reflect the many ways the food environment may shape health behaviors. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
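
    The following is a minimal geographically weighted regression sketch on synthetic data, illustrating how a Gaussian distance kernel lets the estimated exposure coefficient vary over space. The bandwidth, data, and variable names are assumptions for illustration, not the ORiEL exposures or model specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0.0, 10.0, size=(n, 2))       # observation locations
exposure = rng.normal(size=n)                      # e.g. a standardised outlet-density measure
beta_true = 0.5 * coords[:, 0] / 10.0              # effect strengthens from west to east
outcome = 1.0 + beta_true * exposure + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), exposure])        # intercept + exposure

def gwr_coeffs(target_xy, bandwidth=1.5):
    """Weighted least squares centred on target_xy with a Gaussian distance kernel."""
    d2 = np.sum((coords - target_xy) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ outcome)  # (local intercept, local slope)

west = gwr_coeffs(np.array([1.0, 5.0]))
east = gwr_coeffs(np.array([9.0, 5.0]))
print(f"local exposure effect in the west: {west[1]:+.2f}")
print(f"local exposure effect in the east: {east[1]:+.2f}")   # larger: non-stationarity recovered
```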

  1. The Density of the North Polar Layered Deposit from Gravity and Topography

    NASA Astrophysics Data System (ADS)

    Ojha, L.; Lewis, K. W.

    2017-12-01

    The North Polar Layered Deposit (NPLD) of Mars is a vast reservoir of water ice with a volume of 1.14 million km3. Radar data indicate that the ice in the NPLD is extremely pure, with a dust content between 5% and 10%; however, the radar data have not placed a direct constraint on the density of the NPLD. Here, we localize the gravity and topography signature of the NPLD and place a direct constraint on its density. We performed a grid search by generating an admittance spectrum at each latitude and longitude from 75° N to 90° N and from 0° E to 360° E, using a spherical cap of angular radius of 7° and a harmonic bandwidth of the localization window Lwin of 37°. A region between Gemina Lingula and Planum Boreum was found to possess an adequate correlation between gravity and topography. The estimated admittance spectra were compared with synthetic admittance spectra to constrain the load density and the elastic thickness of the lithosphere. We constructed forward models by assuming that the lithosphere is a thin shell that deforms elastically in response to surface loads. We find that the bulk density of the NPLD ranges between 1000 and 1100 kg m-3. Assuming a grain density of 3000 kg m-3 for dust, the NPLD region within our localized window can contain a dust content between 3% and 8%, which is in excellent agreement with the radar data.

  2. Local pH domains regulate NHE3-mediated Na+ reabsorption in the renal proximal tubule

    PubMed Central

    Burford, James L.; McDonough, Alicia A.; Holstein-Rathlou, Niels-Henrik; Peti-Peterdi, Janos

    2014-01-01

    The proximal tubule Na+/H+ exchanger 3 (NHE3), located in the apical dense microvilli (brush border), plays a major role in the reabsorption of NaCl and water in the renal proximal tubule. In response to a rise in blood pressure NHE3 redistributes in the plane of the plasma membrane to the base of the brush border, where NHE3 activity is reduced. This NHE3 redistribution is assumed to provoke pressure natriuresis; however, it is unclear how NHE3 redistribution per se reduces NHE3 activity. To investigate if the distribution of NHE3 in the brush border can change the reabsorption rate, we constructed a spatiotemporal mathematical model of NHE3-mediated Na+ reabsorption across a proximal tubule cell and compared the model results with in vivo experiments in rats. The model predicts that when NHE3 is localized exclusively at the base of the brush border, it creates local pH microdomains that reduce NHE3 activity by >30%. We tested the model's prediction experimentally: the rat kidney cortex was loaded with the pH-sensitive fluorescent dye BCECF, and cells of the proximal tubule were imaged in vivo using confocal fluorescence microscopy before and after an increase of blood pressure by ∼50 mmHg. The experimental results supported the model by demonstrating that a rise of blood pressure induces the development of pH microdomains near the bottom of the brush border. These local changes in pH reduce NHE3 activity, which may explain the pressure natriuresis response to NHE3 redistribution. PMID:25298526

  3. Unpolarized emissivity with shadow and multiple reflections from random rough surfaces with the geometric optics approximation: application to Gaussian sea surfaces in the infrared band.

    PubMed

    Bourlier, Christophe

    2006-08-20

    The emissivity from a stationary random rough surface is derived by taking into account the multiple reflections and the shadowing effect. The model is applied to the ocean surface. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets reflecting locally the light in the specular direction. In particular, the emissivity with zero, single, and double reflections are analytically calculated, and each contribution is studied numerically by considering a 1D sea surface observed in the near infrared band. The model is also compared with results computed from a Monte Carlo ray-tracing method.

  4. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This relates the implied volatility smile nicely to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
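
    As a baseline for the kind of pricing problem mentioned above, the sketch below runs a crude Euler-Maruyama Monte Carlo for a vanilla call under CEV forward dynamics dF = sigma F^beta dW. The parameters are hypothetical, and the asymptotic expansions described in the abstract are intended as a faster alternative to exactly this sort of brute-force simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
F0, K, T = 100.0, 105.0, 1.0        # forward, strike, maturity (illustrative)
sigma, beta = 1.0, 0.7              # CEV volatility and elasticity (illustrative)
n_paths, n_steps = 100_000, 200
dt = T / n_steps

F = np.full(n_paths, F0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    F = np.maximum(F + sigma * F ** beta * dW, 0.0)   # absorbing boundary at zero

payoff = np.maximum(F - K, 0.0)
price = payoff.mean()                                  # undiscounted (forward) value
stderr = payoff.std() / np.sqrt(n_paths)
print(f"CEV call (forward value): {price:.3f} +/- {2 * stderr:.3f}")
```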

  5. Learning from communities: overcoming difficulties in dissemination of prevention and promotion efforts.

    PubMed

    Miller, Robin L; Shinn, Marybeth

    2005-06-01

    The model of prevention science advocated by the Institute of Medicine (P. J. Mrazek & R. J. Haggerty, 1994) has not led to widespread adoption of prevention and promotion programs for four reasons. The model of dissemination of programs to communities fails to consider community and organizational capacity to implement programs, ignores the need for congruence in values between programs and host sites, displays a pro-innovation bias that undervalues indigenous practices, and assumes a simplistic model of how community organizations adopt innovations. To address these faults, researchers should locate, study, and help disseminate successful indigenous programs that fit community capacity and values. In addition, they should build on theoretical models of how locally developed programs work to make existing programs and policies more effective.

  6. Global and Local Approaches to Children's Rights in Vietnam.

    ERIC Educational Resources Information Center

    Burr, Rachel

    2002-01-01

    Examines influence of the United Nations Convention on the Rights of the Child (CRC) on local attitudes toward childhood and children's lived experiences in Vietnam. Examines attitudes of aid agency members working with those children, and suggests that members of organizations that uphold the CRC often assume that local people share their notion…

  7. High-energy neutrino fluxes from AGN populations inferred from X-ray surveys

    NASA Astrophysics Data System (ADS)

    Jacobsen, Idunn B.; Wu, Kinwah; On, Alvina Y. L.; Saxton, Curtis J.

    2015-08-01

    High-energy neutrinos and photons are complementary messengers, probing violent astrophysical processes and structural evolution of the Universe. X-ray and neutrino observations jointly constrain conditions in active galactic nuclei (AGN) jets: their baryonic and leptonic contents, and particle production efficiency. Testing two standard neutrino production models for local source Cen A (Koers & Tinyakov and Becker & Biermann), we calculate the high-energy neutrino spectra of single AGN sources and derive the flux of high-energy neutrinos expected for the current epoch. Assuming that accretion determines both X-rays and particle creation, our parametric scaling relations predict neutrino yield in various AGN classes. We derive redshift-dependent number densities of each class, from Chandra and Swift/BAT X-ray luminosity functions (Silverman et al. and Ajello et al.). We integrate the neutrino spectrum expected from the cumulative history of AGN (correcting for cosmological and source effects, e.g. jet orientation and beaming). Both emission scenarios yield neutrino fluxes well above limits set by IceCube (by ∼4-10^6× at 1 PeV, depending on the assumed jet models for neutrino production). This implies that: (i) Cen A might not be a typical neutrino source as commonly assumed; (ii) both neutrino production models overestimate the efficiency; (iii) neutrino luminosity scales with accretion power differently among AGN classes and hence does not follow X-ray luminosity universally; (iv) some AGN are neutrino-quiet (e.g. below a power threshold for neutrino production); (v) neutrino and X-ray emission have different duty cycles (e.g. jets alternate between baryonic and leptonic flows); or (vi) some combination of the above.

  8. Trace Elements in Basalts From the Siqueiros Fracture Zone: Implications for Melt Migration Models

    NASA Astrophysics Data System (ADS)

    Pickle, R. C.; Forsyth, D. W.; Saal, A. E.; Nagle, A. N.; Perfit, M. R.

    2008-12-01

    Incompatible trace element (ITE) ratios in MORB from a variety of locations may provide insights into the melt migration process by constraining aggregated melt compositions predicted by mantle melting and flow models. By using actual plate geometries to create a 3-D thermodynamic mantle model, melt volumes and compositions at all depths and locations may be calculated and binned into cubes using the pHMELTS algorithm [Asimow et al., 2004]. These melts can be traced from each cube to the surface assuming several migration models, including a simplified pressure gradient model and one in which melt is guided upwards by a low permeability compacted layer. The ITE ratios of all melts arriving at the surface are summed, averaged, and compared to those of the actual sample compositions from the various MOR locales. The Siqueiros fracture zone at 8° 20' N on the East Pacific Rise (EPR) comprises 4 intra-transform spreading centers (ITSCs) across 140 km of offset between two longer spreading ridges, and is an excellent study region for several reasons. First, an abundance of MORB data is readily available, and the samples retrieved from ITSCs are unlikely to be aggregated in a long-lived magma chamber or affected by along-axis transport, so they represent melts extracted locally from the mantle. Additionally, samples at Siqueiros span a compositional range from depleted to normal MORB within the fracture zone yet have similar isotopic compositions to samples collected from the 9-10° EPR. This minimizes the effect of assuming a uniform source composition in our melting model despite a heterogeneous mantle, allowing us to consistently compare the actual lava composition with that predicted by our model. Finally, it has been demonstrated with preliminary migration models that incipient melts generated directly below an ITSC may not necessarily erupt at that ITSC but migrate laterally towards a nearby ridge due to enhanced pressure gradients. The close proximity of the ITSCs at Siqueiros to the large ridges bounding the fracture zone provide a good opportunity to model this phenomenon and may help explain the variable ITE ratios found between samples collected within the transform and those near the ridges.

  9. Calibration of a Spatial-Temporal Discrimination Model from Forward, Simultaneous, and Backward Masking

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J.; Beard, B. L.; Stone, Leland (Technical Monitor)

    1997-01-01

    We have been developing a simplified spatial-temporal discrimination model similar to our simplified spatial model in that masking is assumed to be a function of the local visible contrast energy. The overall spatial-temporal sensitivity of the model is calibrated to predict the detectability of targets on a uniform background. To calibrate the spatial-temporal integration functions that define local visible contrast energy, spatial-temporal masking data are required. Observer thresholds were measured (2IFC) for the detection of a 12 msec target stimulus in the presence of a 700 msec mask. Targets were 1, 3 or 9 c/deg sine wave gratings. Masks were either one of these gratings or two of them combined. The target was presented in 17 temporal positions with respect to the mask, including positions before, during and after the mask. Peak masking was found near mask onset and offset for 1 and 3 c/deg targets, while masking effects were more nearly uniform during the mask for the 9 c/deg target. As in the purely spatial case, the simplified model cannot predict all the details of masking as a function of masking component spatial frequencies, but overall the prediction errors are small.

  10. Predicting protein complexes using a supervised learning method combined with local structural information.

    PubMed

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  11. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
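
    The sketch below illustrates the comparison described above on a toy calcium-regulated channel: an exact Gillespie simulation of a small subspace versus the mean-field ODE that ignores concentration fluctuations. The reaction scheme and rate constants are hypothetical, chosen only so that the subspace holds a handful of ions; it is not the authors' Markov model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical rate constants (all made up for illustration).
k_open0, k_ca = 0.5, 2.0   # channel opening rate: k_open0 + k_ca * n      (1/s)
k_close = 20.0             # channel closing rate                          (1/s)
influx = 50.0              # Ca entry rate while the channel is open       (ions/s)
k_out = 10.0               # per-ion removal rate                          (1/s)
T_END = 100.0

def gillespie_mean_n(n_runs=100):
    """Time-averaged ion count from exact stochastic simulation (burn-in discarded)."""
    run_means = []
    for _ in range(n_runs):
        t, n, is_open = 0.0, 0, 0
        acc, acc_t = 0.0, 0.0
        while t < T_END:
            rates = [(k_open0 + k_ca * n) * (1 - is_open),  # channel opens
                     k_close * is_open,                      # channel closes
                     influx * is_open,                       # one ion enters
                     k_out * n]                              # one ion removed
            total = sum(rates)
            dt = rng.exponential(1.0 / total)
            w = max(0.0, min(t + dt, T_END) - max(t, T_END / 2))  # weight after burn-in
            acc += n * w
            acc_t += w
            t += dt
            u = rng.uniform(0.0, total)
            if u < rates[0]:
                is_open = 1
            elif u < rates[0] + rates[1]:
                is_open = 0
            elif u < rates[0] + rates[1] + rates[2]:
                n += 1
            else:
                n -= 1
        run_means.append(acc / acc_t)
    return float(np.mean(run_means))

def ode_mean_n(dt=1e-3):
    """Mean-field ODE that replaces E[n(1-p)] by n*(1-p)."""
    p, n = 0.0, 0.0
    for _ in range(int(T_END / dt)):
        dp = (k_open0 + k_ca * n) * (1.0 - p) - k_close * p
        dn = influx * p - k_out * n
        p, n = p + dt * dp, n + dt * dn
    return n

print(f"stochastic (Gillespie) mean ion count: {gillespie_mean_n():.2f}")
print(f"deterministic mean-field ion count:    {ode_mean_n():.2f}")
```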

  12. NON-EQUILIBRIUM HELIUM IONIZATION IN AN MHD SIMULATION OF THE SOLAR ATMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit, E-mail: thomas.golding@astro.uio.no, E-mail: mats.carlsson@astro.uio.no, E-mail: jorrit.leenaarts@astro.su.se

    The ionization state of the gas in the dynamic solar chromosphere can depart strongly from the instantaneous statistical equilibrium commonly assumed in numerical modeling. We improve on earlier simulations of the solar atmosphere that only included non-equilibrium hydrogen ionization by performing a 2D radiation-magnetohydrodynamics simulation featuring non-equilibrium ionization of both hydrogen and helium. The simulation includes the effect of hydrogen Lyα and the EUV radiation from the corona on the ionization and heating of the atmosphere. Details on code implementation are given. We obtain helium ion fractions that are far from their equilibrium values. Comparison with models with local thermodynamic equilibrium (LTE) ionization shows that non-equilibrium helium ionization leads to higher temperatures in wavefronts and lower temperatures in the gas between shocks. Assuming LTE ionization results in a thermostat-like behavior with matter accumulating around the temperatures where the LTE ionization fractions change rapidly. Comparison of DEM curves computed from our models shows that non-equilibrium ionization leads to more radiating material in the temperature range 11–18 kK, compared to models with LTE helium ionization. We conclude that non-equilibrium helium ionization is important for the dynamics and thermal structure of the upper chromosphere and transition region. It might also help resolve the problem that intensities of chromospheric lines computed from current models are smaller than those observed.

  13. Detection of entanglement with few local measurements

    NASA Astrophysics Data System (ADS)

    Gühne, O.; Hyllus, P.; Bruß, D.; Ekert, A.; Lewenstein, M.; Macchiavello, C.; Sanpera, A.

    2002-12-01

    We introduce a general method for the experimental detection of entanglement by performing only few local measurements, assuming some prior knowledge of the density matrix. The idea is based on the minimal decomposition of witness operators into a pseudomixture of local operators. We discuss an experimentally relevant case of two qubits, and show an example how bound entanglement can be detected with few local measurements.
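
    A small worked example of the idea, assuming the standard decomposition of the witness W = I/2 - |Phi+><Phi+| into local Pauli correlators (so only the XX, YY and ZZ measurement settings are needed); the Werner-state visibilities are illustrative.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

kron = np.kron
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
proj = np.outer(phi_plus, phi_plus.conj())

# Decomposition of the witness into local operators: only three correlation
# settings (XX, YY, ZZ) plus the identity are required.
W_local = 0.25 * (kron(I2, I2) - kron(X, X) + kron(Y, Y) - kron(Z, Z))
W_direct = 0.5 * np.eye(4) - proj
assert np.allclose(W_local, W_direct)   # the two forms agree

def werner(p):
    """Werner state: p |Phi+><Phi+| + (1-p) * maximally mixed."""
    return p * proj + (1 - p) * np.eye(4) / 4

for p in (0.2, 0.5, 0.9):
    val = np.real(np.trace(W_local @ werner(p)))
    flag = "entangled (witnessed)" if val < 0 else "not detected"
    print(f"p = {p:.1f}: Tr(W rho) = {val:+.3f} -> {flag}")
```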

  14. Steepest entropy ascent quantum thermodynamic model of electron and phonon transport

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; von Spakovsky, Michael R.; Hin, Celine

    2018-01-01

    An advanced nonequilibrium thermodynamic model for electron and phonon transport is formulated based on the steepest-entropy-ascent quantum thermodynamics framework. This framework, based on the principle of steepest entropy ascent (or the equivalent maximum entropy production principle), inherently satisfies the laws of thermodynamics and mechanics and is applicable at all temporal and spatial scales even in the far-from-equilibrium realm. Specifically, the model is proven to recover the Boltzmann transport equations in the near-equilibrium limit and the two-temperature model of electron-phonon coupling when no dispersion is assumed. The heat and mass transport at a temperature discontinuity across a homogeneous interface where the dispersion and coupling of electron and phonon transport are both considered are then modeled. Local nonequilibrium system evolution and nonquasiequilibrium interactions are predicted and the results discussed.

  15. 1.5D quasilinear model and its application on beams interacting with Alfvén eigenmodes in DIII-D

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Gorelenkov, N. N.; Berk, H. L.; Heidbrink, W. W.; Van Zeeland, M. A.

    2012-09-01

    We propose a model, denoted here by 1.5D, to study energetic particle (EP) interaction with toroidal Alfvénic eigenmodes (TAE) in the case where the local EP drive for TAE exceeds the stability limit. Based on quasilinear theory, the proposed 1.5D model assumes that the particles diffuse in phase space, flattening the pressure profile until its gradient reaches a critical value where the modes stabilize. Using local theories and NOVA-K simulations of TAE damping and growth rates, the 1.5D model calculates the critical gradient and reconstructs the relaxed EP pressure profile. Local theory is improved over previous studies by including more sophisticated damping and drive mechanisms, such as the numerical computation of the effect of the EP finite orbit width on the growth rate. The 1.5D model is applied to the well-diagnosed DIII-D discharges #142111 [M. A. Van Zeeland et al., Phys. Plasmas 18, 135001 (2011)] and #127112 [W. W. Heidbrink et al., Nucl. Fusion 48, 084001 (2008)]. We achieved very satisfactory agreement with the experimental results on the EP pressure profile redistribution and measured losses. This agreement of the 1.5D model with experimental results allows the use of this code as a guide for ITER plasma operation, where it is desired to have no more than 5% loss of fusion alpha particles as limited by the design.

  16. The structure of evaporating and combusting sprays: Measurements and predictions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.

    1982-01-01

    An apparatus was constructed to provide measurements in open sprays with no zones of recirculation, in order to provide well-defined conditions for use in evaluating spray models. Measurements were completed in a gas jet, in order to test experimental methods, and are currently in progress for nonevaporating sprays. A locally homogeneous flow (LHF) model, where interphase transport rates are assumed to be infinitely fast; a separated flow (SF) model, which allows for finite interphase transport rates but neglects effects of turbulent fluctuations on drop motion; and a stochastic SF model, which considers effects of turbulent fluctuations on drop motion, were evaluated using existing data on particle-laden jets. The LHF model generally overestimates rates of particle dispersion, while the SF model underestimates dispersion rates. The stochastic SF model yields satisfactory predictions except at high particle mass loadings, where effects of turbulence modulation may have caused the model to overestimate turbulence levels.

  17. Coupling survey data with drift model results suggests that local spawning is important for Calanus finmarchicus production in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Kvile, Kristina Øie; Fiksen, Øyvind; Prokopchuk, Irina; Opdal, Anders Frugård

    2017-01-01

    The copepod Calanus finmarchicus is an important part of the diet for several large fish stocks feeding in the Atlantic waters of the Barents Sea. Determining the origin of the new-generation copepodites present on the Barents Sea shelf in spring can shed light on the importance of local versus imported production of C. finmarchicus biomass in this region. In this study, we couple large-scale spatiotemporal survey data (> 30 years in both the Norwegian Sea and Barents Sea areas) with drift trajectories from a hydrodynamic model to back-calculate and map the spatial distribution of C. finmarchicus from copepod to egg, allowing us to identify potential adult spawning areas. Assuming the adult stage emerges from overwintering in the Norwegian Sea, our results suggest that copepodites sampled at the Barents Sea entrance are a mix of locally spawned individuals and long-distance travellers advected northwards along the Norwegian shelf edge. However, copepodites sampled farther east in the Barents Sea (33°30′E) are most likely spawned on the Barents Sea shelf, potentially from females that have overwintered locally. Our results support the view that C. finmarchicus dynamics in the Barents Sea are not, at least in the short term, solely driven by advection from the Norwegian Sea, but that local production may be more important than commonly believed.

  18. Phenomenological crystal-field model of the magnetic and thermal properties of the Kondo-like system UCu2Si2

    NASA Astrophysics Data System (ADS)

    Troć, R.; Gajek, Z.; Pikul, A.; Misiorek, H.; Colineau, E.; Wastin, F.

    2013-07-01

    The transport properties described previously [Troć et al., Phys. Rev. B 85, 224434 (2012)], as well as the magnetic and thermal properties presented in this paper, observed for single-crystalline UCu2Si2, are discussed by assuming a dual (localized-itinerant) scenario. The electronic states of the localized 5f electrons in UCu2Si2 are constructed using the effective Hamiltonian known for ionic systems, allowing us to treat the Coulomb, spin-orbital, and crystal-field interactions on equal footing. The space of parameters has been restricted in the initial steps with the aid of the angular overlap model approximation. The final crystal-field parameters, obtained from the refined steps of the calculations, are relatively large (in absolute values), which we attribute to the hybridization characteristic of metallic systems on the verge of localization. The proposed crystal-field model reproduces the magnetic and thermal properties of UCu2Si2 with satisfactory accuracy, in agreement also with the transport properties reported previously. The considerable crystal-field splitting of the ground multiplet, of 2760 K, is responsible for a large anisotropy in the magnetic behavior, observed in the whole temperature range explored.

  19. Effects of Behavioural Strategy on the Exploitative Competition Dynamics.

    PubMed

    Nguyen-Ngoc, Doanh; Nguyen-Phuong, Thuy

    2016-12-01

    We investigate a system of two species exploiting a common resource. We consider both abiotic (i.e. with a constant resource supply rate) and biotic (i.e. with resource reproduction and self-limitation) resources. We are interested in the asymmetric competition where a given consumer is the locally superior resource exploiter (LSE) and the other is the locally inferior resource exploiter (LIE). They also interact directly via interference competition, in the sense that LIE individuals can use two opposite strategies to compete with LSE individuals: we assume, in the first case, that the LIE uses an avoiding strategy, i.e. LIE individuals go to a non-competition patch to avoid competition with LSE individuals, and in the second case, the LIE uses an aggressive strategy, i.e. is very aggressive, so that LSE individuals have to go to a non-competition patch. We further assume that there is no resource in the non-competition patch, so that individuals have to come back to the competition patch for their maintenance, and that the migration process acts on a fast time scale in comparison with the demography and competition processes. The models show that being aggressive is efficient for the LIE's survival and can even provoke global extinction of the LSE, and this result does not depend on the nature of the resource.
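
    As a baseline sketch of asymmetric exploitative competition (without the behavioural and fast-migration structure of the model above), the code below integrates two consumers on a single abiotic resource with a constant supply rate; parameters are hypothetical, and the run simply illustrates exclusion of the inferior exploiter when it has no counter-strategy.

```python
# Euler integration of a simple consumer-resource system; all rates are illustrative.
dt, t_end = 0.01, 1000.0
S, d = 1.0, 0.1                  # resource supply rate and abiotic loss rate
a_lse, a_lie = 1.0, 0.6          # uptake rates: the LSE is the superior exploiter
e, m = 0.5, 0.05                 # conversion efficiency and consumer mortality

R, N_lse, N_lie = 1.0, 0.1, 0.1
for _ in range(int(t_end / dt)):
    dR = S - d * R - (a_lse * N_lse + a_lie * N_lie) * R
    dN1 = (e * a_lse * R - m) * N_lse
    dN2 = (e * a_lie * R - m) * N_lie
    R, N_lse, N_lie = R + dt * dR, N_lse + dt * dN1, N_lie + dt * dN2

print(f"resource level reached: {R:.3f}  (LSE break-even level m/(e*a) = {m / (e * a_lse):.3f})")
print(f"LSE density: {N_lse:.3f}, LIE density: {N_lie:.2e}  "
      "(the inferior exploiter is excluded without a counter-strategy)")
```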

  20. Transformation of body force localized near the surface of a half-space into equivalent surface stresses.

    PubMed

    Rouge, Clémence; Lhémery, Alain; Ségur, Damien

    2013-10-01

    An electromagnetic acoustic transducer (EMAT) or a laser used to generate elastic waves in a component is often described as a source of body force confined in a layer close to the surface. On the other hand, models for elastic wave radiation more efficiently handle sources described as distributions of surface stresses. Equivalent surface stresses can be obtained by integrating the body force with respect to depth. They are assumed to generate the same field as the one that would be generated by the body force. Such an integration scheme can be applied to the Lorentz force for a conventional EMAT configuration. When applied to the magnetostrictive force generated by an EMAT in a ferromagnetic material, the same scheme fails, predicting a null stress. Transforming a body force into equivalent surface stresses therefore requires taking into account higher-order terms of the force moments, the zeroth order being the simple force integration over the depth. In this paper, such a transformation is derived up to the second order, assuming that body forces are localized at depths shorter than the ultrasonic wavelength. Two formulations are obtained, each having some advantages depending on the application sought. They apply regardless of the nature of the force considered.
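
    The sketch below illustrates the moment idea in the abstract: the depth integral (zeroth moment) of a near-surface body force acts as an equivalent surface stress, and when that integral cancels, the first moment carries the leading information. The two depth profiles and the penetration depth are illustrative, not an EMAT force model.

```python
import numpy as np

z = np.linspace(0.0, 200e-6, 4001)       # depth grid: 0 to 200 micrometres
delta = 30e-6                            # hypothetical penetration depth, m

# Profile A: one-signed, exponentially decaying force density (N/m^3).
f_a = 1.0e9 * np.exp(-z / delta)
# Profile B: a profile whose depth integral (zeroth moment) nearly cancels.
f_b = 1.0e9 * np.exp(-z / delta) * (1.0 - z / delta)

for name, f in (("A (one-signed)", f_a), ("B (cancelling)", f_b)):
    m0 = np.trapz(f, z)                  # zeroth moment: equivalent surface stress, Pa
    m1 = np.trapz(z * f, z)              # first moment: dipole-like correction, N/m
    print(f"profile {name}: m0 = {m0:+.3e} Pa, m1 = {m1:+.3e} N/m")

# For profile B the zeroth moment nearly vanishes, so a surface-stress model built
# from m0 alone would wrongly predict almost no radiated field; the first moment
# must be kept, which is the role of the higher-order terms described above.
```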

  1. Variable Anisotropic Brain Electrical Conductivities in Epileptogenic Foci

    PubMed Central

    Mandelkern, M.; Bui, D.; Salamon, N.; Vinters, H. V.; Mathern, G. W.

    2010-01-01

    Source localization models assume brain electrical conductivities are isotropic at about 0.33 S/m. These assumptions have not been confirmed ex vivo in humans. This study determined bidirectional electrical conductivities from pediatric epilepsy surgery patients. Electrical conductivities perpendicular and parallel to the pial surface of neocortex and subcortical white matter (n = 15) were measured using the 4-electrode technique and compared with clinical variables. Mean (±SD) electrical conductivities were 0.10 ± 0.01 S/m and varied by 243% from patient to patient. Perpendicular and parallel conductivities differed by 45%, and the larger values were perpendicular to the pial surface in 47% of patients and parallel in 40%. A perpendicular principal axis was associated with normal tissue, while isotropy and parallel principal axes were linked with epileptogenic lesions by MRI. Electrical conductivities were decreased in patients with cortical dysplasia compared with non-dysplasia etiologies. The electrical conductivity values of freshly excised human brain tissues were approximately 30% of assumed values, varied by over 200% from patient to patient, and had erratic anisotropic and isotropic shapes if the MRI showed a lesion. Understanding brain electrical conductivity and ways to non-invasively measure it are probably necessary to enhance the ability to localize EEG sources in epilepsy surgery patients. PMID:20440549

  2. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν =1.57 ±0.02 , in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
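
    A minimal numerical version of the construction described above: build a sparse random matrix with independent Gaussian nonzero entries, form the dynamical matrix as its product with its transpose, and confirm that the squared eigenfrequencies are non-negative. The matrix size and sparsity are illustrative, and no finite-time scaling analysis is attempted here.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_nonzero = 500, 0.02                 # illustrative size and sparsity

# Sparse-in-structure random matrix with independent Gaussian nonzero entries.
mask = rng.random((n, n)) < p_nonzero
A = np.where(mask, rng.normal(size=(n, n)), 0.0)

M = A @ A.T                              # dynamical matrix: positive semi-definite
omega2 = np.linalg.eigvalsh(M)           # squared vibration frequencies

print(f"fraction of nonzero entries in A: {mask.mean():.3f}")
print(f"smallest eigenvalue: {omega2.min():.2e}  (non-negative up to round-off)")
print(f"largest eigenvalue:  {omega2.max():.2e}")
```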

  3. A Bayesian cluster analysis method for single-molecule localization microscopy data.

    PubMed

    Griffié, Juliette; Shannon, Michael; Bromley, Claire L; Boelen, Lies; Burn, Garth L; Williamson, David J; Heard, Nicholas A; Cope, Andrew P; Owen, Dylan M; Rubin-Delanchy, Patrick

    2016-12-01

    Cell function is regulated by the spatiotemporal organization of the signaling machinery, and a key facet of this is molecular clustering. Here, we present a protocol for the analysis of clustering in data generated by 2D single-molecule localization microscopy (SMLM)-for example, photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). Three features of such data can cause standard cluster analysis approaches to be ineffective: (i) the data take the form of a list of points rather than a pixel array; (ii) there is a non-negligible unclustered background density of points that must be accounted for; and (iii) each localization has an associated uncertainty in regard to its position. These issues are overcome using a Bayesian, model-based approach. Many possible cluster configurations are proposed and scored against a generative model, which assumes Gaussian clusters overlaid on a completely spatially random (CSR) background, before every point is scrambled by its localization precision. We present the process of generating simulated and experimental data that are suitable to our algorithm, the analysis itself, and the extraction and interpretation of key cluster descriptors such as the number of clusters, cluster radii and the number of localizations per cluster. Variations in these descriptors can be interpreted as arising from changes in the organization of the cellular nanoarchitecture. The protocol requires no specific programming ability, and the processing time for one data set, typically containing 30 regions of interest, is ∼18 h; user input takes ∼1 h.

  4. A Hidden Markov Model Approach for Simultaneously Estimating Local Ancestry and Admixture Time Using Next Generation Sequence Data in Samples of Arbitrary Ploidy

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

    Admixture—the mixing of genomes from divergent populations—is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which is not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy—i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than what previously has been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry inference for elucidating fundamental evolutionary processes. PMID:28045893
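
    The sketch below is a heavily simplified, haploid illustration of HMM-based local ancestry inference from read counts (forward-backward posterior decoding with binomial emissions); it is not the arbitrary-ploidy read-pileup model of the paper, and all rates, frequencies and depths are made up.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
L = 200                       # number of sites
switch = 0.02                 # per-site probability of an ancestry switch
freq = np.array([0.1, 0.8])   # alt-allele frequency in source populations 0 and 1
depth = 8                     # reads per site

# Simulate a true ancestry path and read data along one haploid chromosome.
truth = np.zeros(L, dtype=int)
for i in range(1, L):
    truth[i] = truth[i - 1] ^ (rng.random() < switch)
alt_reads = rng.binomial(depth, freq[truth])

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Forward-backward (posterior decoding) over the two hidden ancestries.
T = np.array([[1 - switch, switch], [switch, 1 - switch]])
emis = np.array([[binom_pmf(k, depth, p) for p in freq] for k in alt_reads])  # (L, 2)

fwd = np.zeros((L, 2)); bwd = np.zeros((L, 2))
fwd[0] = 0.5 * emis[0]; fwd[0] /= fwd[0].sum()
for i in range(1, L):
    fwd[i] = emis[i] * (fwd[i - 1] @ T); fwd[i] /= fwd[i].sum()
bwd[-1] = 1.0
for i in range(L - 2, -1, -1):
    bwd[i] = T @ (emis[i + 1] * bwd[i + 1]); bwd[i] /= bwd[i].sum()

post = fwd * bwd; post /= post.sum(axis=1, keepdims=True)
accuracy = np.mean((post[:, 1] > 0.5) == (truth == 1))
print(f"posterior decoding recovers {accuracy:.0%} of the true local ancestry")
```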

  5. State and Local Responsibility for Promoting Learning Environments of Equity and Equality: A Conceptual Analysis.

    ERIC Educational Resources Information Center

    Willie, Charles V.

    This paper presents a conceptual analysis of the different responsibilities that State and local levels should assume in promoting the purposes of education. It is their responsibility to provide learning environments that promote equity and equality. The problem for State and local authorities with reference to desegregated education is that of…

  6. Constraints on the temperature inhomogeneity in quasar accretion discs from the ultraviolet-optical spectral variability

    NASA Astrophysics Data System (ADS)

    Kokubo, Mitsuru

    2015-05-01

    The physical mechanisms of the quasar ultraviolet (UV)-optical variability are not well understood despite the long history of observations. Recently, Dexter & Agol presented a model of quasar UV-optical variability, which assumes large local temperature fluctuations in the quasar accretion discs. This inhomogeneous accretion disc model is claimed to describe not only the single-band variability amplitude, but also microlensing size constraints and the quasar composite spectral shape. In this work, we examine the validity of the inhomogeneous accretion disc model in the light of quasar UV-optical spectral variability by using five-band multi-epoch light curves for nearly 9 000 quasars in the Sloan Digital Sky Survey (SDSS) Stripe 82 region. By comparing the values of the intrinsic scatter σint of the two-band magnitude-magnitude plots for the SDSS quasar light curves and for the simulated light curves, we show that Dexter & Agol's inhomogeneous accretion disc model cannot explain the tight inter-band correlation often observed in the SDSS quasar light curves. This result leads us to conclude that the local temperature fluctuations in the accretion discs are not the main driver of the several years' UV-optical variability of quasars, and consequently, that the assumption that the quasar accretion discs have large localized temperature fluctuations is not preferred from the viewpoint of the UV-optical spectral variability.

  7. The causal relation between turbulent particle flux and density gradient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligen, B. Ph. van; Martín de Aguilera, A.; Hidalgo, C.

    A technique for detecting the causal relationship between fluctuating signals is used to investigate the relation between flux and gradient in fusion plasmas. Both a resistive pressure gradient driven turbulence model and experimental Langmuir probe data from the TJ-II stellarator are studied. It is found that the maximum influence occurs at a finite time lag (non-instantaneous response) and that quasi-periodicities exist. Furthermore, the model results show very long range radial influences, extending over most of the investigated regions, possibly related to coupling effects associated with plasma self-organization. These results clearly show that transport in fusion plasmas is not local and instantaneous, as is sometimes assumed.

  8. An Analysis of San Diego's Housing Market Using a Geographically Weighted Regression Approach

    NASA Astrophysics Data System (ADS)

    Grant, Christina P.

    San Diego County real estate transaction data were evaluated with a set of linear models calibrated by ordinary least squares and geographically weighted regression (GWR). The goal of the analysis was to determine whether the spatial effects assumed to be in the data are best studied globally with no spatial terms, globally with a fixed-effects submarket variable, or locally with GWR. A total of 18,050 single-family residential sales that closed in the six months between April 2014 and September 2014 were used in the analysis. Diagnostic statistics including AICc, R2, Global Moran's I, and visual inspection of diagnostic plots and maps indicate superior model performance by GWR as compared to both global regressions.
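
    At the heart of GWR is a separate weighted least-squares fit at each target location, with observations down-weighted by distance. The sketch below shows that single building block under a Gaussian kernel; the variable names, kernel choice, and bandwidth are illustrative, not taken from the study.

    ```python
    import numpy as np

    def gwr_coefficients(X, y, coords, target, bandwidth):
        """Locally weighted least-squares fit at one target location."""
        d = np.linalg.norm(coords - target, axis=1)      # distances to target
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian spatial kernel
        Xw = X * w[:, None]                              # row-weighted design matrix
        return np.linalg.solve(Xw.T @ X, Xw.T @ y)       # solves (X'WX) b = X'Wy

    # toy usage: price ~ intercept + living area, fitted around one location
    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 10, size=(200, 2))
    area = rng.uniform(80, 300, size=200)
    price = 1000 * area + 50 * coords[:, 0] * area + rng.normal(0, 5000, 200)
    X = np.column_stack([np.ones(200), area])
    print(gwr_coefficients(X, price, coords, target=np.array([5.0, 5.0]), bandwidth=2.0))
    ```

    Repeating the fit over a grid of target locations yields spatially varying coefficient surfaces, which is what distinguishes GWR from the global OLS and fixed-effects specifications compared in the study.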

  9. Local approximation of a metapopulation's equilibrium.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
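
    For reference, the Levins model invoked above is the classical patch-occupancy equation (a textbook result restated here, not taken from the paper): with colonization rate c and extinction rate e, the fraction of occupied patches p evolves as

    \[
    \frac{dp}{dt} = c\,p\,(1 - p) - e\,p,
    \qquad\text{with equilibrium}\qquad
    p^{*} = 1 - \frac{e}{c} \quad (c > e).
    \]

    The bounds described above compare the random-landscape occupation probabilities to this p* evaluated with the colonization and extinction rates appropriate to the local point z.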

  10. Nonlinear deformation and localized failure of bacterial streamers in creeping flows

    PubMed Central

    Biswas, Ishita; Ghosh, Ranajay; Sadrzadeh, Mohtada; Kumar, Aloke

    2016-01-01

    We investigate the failure of bacterial floc mediated streamers in a microfluidic device in a creeping flow regime using both experimental observations and analytical modeling. The quantification of streamer deformation and failure behavior is possible due to the use of 200 nm fluorescent polystyrene beads, which firmly embed in the extracellular polymeric substance (EPS) and act as tracers. The streamers, which form soon after the commencement of flow, begin to deviate from an apparently quiescent, fully formed state in spite of steady background flow and limited mass accretion, indicating significant mechanical nonlinearity. This nonlinear behavior shows distinct phases of deformation with mutually different characteristic times and comes to an end with a distinct localized failure of the streamer far from the walls. We investigate this deformation and failure behavior for two separate bacterial strains and develop a simplified but nonlinear analytical model describing the experimentally observed instability phenomena assuming a necking route to instability. Our model leads to a power law relation between the critical strain at failure and the fluid velocity scale, exhibiting excellent qualitative and quantitative agreement with the experimental rupture behavior. PMID:27558511

  11. Total Mean Curvature, Scalar Curvature, and a Variational Analog of Brown-York Mass

    NASA Astrophysics Data System (ADS)

    Mantoulidis, Christos; Miao, Pengzi

    2017-06-01

    We study the supremum of the total mean curvature on the boundary of compact, mean-convex 3-manifolds with nonnegative scalar curvature, and a prescribed boundary metric. We establish an additivity property for this supremum and exhibit rigidity for maximizers assuming the supremum is attained. When the boundary consists of 2-spheres, we demonstrate that the finiteness of the supremum follows from the previous work of Shi-Tam and Wang-Yau on the quasi-local mass problem in general relativity. In turn, we define a variational analog of Brown-York quasi-local mass without assuming that the boundary 2-sphere has positive Gauss curvature.
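
    For context, the Brown-York quasi-local mass referred to above is conventionally defined (standard definition from the general relativity literature, stated here for reference rather than quoted from the paper) as

    \[
    m_{BY}(\Sigma) \;=\; \frac{1}{8\pi} \int_{\Sigma} \left( H_{0} - H \right) d\sigma,
    \]

    where H is the mean curvature of the boundary 2-surface Σ in the given 3-manifold and H_0 is the mean curvature of its isometric embedding in Euclidean space, the latter requiring positive Gauss curvature of Σ; the variational analog proposed here is designed to drop that positivity requirement.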

  12. Qubit transport model for unitary black hole evaporation without firewalls*

    NASA Astrophysics Data System (ADS)

    Osuga, Kento; Page, Don N.

    2018-03-01

    We give an explicit toy qubit transport model for transferring information from the gravitational field of a black hole to the Hawking radiation by a continuous unitary transformation of the outgoing radiation and the black hole gravitational field. The model has no firewalls or other drama at the event horizon, and it avoids a counterargument that has been raised for subsystem transfer models as resolutions of the firewall paradox. Furthermore, it fits the set of six physical constraints that Giddings has proposed for models of black hole evaporation. It does utilize nonlocal qubits for the gravitational field but assumes that the radiation interacts locally with these nonlocal qubits, so in some sense the nonlocality is confined to the gravitational sector. Although the qubit model is too crude to be quantitatively correct for the detailed spectrum of Hawking radiation, it fits qualitatively with what is expected.

  13. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
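
    The marginal likelihood that such a model assigns to the repeated counts at one site can be written down in a few lines. The sketch below is a generic N-mixture likelihood (Poisson abundance, binomial detection) truncated at n_max; it is a textbook form given for illustration, not the exact hierarchical Bayesian specification evaluated in the paper.

    ```python
    import numpy as np
    from scipy.stats import poisson, binom

    def site_log_likelihood(counts, lam, p, n_max=200):
        """Marginal log-likelihood of replicate counts at one site.

        Latent abundance N ~ Poisson(lam); each replicate count y_j | N ~
        Binomial(N, p); the latent N is summed out up to n_max.
        """
        n = np.arange(n_max + 1)
        log_terms = poisson.logpmf(n, lam)           # prior over latent abundance
        for y in counts:
            log_terms += binom.logpmf(y, n, p)       # -inf wherever y > N
        return np.logaddexp.reduce(log_terms)

    # toy usage: three replicate visits at one site
    print(site_log_likelihood([3, 5, 4], lam=10.0, p=0.4))
    ```

    When replicates are only pseudo-replicates (e.g., spatial subunits rather than repeat visits), the closure assumption behind this likelihood is violated, which is exactly the robustness question the simulation study addresses.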

  14. Modeling Analysis for NASA GRC Vacuum Facility 5 Upgrade

    NASA Technical Reports Server (NTRS)

    Yim, J. T.; Herman, D. A.; Burt, J. M.

    2013-01-01

    A model of the VF5 test facility at NASA Glenn Research Center was developed using the direct simulation Monte Carlo Hypersonic Aerothermodynamics Particle (HAP) code. The model results were compared to several cold flow and thruster hot fire cases. The main uncertainty in the model is the determination of the effective sticking coefficient, which sets the pumping effectiveness of the cryopanels and oil diffusion pumps, including baffle transmission. An effective sticking coefficient of 0.25 was found to provide generally good agreement with the experimental chamber pressure data. The model, which assumes a cold diffuse inflow, also fared satisfactorily in predicting the pressure distribution during thruster operation. The model was used to assess other chamber configurations to improve the local effective pumping speed near the thruster. A new configuration of the existing cryopumps is found to show more than a 2x improvement over the current baseline configuration.
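
    The role of the sticking coefficient can be seen from the kinetic-theory estimate of a cryopanel's volumetric pumping speed. The numbers below are placeholders (a hypothetical panel area and an assumed xenon test gas at an assumed temperature); only the relation S = alpha * A * v_bar / 4 is standard kinetic theory.

    ```python
    import numpy as np

    k_B = 1.380649e-23               # Boltzmann constant, J/K
    m_xe = 131.293 * 1.66054e-27     # xenon atomic mass, kg (assumed propellant)
    T = 300.0                        # gas temperature, K (assumed)
    alpha = 0.25                     # effective sticking coefficient from the fit
    A = 5.0                          # cryopanel area, m^2 (hypothetical)

    v_bar = np.sqrt(8 * k_B * T / (np.pi * m_xe))   # mean thermal speed, m/s
    S = alpha * A * v_bar / 4.0                     # volumetric pumping speed, m^3/s
    print(f"mean speed ~{v_bar:.0f} m/s, pumping speed ~{S * 1000:.0f} L/s")
    ```

    Because the pumping speed scales linearly with alpha, the fitted value of 0.25 directly controls the chamber pressure predicted for a given propellant flow rate.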

  15. A Hierarchy of Heuristic-Based Models of Crowd Dynamics

    NASA Astrophysics Data System (ADS)

    Degond, P.; Appert-Rolland, C.; Moussaïd, M.; Pettré, J.; Theraulaz, G.

    2013-09-01

    We derive a hierarchy of kinetic and macroscopic models from a noisy variant of the heuristic behavioral Individual-Based Model of Ngai et al. (Disaster Med. Public Health Prep. 3:191-195, 2009), in which pedestrians are supposed to have constant speeds. This IBM supposes that pedestrians seek the best compromise between navigation towards their target and collision avoidance. We first propose a kinetic model for the probability distribution function of pedestrians. Then, we derive fluid models and propose three different closure relations. The first two closures assume that the velocity distribution function is either a Dirac delta or a von Mises-Fisher distribution, respectively. The third closure results from a hydrodynamic limit associated with a Local Thermodynamical Equilibrium. We develop an analogy between this equilibrium and Nash equilibria in a game-theoretic framework. In each case, we discuss the features of the models and their suitability for practical use.

  16. Angioedema attacks in patients with hereditary angioedema: Local manifestations of a systemic activation process.

    PubMed

    Hofman, Zonne L M; Relan, Anurag; Zeerleder, Sacha; Drouet, Christian; Zuraw, Bruce; Hack, C Erik

    2016-08-01

    Hereditary angioedema (HAE) caused by a deficiency of functional C1-inhibitor (C1INH) becomes clinically manifest as attacks of angioedema. C1INH is the main inhibitor of the contact system. Poor control of a local activation process of this system at the site of the attack is believed to lead to the formation of bradykinin (BK), which increases local vasopermeability and mediates angioedema on interaction with BK receptor 2 on the endothelium. However, several observations in patients with HAE are difficult to explain from a pathogenic model claiming a local activation process at the site of the angioedema attack. Therefore we postulate an alternative model for angioedema attacks in patients with HAE, which assumes a systemic, fluid-phase activation of the contact system to generate BK and its breakdown products. Interaction of these peptides with endothelial receptors that are locally expressed in the affected tissues rather than with receptors constitutively expressed by the endothelium throughout the whole body explains that such a systemic activation process results in local manifestations of an attack. In particular, BK receptor 1, which is induced on the endothelium by inflammatory stimuli, such as kinins and cytokines, meets the specifications of the involved receptor. The pathogenic model discussed here also provides an explanation for why angioedema can occur at multiple sites during an attack and why HAE attacks respond well to modest increases of circulating C1INH activity levels because inhibition of fluid-phase Factor XIIa and kallikrein requires lower C1INH levels than inhibition of activator-bound factors. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  17. The inverted pendulum model of bipedal standing cannot be stabilized through direct feedback of force and contractile element length and velocity at realistic series elastic element stiffness.

    PubMed

    van Soest, A J Knoek; Rozendaal, Leonard A

    2008-07-01

    Control of bipedal standing is typically analyzed in the context of a single-segment inverted pendulum model. The stiffness K_SE of the series elastic element that transmits the force generated by the contractile elements of the ankle plantarflexors to the skeletal system has been reported to be smaller in magnitude than the destabilizing gravitational stiffness K_g. In this study, we assess, in the case K_SE + K_g < 0, whether bipedal standing can be locally stable under direct feedback of contractile element length, contractile element velocity (both sensed by muscle spindles) and muscle force (sensed by Golgi tendon organs) to alpha-motoneuron activity. A theoretical analysis reveals that even though positive feedback of force may increase the stiffness of the muscle-tendon complex to values well over the destabilizing gravitational stiffness, dynamic instability makes it impossible to obtain locally stable standing under the conditions assumed.

  18. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers the parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide a more realistic estimation of model predictions.
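
    A stripped-down version of the parameter-uncertainty step can be sketched as follows. Everything here (function names, the pooling of residuals, the crude use of the mean parameter set as the "optimal" model) is a simplified stand-in for UNEEC-P, whose published form additionally clusters the inputs and trains a machine learning model on the local error quantiles.

    ```python
    import numpy as np

    def mc_prediction_bounds(model, theta_samples, x, y_obs, q=(0.05, 0.95)):
        """Monte Carlo propagation of parameter uncertainty into error quantiles.

        model(theta, x) is a user-supplied simulator returning a prediction
        series; theta_samples are parameter sets drawn from their (assumed)
        distribution. Residuals from all realizations are pooled and their
        empirical quantiles are wrapped around a reference prediction.
        """
        theta_samples = np.asarray(theta_samples, dtype=float)
        residuals = [y_obs - model(theta, x) for theta in theta_samples]
        lo, hi = np.quantile(np.concatenate(residuals), q)
        y_ref = model(theta_samples.mean(axis=0), x)   # crude 'optimal' model
        return y_ref + lo, y_ref + hi
    ```

    Because the residual pool reflects both model error and parameter spread, the resulting bounds are wider than those of the original residual-only UNEEC, consistent with the preliminary results reported above.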

  19. Estimation of the reproduction number of dengue fever from spatial epidemic data.

    PubMed

    Chowell, G; Diaz-Dueñas, P; Miller, J C; Alcazar-Velazco, A; Hyman, J M; Fenimore, P W; Castillo-Chavez, C

    2007-08-01

    Dengue, a vector-borne disease, thrives in tropical and subtropical regions worldwide. A retrospective analysis of the 2002 dengue epidemic in Colima, located on the Mexican central Pacific coast, is carried out. We estimate the reproduction number from spatial epidemic data at the level of municipalities using two different methods: (1) Using a standard dengue epidemic model and assuming pure exponential initial epidemic growth and (2) Fitting a more realistic epidemic model to the initial phase of the dengue epidemic curve. Using Method I, we estimate an overall mean reproduction number of 3.09 (95% CI: 2.34, 3.84) as well as local reproduction numbers whose values range from 1.24 (1.15, 1.33) to 4.22 (2.90, 5.54). Using Method II, the overall mean reproduction number is estimated to be 2.0 (1.75, 2.23) and local reproduction numbers ranging from 0.49 (0.0, 1.0) to 3.30 (1.63, 4.97). Method I systematically overestimates the reproduction number relative to the refined Method II, and hence it would overestimate the intensity of interventions required for containment. Moreover, optimal intervention with defined resources demands different levels of locally tailored mitigation. Local epidemic peaks occur between the 24th and 35th week of the year, and correlate positively with the final local epidemic sizes (rho = 0.92, P-value < 0.001). Moreover, final local epidemic sizes are found to be linearly related to the local population size (P-value < 0.001). This observation supports a roughly constant number of female mosquitoes per person across urban and rural regions.
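
    Method I's exponential-growth assumption reduces, in its first step, to fitting a growth rate to the early epidemic curve. The sketch below shows only that step with made-up weekly counts; converting the fitted rate r into a reproduction number requires an epidemic model (serial interval or compartmental structure), as done in the paper, and is not attempted here.

    ```python
    import numpy as np

    def initial_growth_rate(weeks, cases):
        """Least-squares slope of log(cases) vs time over the early epidemic
        phase, i.e., the r in cases(t) ~ C0 * exp(r * t)."""
        t = np.asarray(weeks, dtype=float)
        log_y = np.log(np.asarray(cases, dtype=float))
        r, _log_c0 = np.polyfit(t, log_y, 1)
        return r

    # toy usage with hypothetical weekly case counts
    print(initial_growth_rate([0, 1, 2, 3, 4], [3, 5, 9, 16, 28]))
    ```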

  20. Intelligent simulation of aquatic environment economic policy coupled ABM and SD models.

    PubMed

    Wang, Huihui; Zhang, Jiarui; Zeng, Weihua

    2018-03-15

    Rapid urbanization and population growth have resulted in serious water shortages and pollution of the aquatic environment, which are important reasons for the complex increase in environmental deterioration in the region. This study examines the environmental consequences and economic impacts of water resource shortages under variant economic policies; however, this requires complex models that jointly consider variant agents and sectors within a systems perspective. Thus, we propose a complex system model that couples multi-agent based models (ABM) and system dynamics (SD) models to simulate the impact of alternative economic policies on water use and pricing. Moreover, this model takes the constraint of the local water resources carrying capacity into consideration. Results show that to achieve the 13th Five Year Plan targets in Dianchi, water prices for local residents and industries should rise to 3.23 and 4.99 CNY/m³, respectively. The corresponding sewage treatment fees for residents and industries should rise to 1.50 and 2.25 CNY/m³, respectively, assuming comprehensive adjustment of industrial structure and policy. At the same time, the local government should exercise fine-scale economic policy combined with emission fees assessed for those exceeding a standard, and collect fines imposed as punishment for enterprises that exceed emission standards. When fines reach 500,000 CNY, the total number of enterprises that exceed emission standards in the basin can be controlled within 1%. Moreover, it is suggested that the volume of water diversion in Dianchi should be appropriately reduced to 3.06×10⁸ m³. The reduced expense of water diversion should provide funds for the construction of recycled water facilities. The local rate of recycled water use should then reach 33%, and a recycled water price of 1.4 CNY/m³ could be set to ensure the sustainable utilization of local water resources. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Instrumental record of debris flow initiation during natural rainfall: Implications for modeling slope stability

    USGS Publications Warehouse

    Montgomery, D.R.; Schmidt, K.M.; Dietrich, W.E.; McKean, J.

    2009-01-01

    The middle of a hillslope hollow in the Oregon Coast Range failed and mobilized as a debris flow during heavy rainfall in November 1996. Automated pressure transducers recorded high spatial variability of pore water pressure within the area that mobilized as a debris flow, which initiated where local upward flow from bedrock developed into the overlying colluvium. Postfailure observations of the bedrock surface exposed in the debris flow scar reveal a strong spatial correspondence between elevated piezometric response and water discharging from bedrock fractures. Measurements of apparent root cohesion on the basal (Cb) and lateral (Cl) scarp demonstrate substantial local variability, with areally weighted values of Cb = 0.1 and Cl = 4.6 kPa. Using measured soil properties and basal root strength, the widely used infinite slope model, employed assuming slope-parallel groundwater flow, provides a poor prediction of hydrologic conditions at failure. In contrast, a model including lateral root strength (but neglecting lateral frictional strength) gave a predicted critical value of relative soil saturation that fell within the range defined by the arithmetic and geometric mean values at the time of failure. The 3-D slope stability model CLARA-W, used with locally observed pore water pressure, predicted small areas with lower factors of safety within the overall slide mass at sites consistent with field observations of where the failure initiated. This highly variable and localized nature of small areas of high pore pressure that can trigger slope failure means, however, that substantial uncertainty appears inevitable for estimating hydrologic conditions within incipient debris flows under natural conditions. Copyright 2009 by the American Geophysical Union.
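
    For reference, the infinite slope model mentioned above is commonly written in the following factor-of-safety form (a textbook expression with a root cohesion term added, given as a sketch rather than the exact formulation used in the paper):

    \[
    FS \;=\; \frac{C_{r} + c' + \left(\gamma\, z \cos^{2}\beta - u\right)\tan\phi'}{\gamma\, z \sin\beta \cos\beta},
    \]

    where β is the slope angle, z the soil depth, γ the soil unit weight, u the pore water pressure at the failure surface, c' the soil cohesion, φ' the friction angle, and C_r the basal root cohesion. The pore pressure term u is where the slope-parallel flow assumption enters, and it is this term that the piezometer records show to be strongly non-uniform.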

  2. Initial mass functions from ultraviolet stellar photometry: A comparison of Lucke and Hodge OB associations near 30 Doradus with the nearby field

    NASA Technical Reports Server (NTRS)

    Hill, Jesse K.; Isensee, Joan E.; Cornett, Robert H.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Stecher, Theodore P.

    1994-01-01

    UV stellar photometry is presented for 1563 stars within a 40 minutes circular field in the Large Magellanic Cloud (LMC), excluding the 10 min x 10 min field centered on R136 investigated earlier by Hill et al. (1993). Magnitudes are computed from images obtained by the Ultraviolet Imaging Telescope (UIT) in bands centered at 1615 A and 2558 A. Stellar masses and extinctions are estimated for the stars in associations using the evolutionary models of Schaerer et al. (1993), assuming the age is 4 Myr and that the local LMC extinction follows the Fitzpatrick (1985) 30 Dor extinction curve. The estimated slope of the initial mass function (IMF) for massive stars (greater than 15 solar mass) within the Lucke and Hodge (LH) associations is Gamma = -1.08 +/- 0.2. Initial masses and extinctions for stars not within LH associations are estimated assuming that the stellar age is either 4 Myr or half the stellar lifetime, whichever is larger. The estimated slope of the IMF for massive stars not within LH associations is Gamma = -1.74 +/- 0.3 (assuming continuous star formation), compared with Gamma = -1.35, and Gamma = -1.7 +/- 0.5, obtained for the Galaxy by Salpeter (1955) and Scalo (1986), respectively, and Gamma = -1.6 obtained for massive stars in the Galaxy by Garmany, Conti, & Chiosi (1982). The shallower slope of the association IMF suggests that not only is the star formation rate higher in associations, but that the local conditions favor the formation of higher mass stars there. We make no corrections for binaries or incompleteness.

  3. Investigation on the electron flux to the wall in the VENUS ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuillier, T.; Angot, J.; Benitez, J. Y.

    The long-term operation of high charge state electron cyclotron resonance ion sources fed with high microwave power has caused damage to the plasma chamber wall in several laboratories. Porosity, or a small hole, can be progressively created in the chamber wall which can destroy the plasma chamber over a few year time scale. Here, a burnout of the VENUS plasma chamber is investigated in which the hole formation in relation to the local hot electron power density is studied. First, the results of a simple model assuming that hot electrons are fully magnetized and strictly following magnetic field lines are presented. The model qualitatively reproduces the experimental traces left by the plasma on the wall. However, it is too crude to reproduce the localized electron power density for creating a hole in the chamber wall. Second, the results of a Monte Carlo simulation, following a population of scattering hot electrons, indicate a localized high power deposited to the chamber wall consistent with the hole formation process. Finally, a hypervapotron cooling scheme is proposed to mitigate the hole formation in electron cyclotron resonance plasma chamber wall.

  6. Model for heat and mass transfer in freeze-drying of pellets.

    PubMed

    Trelea, Ioan Cristian; Passot, Stéphanie; Marin, Michèle; Fonseca, Fernanda

    2009-07-01

    Lyophilizing frozen pellets, and especially spray freeze-drying, have been receiving growing interest. To design efficient and safe freeze-drying cycles, local temperature and moisture content in the product bed have to be known, but both are difficult to measure in the industry. Mathematical modeling of heat and mass transfer helps to determine local freeze-drying conditions and predict effects of operation policy, and equipment and recipe changes on drying time and product quality. Representative pellets situated at different positions in the product slab were considered. One-dimensional transfer in the slab and radial transfer in the pellets were assumed. Coupled heat and vapor transfer equations between the temperature-controlled shelf, the product bulk, the sublimation front inside the pellets, and the chamber were established and solved numerically. The model was validated based on bulk temperature measurement performed at two different locations in the product slab and on partial vapor pressure measurement in the freeze-drying chamber. Fair agreement between measured and calculated values was found. In contrast, a previously developed model for compact product layer was found inadequate in describing freeze-drying of pellets. The developed model represents a good starting basis for studying freeze-drying of pellets. It has to be further improved and validated for a variety of product types and freeze-drying conditions (shelf temperature, total chamber pressure, pellet size, slab thickness, etc.). It could be used to develop freeze-drying cycles based on product quality criteria such as local moisture content and glass transition temperature.

  7. Survival and recovery rates of American woodcock banded in Michigan

    USGS Publications Warehouse

    Krementz, David G.; Hines, James E.; Luukkonen, David R.

    2003-01-01

    American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and 4 models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 +/- 0.1048 (SE) for locals, 0.2646 +/- 0.0423 (SE) for juveniles, and 0.4898 +/- 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 +/- 0.0053 (SE) for juveniles and 0.0313 +/- 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. Juvenile survival rates were low.

  8. A Description of Local Time Asymmetries in the Kronian Current Sheet

    NASA Astrophysics Data System (ADS)

    Nickerson, J. S.; Hansen, K. C.; Gombosi, T. I.

    2012-12-01

    Cassini observations imply that Saturn's magnetospheric current sheet is displaced northward above the rotational equator [C.S. Arridge et al., Warping of Saturn's magnetospheric and magnetotail current sheets, Journal of Geophysical Research, Vol. 113, August 2008]. Arridge et al. show that this hinging of the current sheet above the equator occurs over the noon, midnight, and dawn local time sectors. They present an azimuthally independent model to describe this paraboloid-like geometry. We have used our global MHD model, BATS-R-US/SWMF, to study Saturn's magnetospheric current sheet under various solar wind dynamic pressure and solar zenith angle conditions. We show that under reasonable conditions the current sheet does take on the basic shape of the Arridge model in the noon, midnight, and dawn sectors. However, the hinging distance parameter used in the Arridge model is not a constant and does in fact vary with Saturn local time. We recommend that the Arridge model be adjusted to account for this azimuthal dependence. Arridge et al. do not discuss the shape of the current sheet in the dusk sector, due to an absence of data, but do presume that the current sheet assumes the same geometry in this region. On the contrary, our model shows that this is not the case. On the dusk side the current sheet hinges (aggressively) southward and cannot be accounted for by the Arridge model. We will present results from our simulations showing the deviation from axisymmetry and the general behavior of the current sheet under different conditions.

  9. Air-flow distortion and turbulence statistics near an animal facility

    NASA Astrophysics Data System (ADS)

    Prueger, J. H.; Eichinger, W. E.; Hipps, L. E.; Hatfield, J. L.; Cooper, D. I.

    The emission and dispersion of particulates and gases from concentrated animal feeding operations (CAFO) at local to regional scales are a current issue in science and society. The transport of particulates, odors and toxic chemical species from the source into the local and eventually regional atmosphere is largely determined by turbulence. Any model that attempts to simulate the dispersion of particles must either specify or assume various statistical properties of the turbulence field. Statistical properties of turbulence are well documented for idealized boundary layers above uniform surfaces. However, an animal production facility is a complex surface with structures that act as bluff bodies that distort the turbulence intensity near the buildings. As a result, the initial release and subsequent dispersion of effluents in the region near a facility will be affected by the complex nature of the surface. Previous Lidar studies of plume dispersion over the facility used in this study indicated that plumes move in complex yet organized patterns that would not be explained by the properties of turbulence generally assumed in models. The objective of this study was to characterize the near-surface turbulence statistics in the flow field around an array of animal confinement buildings. Eddy covariance towers were erected in the upwind region, within the building array, and in the downwind region of the flow field. Substantial changes in turbulence intensity statistics and turbulent kinetic energy (TKE) were observed as the mean wind flow encountered the building structures. Spectral analysis demonstrated a unique distribution of the spectral energy in the vertical profile above the buildings.

  10. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
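
    The first question above comes down to a simple sampling identity: if each sampled tree has some probability of being scored as infested, the chance of detecting the infestation grows with the number of trees inspected. The sketch below is that generic identity only, not the authors' simulation model; the example numbers are hypothetical.

    ```python
    def detection_probability(p_tree, n_trees):
        """Probability that at least one of n independently sampled trees is
        detected as infested, given per-tree detection probability p_tree
        (itself the product of infestation prevalence and per-tree sensitivity)."""
        return 1.0 - (1.0 - p_tree) ** n_trees

    # toy usage: very low per-tree detection probability, 100 trees sampled
    print(detection_probability(0.005, 100))   # ~0.39
    ```

    Even with 100 trees inspected, a per-tree detection probability of 0.005 leaves a roughly 60% chance of missing the infestation entirely, which is the qualitative point made above about low-density populations.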

  11. Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents.

    PubMed

    Bürger, Raimund; Chowell, Gerardo; Gavilán, Elvis; Mulet, Pep; Villada, Luis M

    2018-02-01

    In this article we describe the transmission dynamics of hantavirus in rodents using a spatio-temporal susceptible-exposed-infective-recovered (SEIR) compartmental model that distinguishes between male and female subpopulations [L.J.S. Allen, R.K. McCormack and C.B. Jonsson, Bull. Math. Biol. 68 (2006), 511--524]. Both subpopulations are assumed to differ in their movement with respect to local variations in the densities of their own and the opposite gender group. Three alternative models for the movement of the male individuals are examined. In some cases the movement is not only directed by the gradient of a density (as in the standard diffusive case), but also by a non-local convolution of density values as proposed, in another context, in [R.M. Colombo and E. Rossi, Commun. Math. Sci., 13 (2015), 369--400]. An efficient numerical method for the resulting convection-diffusion-reaction system of partial differential equations is proposed. This method involves techniques of weighted essentially non-oscillatory (WENO) reconstructions in combination with implicit-explicit Runge-Kutta (IMEX-RK) methods for time stepping. The numerical results demonstrate significant differences in the spatio-temporal behavior predicted by the different models, which suggest future research directions.

  12. Epidemic outbreaks in growing scale-free networks with local structure

    NASA Astrophysics Data System (ADS)

    Ni, Shunjiang; Weng, Wenguo; Shen, Shifei; Fan, Weicheng

    2008-09-01

    The class of generative models has already attracted considerable interest from researchers in recent years and has much expanded the original ideas described in the BA model. Most of these models assume that only one node per time step joins the network. In this paper, we grow the network by adding n interconnected nodes as a local structure into the network at each time step, with each new node emanating m new edges linking the node to the preexisting network by preferential attachment. This successfully generates key features observed in social networks. These include a power-law degree distribution p(k) ∼ k^(-γ), with an exponent governed by μ = (n-1)/m, a tuning parameter defined as the modularity strength of the network, as well as nontrivial clustering, assortative mixing, and modular structure. Moreover, all these features depend in a similar way on the parameter μ. We then study susceptible-infected epidemics on this network with identical infectivity, and find that the initial epidemic behavior is governed by both the infection scheme and the network structure, especially the modularity strength. The modularity of the network makes the spreading velocity much lower than that of the BA model. On the other hand, increasing the modularity strength will accelerate the propagation velocity.

  13. Influence of viscous dissipation on a copper oxide nanofluid in an oblique channel: Implementation of the KKL model

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Adnan; Khan, Umar; Mohyud-Din, Syed Tauseef; Manzoor, Raheela

    2017-05-01

    This paper aims to study the flow of a nanofluid in the presence of viscous dissipation in an oblique channel (nonparallel plane walls). For the thermal conductivity of the nanofluid, the KKL model is utilized. Water is taken as the base fluid and is assumed to contain solid nanoparticles of copper oxide. The appropriate set of partial differential equations is transformed into a self-similar system with the help of feasible similarity transformations. The solution of the model is obtained analytically and, to ensure the validity of the analytical solution, a numerical one is also calculated. The homotopy analysis method (HAM) and the Runge-Kutta numerical method (coupled with shooting techniques) have been employed for this purpose. The influence of the different flow parameters in the model on velocity, thermal field, skin friction coefficient and local rate of heat transfer has been discussed with the help of graphs. Furthermore, a graphical comparison between the local rate of heat transfer in regular fluids and nanofluids has been made, which shows that in the case of nanofluids, heat transfer is more rapid than in regular fluids.

  14. Diffusion Restrictions Surrounding Mitochondria: A Mathematical Model of Heart Muscle Fibers

    PubMed Central

    Ramay, Hena R.; Vendelin, Marko

    2009-01-01

    Several experiments on permeabilized heart muscle fibers suggest the existence of diffusion restrictions grouping mitochondria and surrounding ATPases. The specific causes of these restrictions are not known, but intracellular structures are speculated to act as diffusion barriers. In this work, we assume that diffusion restrictions are induced by sarcoplasmic reticulum (SR), cytoskeleton proteins localized near SR, and crowding of cytosolic proteins. The aim of this work was to test whether such localization of diffusion restrictions would be consistent with the available experimental data and evaluate the extent of the restrictions. For that, a three-dimensional finite-element model was composed with the geometry based on mitochondrial and SR structural organization. Diffusion restrictions induced by SR and cytoskeleton proteins were varied with other model parameters to fit the set of experimental data obtained on permeabilized rat heart muscle fibers. There are many sets of model parameters that were able to reproduce all experiments considered in this work. However, in all the sets, <5–6% of the surface formed by SR and associated cytoskeleton proteins is permeable to metabolites. Such a low level of permeability indicates that the proteins should play a dominant part in formation of the diffusion restrictions. PMID:19619458

  15. The sometimes competing retrieval and Van Hamme & Wasserman models predict the selective role of within-compound associations in retrospective revaluation.

    PubMed

    Witnauer, James; Rhodes, L Jack; Kysor, Sarah; Narasiwodeyar, Sanjay

    2017-11-21

    The correlation between blocking and within-compound memory is stronger when compound training occurs before elemental training (i.e., backward blocking) than when the phases are reversed (i.e., forward blocking; Melchers et al., 2004, 2006). This trial order effect is often interpreted as problematic for performance-focused models that assume a critical role for within-compound associations in both retrospective revaluation and traditional cue competition. The present manuscript revisits this issue using a computational modeling approach. The fit of sometimes competing retrieval (SOCR; Stout & Miller, 2007) was compared to the fit of an acquisition-focused model of retrospective revaluation and cue competition. These simulations reveal that SOCR explains this trial order effect in some situations based on its use of local error reduction. Published by Elsevier B.V.

  16. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
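
    Treating each local area as purely translated means the per-area misalignment can be estimated with a translation-only matcher. The sketch below uses an FFT cross-correlation peak for that purpose; it is a generic translation estimator in the spirit of the method, not the paper's exact boundary-map matching procedure.

    ```python
    import numpy as np

    def estimate_shift(moving, reference):
        """Estimate the (row, col) translation of `moving` relative to
        `reference` from the peak of their circular cross-correlation,
        computed via FFT. Both inputs are same-sized 2D binary maps."""
        corr = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap indices above N/2 back to negative shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # toy usage: a binary boundary-like map shifted by (3, -2) rows/cols
    a = np.zeros((64, 64)); a[20:40, 20:40] = 1.0
    b = np.roll(a, (3, -2), axis=(0, 1))
    print(estimate_shift(b, a))   # -> (3, -2)
    ```

    Solving for the rotation and translation of the larger region from several such local translation estimates is then a small least-squares problem, and the estimate/register cycle can be iterated as described above.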

  17. Housing land transaction data and structural econometric estimation of preference parameters for urban economic simulation models

    PubMed Central

    Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles

    2015-01-01

    This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade off accessibility and local green space amenities. PMID:26958606

  18. The dynamics and control of large flexible space structures, 3. Part A: Shape and orientation control of a platform in orbit using point actuators

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Reddy, A. S. S. R.; Krishna, R.; James, P. K.

    1980-01-01

    The dynamics, attitude, and shape control of a large thin flexible square platform in orbit are studied. Attitude and shape control are assumed to result from actuators placed perpendicular to the main surface and one edge, and their effect on the rigid body and elastic modes is modelled to first order. The equations of motion are linearized about three different nominal orientations: (1) the platform following the local vertical with its major surface perpendicular to the orbital plane; (2) the platform following the local horizontal with its major surface normal to the local vertical; and (3) the platform following the local vertical with its major surface perpendicular to the orbit normal. The stability of the uncontrolled system is investigated analytically. Once controllability is established for a set of actuator locations, control law development is based on decoupling, pole placement, and linear optimal control theory. Frequencies and elastic modal shape functions are obtained using a finite element computer algorithm and two different approximate analytical methods, and the results of the three methods are compared.

  19. On the use of faults and background seismicity in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Lorito, Stefano; Basili, Roberto; Tonini, Roberto; Tiberti, Mara Monica; Romano, Fabrizio; Perfetti, Paolo; Volpe, Manuela

    2017-04-01

    Most SPTHA studies and applications rely on several working assumptions: i) the (mostly offshore) tsunamigenic faults are sufficiently well known; ii) subduction zone earthquakes dominate the hazard; and iii) their location and geometry are sufficiently well constrained. Hence, a probabilistic model is constructed as regards the magnitude-frequency distribution and sometimes the slip distribution of earthquakes occurring on assumed known faults. Then, tsunami scenarios are usually constructed for all earthquake locations, sizes, and slip distributions included in the probabilistic model, through deterministic numerical modelling of tsunami generation, propagation and impact on realistic bathymetries. Here, we adopt a different approach (Selva et al., GJI, 2016) that relaxes some of the above assumptions, considering that i) non-subduction earthquakes may also contribute significantly to SPTHA, depending on the local tectonic context; ii) not all the offshore faults are known or sufficiently well constrained; and iii) the faulting mechanism of future earthquakes cannot be considered strictly predictable. This approach uses as much as possible information from known faults which, depending on the amount of available information and on the local tectonic complexity, among other things, are modelled either as Predominant Seismicity (PS) or as Background Seismicity (BS). PS is used when it is possible to assume a sufficiently known geometry and mechanism (e.g. for the main subduction zones). Conversely, within the BS approach, information on faults is merged with that on past seismicity, dominant stress regime, and tectonic characterisation to determine a probability density function for the faulting mechanism. To illustrate the methodology and its impact on the hazard estimates, we present an application in the NEAM region (Northeast Atlantic, Mediterranean and connected seas), initially designed during the ASTARTE project and now applied for the regional-scale SPTHA in the TSUMAPS-NEAM project funded by DG-ECHO.

  20. Numerical analysis of the effect of turbulence transition on the hemodynamic parameters in human coronary arteries

    PubMed Central

    Gawandalkar, Udhav Ulhas; Kini, Girish; Buradi, Abdulrajak; Araki, Tadashi; Ikeda, Nobutaka; Nicolaides, Andrew; Laird, John R.; Saba, Luca; Suri, Jasjit S.

    2016-01-01

    Background Local hemodynamics plays an important role in atherogenesis and the progression of coronary atherosclerosis disease (CAD). The primary biological effect due to blood turbulence is the change in wall shear stress (WSS) on the endothelial cell membrane, while the local oscillatory nature of the blood flow affects the physiological changes in the coronary artery. In coronary arteries, the blood flow Reynolds number ranges from a few tens to several hundreds, and hence the flow is generally assumed to be laminar when calculating WSS. However, pulsatile blood flow through coronary arteries under stenotic conditions can result in transition from laminar to turbulent flow. Methods In the present work, the onset of turbulent transition during pulsatile flow through coronary arteries for varying degrees of stenosis (i.e., 0%, 30%, 50% and 70%) is quantitatively analyzed by calculating the turbulent parameters distal to the stenosis. Also, the effect of turbulence transition on hemodynamic parameters such as WSS and oscillatory shear index (OSI) for varying degrees of stenosis is quantified. The validated transitional shear stress transport (SST) k-ω model used in the present investigation is the best-suited Reynolds-averaged Navier-Stokes turbulence model to capture the turbulent transition. The arterial wall is assumed to be rigid, and the dynamic curvature effect due to myocardial contraction on the blood flow has been neglected. Results Our observations show that for stenosis of 50% and above, the WSSavg, WSSmax and OSI calculated using the turbulence model deviate from the laminar values by more than 10%, and the flow disturbances seem to increase significantly only after 70% stenosis. Our model shows reliability and is completely validated. Conclusions Blood flow through stenosed coronary arteries seems to be turbulent in nature for area stenosis above 70%, and the transition to turbulent flow begins from 50% stenosis. PMID:27280084
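
    For reference, the oscillatory shear index quantified above is conventionally defined over one cardiac cycle of period T as (standard definition from the hemodynamics literature, stated here for reference rather than quoted from the paper):

    \[
    \mathrm{OSI} \;=\; \frac{1}{2}\left(1 - \frac{\left|\int_{0}^{T} \vec{\tau}_{w}\, dt\right|}{\int_{0}^{T} \left|\vec{\tau}_{w}\right| dt}\right),
    \]

    where τ_w is the instantaneous wall shear stress vector; OSI ranges from 0 for unidirectional WSS to 0.5 for fully oscillatory WSS, which is why it is reported alongside WSSavg and WSSmax when comparing the laminar and transitional solutions.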

  1. [On the relation between encounter rate and population density: Are classical models of population dynamics justified?].

    PubMed

    Nedorezov, L V

    2015-01-01

    A stochastic model of migrations on a lattice with discrete time is considered. It is assumed that space is homogeneous with respect to its properties and that during one time step every individual (independently of local population numbers) can migrate to the nearest nodes of the lattice with equal probabilities. It is also assumed that population size remains constant during a certain time interval of the computer experiments. The following variants of estimating the encounter rate between individuals are considered: at fixed time moments every individual in every node of the lattice interacts with all other individuals in the node; or individuals can stay in nodes independently, or can be involved in groups of two, three or four individuals. For each variant of interactions between individuals, the average value (with respect to space and time) is computed for various values of population size. The samples obtained were compared with the respective functions of classic models of isolated population dynamics: the Verhulst model, Gompertz model, Svirezhev model, and theta-logistic model. Parameters of the functions were calculated with the least squares method. Analyses of deviations were performed using the Kolmogorov-Smirnov test, Lilliefors test, Shapiro-Wilk test, and other statistical tests. It is shown that, from the traditional point of view, there is no correspondence between the encounter rate and the functions describing the effects of self-regulatory mechanisms on population dynamics. The best fitting of the samples was obtained with the Verhulst and theta-logistic models when using the dataset resulting from the situation in which every individual in the node interacts with all other individuals.

  2. Modeling Kelvin Wave Cascades in Superfluid Helium

    NASA Astrophysics Data System (ADS)

    Boffetta, G.; Celani, A.; Dezzani, D.; Laurie, J.; Nazarenko, S.

    2009-09-01

    We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA), and it is shown to possess the same scalings and the essential behaviour as the full Biot-Savart model, being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via the direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), takes a further drastic simplification by assuming locality of interactions in k-space via a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study, and they form a useful tool by providing simple analytical solutions in the cases when extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra, and the closeness of the numerics for the higher-order DAM to the analytical predictions for the lower-order DAM.

  3. Do mesoscale faults in a young fold belt indicate regional or local stress?

    NASA Astrophysics Data System (ADS)

    Kokado, Akihiro; Yamaji, Atsushi; Sato, Katsushi

    2017-04-01

    The result of paleostress analyses of mesoscale faults is usually thought of as evidence of a regional stress. On the other hand, the recent advancement of trishear modeling has enabled us to predict the deformation field around fault-propagation folds without the difficulty of assuming paleo-mechanical properties of rocks and sediments. We combined the analysis of observed mesoscale faults and trishear modeling to understand the significance of regional and local stresses for the formation of mesoscale faults. To this end, we conducted 2D trishear inverse modeling with a curved thrust fault to predict the subsurface structure and strain field of an anticline, which has a more or less horizontal axis and shows a map-scale plane strain perpendicular to the axis, in the active fold belt of the Niigata region, central Japan. The anticline is thought to have been formed by fault-propagation folding under WNW-ESE regional compression. Based on the attitudes of strata and the positions of key tephra beds in Lower Pleistocene soft sediments cropping out at the surface, we obtained (1) a fault-propagation fold with the fault tip at a depth of ca. 4 km as the optimal subsurface structure, and (2) the temporal variation of the deformation field during the folding. We assumed that mesoscale faults were activated along the direction of maximum shear strain on the faults to test whether the fault-slip data collected at the surface were consistent with the deformation in some stage(s) of folding. The Wallace-Bott hypothesis was used to estimate the consistency of the faults with the regional stress. As a result, the folding and the regional stress explained 27 and 33 of the 45 observed faults, respectively, with 11 faults being consistent with both. Both the folding and the regional stress were inconsistent with the remaining 17 faults, which could be explained by transfer faulting and/or the gravitational spreading of the growing anticline. The lesson we learnt from this work is that we should pay attention not only to regional but also to local stresses to interpret the results of paleostress analysis in the shallow levels of young orogenic belts.
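
    The Wallace-Bott test mentioned above can be sketched as a comparison between an observed slip direction and the direction of maximum resolved shear stress on the fault plane. The sketch below assumes a hypothetical regional stress tensor and fault geometry; the function name and all numbers are illustrative.

```python
import numpy as np

def wallace_bott_misfit(sigma, normal, slip):
    """Angle (degrees) between an observed slip direction and the direction
    of maximum resolved shear stress on a fault plane (Wallace-Bott).
    sigma : 3x3 symmetric stress tensor; normal, slip : 3-vectors."""
    n = np.asarray(normal, float); n /= np.linalg.norm(n)
    s = np.asarray(slip, float);   s /= np.linalg.norm(s)
    traction = np.asarray(sigma, float) @ n
    shear = traction - (traction @ n) * n      # shear traction on the plane
    shear_dir = shear / np.linalg.norm(shear)
    cos_angle = np.clip(shear_dir @ s, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# hypothetical principal stresses (MPa, compression negative) and fault geometry;
# the slip vector is chosen to lie in the fault plane (slip . normal = 0)
sigma = np.diag([-50.0, -20.0, -30.0])
print(wallace_bott_misfit(sigma, normal=[0.0, 0.707, 0.707], slip=[0.6, 0.57, -0.57]))
```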

  4. The Unbiased Velocity Distribution of Neutron Stars from a Simulation of Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Arzoumanian, Z.; Cordes, J. M.; Chernoff, D.

    1997-12-01

    We present the results of a new simulation of the Galactic population of neutron stars: their birthrate, velocity distribution, luminosities, beaming characteristics, and spin evolution. The many simulations in the literature differ from one another primarily in their treatment of the selection effects associated with pulsar detection. Our method, the most realistic to date, goes beyond earlier efforts by retaining the full kinematic, rotational, luminosity, and beaming evolution of each simulated star: "Monte Carlo" neutron stars are created according to assumed distributions (at birth) in spatial coordinates, kick velocity, and magnitudes and orientations of the spin and magnetic field vectors. The neutron stars spin down following an assumed braking law, and their Galactic trajectories are traced to the present epoch. For each star, a pulse waveform is generated using a phenomenological radio-beam model, obviating the need for an arbitrary beaming fraction. Luminosity is assumed to be a parameterized function of period and spin-down rate, with no intrinsic spread, and a parameterized death-line is applied. Interstellar dispersion and scattering consistent with survey instrumentation and the Galactic locales of the neutron stars are applied to the pulse waveforms, which are Fourier analyzed and tested for detection following the techniques of real-world surveys. A unique algorithm is used to compare the populations of simulated and known non-millisecond pulsars in the multi-dimensional space of observables (any subset of Galactic coordinates, dispersion measure, period, spin-down rate, flux, and proper motion). Model parameters are varied, and statistically independent neutron star populations are created until a maximum likelihood model is found. The highlight of this effort is an unbiased determination of the velocity distribution of neutron stars. We discuss the implications of our results for supernova physics, binary evolution, and the nature of gamma-ray transients.
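
    A heavily simplified sketch of the spin-down step of such a Monte Carlo is given below, assuming constant-field magnetic dipole braking (braking index n = 3) and illustrative birth distributions; the paper's actual braking law, distributions, and selection modeling are far more detailed.

```python
import numpy as np

def evolve_pulsars(n=100000, t_max_yr=1e7, seed=0):
    """Toy Monte-Carlo spin evolution: draw birth periods and dipole fields,
    pick random ages, and spin the stars down with a constant-field magnetic
    dipole braking law (braking index n = 3), for which approximately
        P(t)^2 = P0^2 + 2 * (B / 3.2e19 G)^2 * t    (P in s, t in s).
    Distributions and constants are illustrative, not the paper's best fit."""
    rng = np.random.default_rng(seed)
    P0 = rng.normal(0.03, 0.01, n).clip(0.001)          # birth period [s]
    logB = rng.normal(12.5, 0.3, n)                     # log10 dipole field [G]
    age = rng.uniform(0, t_max_yr, n) * 3.156e7         # age [s]
    PPdot = (10.0 ** logB / 3.2e19) ** 2                # P * Pdot [s]
    P = np.sqrt(P0 ** 2 + 2.0 * PPdot * age)
    Pdot = PPdot / P
    return P, Pdot

P, Pdot = evolve_pulsars()
print(f"median P = {np.median(P):.3f} s, median Pdot = {np.median(Pdot):.2e}")
```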

  5. Theoretical model of impact damage in structural ceramics

    NASA Technical Reports Server (NTRS)

    Liaw, B. M.; Kobayashi, A. S.; Emery, A. G.

    1984-01-01

    This paper presents a mechanistically consistent model of impact damage based on elastic failures due to tensile and shear overloading. An elastic axisymmetric finite element model is used to determine the dynamic stresses generated by a single particle impact. Local failures in a finite element are assumed to occur when the primary/secondary principal stresses or the maximum shear stress reach critical tensile or shear stresses, respectively. The succession of failed elements thus models macrocrack growth. Sliding motions of cracks, which closed during unloading, are resisted by friction and the unrecovered deformation represents the 'plastic deformation' reported in the literature. The predicted ring cracks on the contact surface, as well as the cone cracks, median cracks, radial cracks, lateral cracks, and damage-induced porous zones in the interior of hot-pressed silicon nitride plates, matched those observed experimentally. The finite element model also predicted the uplifting of the free surface surrounding the impact site.
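
    The element failure criterion described above can be sketched as a check of principal and maximum shear stresses against critical values; the numbers below are placeholders, not material data for hot-pressed silicon nitride.

```python
import numpy as np

def element_fails(stress, sigma_crit_tension, tau_crit_shear):
    """Flag elastic failure of a finite element from its (symmetric) stress
    tensor: tensile failure if the largest principal stress exceeds the
    critical tensile stress, shear failure if the maximum shear stress
    (half the difference of the extreme principal stresses) exceeds the
    critical shear stress."""
    principal = np.linalg.eigvalsh(np.asarray(stress, float))  # ascending order
    max_shear = 0.5 * (principal[-1] - principal[0])
    return principal[-1] >= sigma_crit_tension or max_shear >= tau_crit_shear

# hypothetical stress state (MPa) and critical values
stress = [[120.0, 30.0, 0.0],
          [30.0, -40.0, 0.0],
          [0.0, 0.0, 10.0]]
print(element_fails(stress, sigma_crit_tension=400.0, tau_crit_shear=250.0))
```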

  6. Image-optimized Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  7. From individual choice to group decision-making

    NASA Astrophysics Data System (ADS)

    Galam, Serge; Zucker, Jean-Daniel

    2000-12-01

    Some universal features are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for a possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead the full psycho-sociological implications of the model are drawn, and a few cases are then detailed to illustrate them. In addition, several numerical experiments based on our model are shown to give insight into the dynamics of the model and to suggest further research directions.
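
    A minimal numerical sketch in the spirit of this magnet-like model is given below: mean-field pair coupling, an external field, and Gaussian random local biases, relaxed by Metropolis moves toward minimum conflict. The Hamiltonian form and all parameter values are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def group_decision(n=200, J=1.0, h_ext=0.2, bias_scale=0.5,
                   temperature=1.0, sweeps=500, seed=1):
    """Toy magnet-like model of group decision making: each agent holds an
    opinion s_i = +/-1, a pair exchange J couples every pair (mean-field),
    h_ext is an external pressure, and b_i are random individual biases
    (local random fields).  Opinions are updated by Metropolis moves that,
    on average, reduce the conflict function
      H = -(J/n) * sum_{i<j} s_i s_j - sum_i (h_ext + b_i) s_i.
    Returns the final mean opinion."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    b = rng.normal(0.0, bias_scale, size=n)
    for _ in range(sweeps * n):
        i = rng.integers(n)
        # energy change if agent i flips opinion
        local_field = (J / n) * (s.sum() - s[i]) + h_ext + b[i]
        dE = 2.0 * s[i] * local_field
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            s[i] = -s[i]
    return s.mean()

print(group_decision())
```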

  8. Image-Optimized Coronal Magnetic Field Models

    NASA Technical Reports Server (NTRS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with this type of constraint, even when magnetic features in the images are located outside of the image plane.

  9. A model with competition between the cell lines in leukemia under treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halanay, A.; Cândea, D.; Rădulescu, R.

    2014-12-10

    The evolution of leukemia is modeled with a delay differential equation model of four cell populations: two populations (healthy and leukemic) of stem-like cells, a broad category comprising proliferating stem and progenitor cells with self-renewal capacity, and two populations (healthy and leukemic) of mature cells. The model considers the competition between the healthy and leukemic cell populations and the three types of division that a stem-like cell can exhibit: self-renewal, asymmetric division, and differentiation. It is assumed that the treatment acts on the proliferation rate of the leukemic stem cells and on the apoptosis of stem and mature cells. The emphasis in this model is on establishing relevant parameters for the chronic and acute manifestations of leukemia. Stability of the equilibria is investigated, and sufficient conditions for local asymptotic stability are given using a Lyapunov-Krasovskii functional.
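
    Models of this type are delay differential equations. The sketch below shows the basic fixed-step integration pattern with a constant initial history, using a delayed logistic toy equation as a stand-in for the authors' four-population system; the equation and all parameters are illustrative only.

```python
import numpy as np

def delayed_logistic(r=0.8, K=1.0, tau=2.0, x0=0.1, t_end=60.0, dt=0.01):
    """Fixed-step Euler integration of a delay differential equation, using
    the delayed logistic (Hutchinson) equation
        x'(t) = r * x(t) * (1 - x(t - tau) / K)
    with a constant history x(t) = x0 for t <= 0."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        # look up the delayed state; fall back to the constant history
        x_delayed = x0 if i < n_delay else x[i - n_delay]
        x[i + 1] = x[i] + dt * r * x[i] * (1.0 - x_delayed / K)
    return np.linspace(0.0, t_end, n_steps + 1), x

t, x = delayed_logistic()
print(x[-1])
```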

  10. Negative differential velocity in ultradilute GaAs1-xNx alloys

    NASA Astrophysics Data System (ADS)

    Vogiatzis, N.; Rorison, J. M.

    2011-04-01

    We present theoretical results on steady state characteristics in bulk GaAs1-xNx alloys (x ≤ 0.2) using the single electron Monte-Carlo method. Two approaches have been used; the first assumes a GaAs band with a strong nitrogen scattering resonance and the second uses the band anti-crossing model, in which the localized N level interacts with the GaAs band, strongly perturbing the conduction band. In the first model we observe two negative differential velocity peaks, the lower one associated with nitrogen scattering and the higher one with polar optical phonon emission, accounting for the nonparabolicity effect. In the second model one negative differential velocity peak is observed, associated with polar optical phonon emission. Good agreement with the experimental low-field mobility is obtained from the first model. We also comment on the results from both models when the intervalley Γ → L transfer is accounted for.

  11. Two Simple Models for Fracking

    NASA Astrophysics Data System (ADS)

    Norris, Jaren Quinn

    Recent developments in fracking have enabled the recovery of oil and gas from tight shale reservoirs. These developments have also made fracking one of the most controversial environmental issues in the United States. Despite the growing controversy surrounding fracking, there is relatively little publicly available research. This dissertation introduces two simple models for fracking that were developed using techniques from non-linear and statistical physics. The first model assumes that the volume of induced fractures must be equal to the volume of injected fluid. For simplicity, these fractures are assumed to form a spherically symmetric damage region around the borehole. The predicted volumes of water necessary to create a damage region with a given radius are in good agreement with reported values. The second model is a modification of invasion percolation, which was previously introduced to model water flooding. The reservoir rock is represented by a regular lattice of local traps that contain oil and/or gas, separated by rock barriers. The barriers are assumed to be highly heterogeneous and are assigned random strengths. Fluid is injected from a central site and the weakest rock barrier breaks, allowing fluid to flow into the adjacent site. The process repeats, with the weakest barrier breaking and fluid flowing to an adjacent site at each time step. Extensive numerical simulations were carried out to obtain statistical properties of the growing fracture network. The network was found to be fractal, with fractal dimensions differing slightly from the accepted values for traditional percolation. Additionally, the network follows Horton-Strahler and Tokunaga branching statistics, which have been used to characterize river networks. As with other percolation models, the growth of the network occurs in bursts. These bursts follow a power-law size distribution similar to observed microseismic events. Reservoir stress anisotropy is incorporated into the model by assigning horizontal bonds weaker strengths on average than vertical bonds. Numerical simulations show that increasing bond strength anisotropy tends to reduce the fractal dimension of the growing fracture network and decrease the power-law slope of the burst size distribution. Although simple, these two models are useful for making informed decisions about fracking.
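
    The second model can be sketched as standard invasion percolation on a square lattice: bonds get random strengths, and the weakest bond on the cluster boundary breaks at each step, with an optional anisotropy factor weakening horizontal bonds. The lattice size, step count, and anisotropy parameterization below are assumptions for illustration.

```python
import heapq
import numpy as np

def invasion_percolation(L=101, steps=2000, anisotropy=1.0, seed=0):
    """Minimal invasion-percolation sketch of the fracture-network model:
    fluid starts at the central site, bonds to neighbouring sites carry
    random strengths, and at every step the weakest bond on the boundary of
    the invaded cluster breaks.  'anisotropy' < 1 makes horizontal bonds
    weaker on average than vertical ones.  Returns the set of invaded sites."""
    rng = np.random.default_rng(seed)
    start = (L // 2, L // 2)
    invaded = {start}
    frontier = []                      # heap of (bond strength, site)

    def push_neighbours(site):
        x, y = site
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (x + dx, y + dy)
            if 0 <= nb[0] < L and 0 <= nb[1] < L and nb not in invaded:
                strength = rng.random() * (anisotropy if dy == 0 else 1.0)
                heapq.heappush(frontier, (strength, nb))

    push_neighbours(start)
    for _ in range(steps):
        while frontier:
            _, site = heapq.heappop(frontier)
            if site not in invaded:
                break                  # weakest barrier to a fresh site
        else:
            break                      # frontier exhausted
        invaded.add(site)
        push_neighbours(site)
    return invaded

cluster = invasion_percolation()
print(len(cluster), "invaded sites")
```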

  12. Cross border health care provision: who gains, who loses.

    PubMed

    Levaggi, Rosella; Menoncin, Francesco

    2014-01-01

    The diffusion of the welfare state has produced a widespread involvement of the public sector in financing the production of private goods for paternalistic reasons. In this chapter we model the production of health care as a merit impure local public good whose consumption is subsidized and whose access is free, but not unlimited. The impure local public good aspect means that the production of health care spreads its benefits beyond the geographical boundaries of the Region where it is produced. Finally, we include the (optional) provision of an equalization grant that allows reduction of fiscal imbalance among Regions. In this framework we study the possible effects of cross border provision of health care. We assume that information is complete and symmetric and that there is no comparative advantage in local provision. In this context devolution is always suboptimal for the whole community: the lack of coordination means that the impure public good is under-provided. However, more efficient Regions may be better off because of the impure public good nature of health care.

  13. Regressed relations for forced convection heat transfer in a direct injection stratified charge rotary engine

    NASA Technical Reports Server (NTRS)

    Lee, Chi M.; Schock, Harold J.

    1988-01-01

    Currently, the heat transfer equation used in the rotary combustion engine (RCE) simulation model is taken from piston engine studies. These relations have been empirically developed by the experimental input coming from piston engines whose geometry differs considerably from that of the RCE. The objective of this work was to derive equations to estimate heat transfer coefficients in the combustion chamber of an RCE. This was accomplished by making detailed temperature and pressure measurements in a direct injection stratified charge (DISC) RCE under a range of conditions. For each specific measurement point, the local gas velocity was assumed equal to the local rotor tip speed. Local physical properties of the fluids were then calculated. Two types of correlation equations were derived and are described in this paper. The first correlation expresses the Nusselt number as a function of the Prandtl number, Reynolds number, and characteristic temperature ratio; the second correlation expresses the forced convection heat transfer coefficient as a function of fluid temperature, pressure and velocity.
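
    Correlations of the first type are commonly fitted by least squares in log space. The sketch below assumes a generic form Nu = a * Re^b * Pr^c * (T-ratio)^d with synthetic placeholder data; the exponents and data are not those of the DISC RCE measurements.

```python
import numpy as np

def fit_nusselt(Re, Pr, Tratio, Nu):
    """Fit log(Nu) = log(a) + b*log(Re) + c*log(Pr) + d*log(Tratio) by
    ordinary least squares and return (a, b, c, d)."""
    X = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Pr), np.log(Tratio)])
    coef, *_ = np.linalg.lstsq(X, np.log(Nu), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2], coef[3]

# hypothetical measurements with a known underlying correlation plus noise
rng = np.random.default_rng(0)
Re = rng.uniform(1e4, 1e5, 50)
Pr = rng.uniform(0.6, 0.8, 50)
Tr = rng.uniform(1.5, 3.0, 50)
Nu = 0.03 * Re**0.8 * Pr**0.4 * Tr**-0.2 * rng.lognormal(0.0, 0.05, 50)
print(fit_nusselt(Re, Pr, Tr, Nu))
```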

  14. Annular convective-radiative fins with a step change in thickness, and temperature-dependent thermal conductivity and heat transfer coefficient

    NASA Astrophysics Data System (ADS)

    Barforoush, M. S. M.; Saedodin, S.

    2018-01-01

    This article investigates the thermal performance of convective-radiative annular fins with a step reduction in local cross section (SRC). The thermal conductivity of the fin's material is assumed to be a linear function of temperature, and the heat transfer coefficient is assumed to be a power-law function of surface temperature. Moreover, nonzero convection and radiation sink temperatures are included in the mathematical model of the energy equation. The well-known differential transformation method (DTM) is used to derive the analytical solution. An exact analytical solution for a special case is derived to prove the validity of the results obtained from the DTM. The model provided here is a more realistic representation of SRC annular fins in actual engineering practice. The effects on the temperature distribution of both the thin and thick sections of the fin are investigated for many parameters, including the conduction-convection parameters, the conduction-radiation parameter, the sink temperature, and step-fin parameters such as the thickness parameter and the dimensionless parameter describing the position of the junction. It is believed that the obtained results will facilitate the design and performance evaluation of SRC annular fins.

  15. The magnetotelluric response over 2D media with resistivity frequency dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauriello, P.; Patella, D.; Siniscalchi, A.

    1996-09-01

    The authors investigate the magnetotelluric response of two-dimensional bodies, characterized by the presence of low-frequency dispersion phenomena of the electrical parameters. The Cole-Cole dispersion model is assumed to represent the frequency dependence of the impedivity complex function, defined as the inverse of Stoyer's admittivity complex parameter. To simulate real geological situations, they consider three structural models, representing a sedimentary basin, a geothermal system and a magma chamber, assumed to be partially or totally dispersive. From a detailed study of the frequency and space behaviors of the magnetotelluric parameters, taking known non-dispersive results as reference, they outline the main peculiarities of the local distortion effects, caused by the presence of dispersion in the target media. Finally, they discuss the interpretive errors which can be made by neglecting the dispersion phenomena. The apparent dispersion function, which was defined in a previous paper to describe similar effects in the one-dimensional case, is again used as a reliable indicator of location, shape and spatial extent of the dispersive bodies. The general result of this study is a marked improvement in the resolution power of the magnetotelluric method.
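
    The assumed dispersion law can be illustrated with the Pelton-style Cole-Cole form for a complex resistivity; the parameter values below (chargeability m, time constant tau, exponent c) are illustrative, and the function is only a stand-in for the impedivity treatment in the paper.

```python
import numpy as np

def cole_cole_resistivity(omega, rho0=100.0, m=0.3, tau=0.01, c=0.5):
    """Pelton-style Cole-Cole dispersion model of complex resistivity,
      rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c))).
    All parameter values are placeholders."""
    omega = np.asarray(omega, dtype=complex)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

freqs = np.logspace(-3, 3, 7)                  # Hz
rho = cole_cole_resistivity(2 * np.pi * freqs)
for f, r in zip(freqs, rho):
    print(f"{f:8.3f} Hz  |rho| = {abs(r):7.2f}  phase = {np.degrees(np.angle(r)):6.2f} deg")
```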

  16. Sci—Fri PM: Topics — 04: What if bystander effects influence cell kill within a target volume? Potential consequences of dose heterogeneity on TCP and EUD on intermediate risk prostate patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, M.J.; Kirkby, C.; Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta

    In vitro evidence has suggested that radiation-induced bystander effects may enhance non-local cell killing, which may influence radiotherapy treatment planning paradigms. This work applies a bystander effect model, which has been derived from published in vitro data, to calculate equivalent uniform dose (EUD) and tumour control probability (TCP) and compare them with predictions from standard linear quadratic (LQ) models that assume a response due only to local absorbed dose. Comparisons between the models were made under increasing dose heterogeneity scenarios. Dose throughout the CTV was modeled with normal distributions, where the degree of heterogeneity was dictated by changing the standard deviation (SD). The broad assumptions applied in the bystander effect model are intended to place an upper limit on the extent of the results in a clinical context. The bystander model suggests that a moderate degree of dose heterogeneity yields an outcome as good as or better than a uniform dose in terms of EUD and TCP. Intermediate risk prostate prescriptions of 78 Gy over 39 fractions had maximum EUD and TCP values at an SD of around 5 Gy. The plots only dropped below the uniform dose values for SD ∼ 10 Gy, almost 13% of the prescribed dose. The bystander model demonstrates the potential to deviate from the common local LQ model predictions as dose heterogeneity through a prostate CTV is varied. The results suggest the potential for allowing some degree of dose heterogeneity within a CTV, although further investigations of the assumptions of the bystander model are warranted.
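
    For context, the standard local-dose LQ reference calculation against which the bystander model is compared can be sketched as follows: voxel doses drawn from a normal distribution, LQ surviving fractions, Poisson TCP, and EUD defined as the uniform dose giving the same mean surviving fraction. The radiobiological parameters, clonogen number, and voxel count are assumptions for illustration, not the study's values.

```python
import numpy as np
from scipy.optimize import brentq

def lq_tcp_eud(mean_dose=78.0, sd=5.0, n_frac=39, alpha=0.15, beta=0.05,
               n_clonogens=1e6, n_voxels=10000, seed=0):
    """Local-dose-only LQ reference: voxel doses ~ Normal(mean_dose, sd),
    surviving fraction exp(-alpha*D - beta*d*D) with d the dose per fraction,
    Poisson TCP, and EUD as the uniform dose with the same mean survival."""
    rng = np.random.default_rng(seed)
    D = np.clip(rng.normal(mean_dose, sd, n_voxels), 0.0, None)
    d = D / n_frac                                   # dose per fraction
    mean_sf = np.exp(-alpha * D - beta * d * D).mean()
    tcp = np.exp(-n_clonogens * mean_sf)             # Poisson TCP
    def f(Du):                                       # EUD root-finding target
        return np.exp(-alpha * Du - beta * (Du / n_frac) * Du) - mean_sf
    eud = brentq(f, 0.0, 200.0)
    return tcp, eud

print(lq_tcp_eud(sd=0.0), lq_tcp_eud(sd=5.0), lq_tcp_eud(sd=10.0))
```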

  17. A collisional-radiative model of iron vapour in a thermal arc plasma

    NASA Astrophysics Data System (ADS)

    Baeva, M.; Uhrlandt, D.; Murphy, A. B.

    2017-06-01

    A collisional-radiative model for the ground state and fifty effective excited levels of atomic iron, and one level for singly-ionized iron, is set up for technological plasmas. Attention is focused on the population of excited states of atomic iron as a result of excitation, de-excitation, ionization, recombination and spontaneous emission. Effective rate coefficients for ionization and recombination, required in non-equilibrium plasma transport models, are also obtained. The collisional-radiative model is applied to a thermal arc plasma. Input parameters for the collisional-radiative model are provided by a magnetohydrodynamic simulation of a gas-metal welding arc, in which local thermodynamic equilibrium is assumed and the treatment of the transport of metal vapour is based on combined diffusion coefficients. The results clearly identify the conditions in the arc, under which the atomic state distribution satisfies the Boltzmann distribution, with an excitation temperature equal to the plasma temperature. These conditions are met in the central part of the arc, even though a local temperature minimum occurs here. This provides assurance that diagnostic methods based on local thermodynamic equilibrium, in particular those of optical emission spectroscopy, are reliable here. In contrast, deviations from the equilibrium atomic-state distribution are obtained in the near-electrode and arc fringe regions. As a consequence, the temperatures determined from the ratio of line intensities and number densities obtained from the emission coefficient in these regions are questionable. In this situation, the collisional-radiative model can be used as a diagnostic tool to assist in the interpretation of spectroscopic measurements.

  18. Does objective cluster analysis serve as a useful precursor to seasonal precipitation prediction at local scale? Application to western Ethiopia

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Moges, Semu; Block, Paul

    2018-01-01

    Prediction of seasonal precipitation can provide actionable information to guide management of various sectoral activities. For instance, it is often translated into hydrological forecasts for better water resources management. However, many studies assume homogeneity in precipitation across an entire study region, which may prove ineffective for operational and local-level decisions, particularly for locations with high spatial variability. This study proposes advancing local-level seasonal precipitation predictions by first conditioning on regional-level predictions, as defined through objective cluster analysis, for western Ethiopia. To our knowledge, this is the first study predicting seasonal precipitation at high resolution in this region, where lives and livelihoods are vulnerable to precipitation variability given the high reliance on rain-fed agriculture and limited water resources infrastructure. The combination of objective cluster analysis, spatially high-resolution prediction of seasonal precipitation, and a modeling structure spanning statistical and dynamical approaches makes clear advances in prediction skill and resolution, as compared with previous studies. The statistical model improves versus the non-clustered case or dynamical models for a number of specific clusters in northwestern Ethiopia, with clusters having regional average correlation and ranked probability skill score (RPSS) values of up to 0.5 and 33 %, respectively. The general skill (after bias correction) of the two best-performing dynamical models over the entire study region is superior to that of the statistical models, although the dynamical models issue predictions at a lower resolution and the raw predictions require bias correction to guarantee comparable skills.

  19. Convenient models of the atmosphere: optics and solar radiation

    NASA Astrophysics Data System (ADS)

    Alexander, Ginsburg; Victor, Frolkis; Irina, Melnikova; Sergey, Novikov; Dmitriy, Samulenkov; Maxim, Sapunov

    2017-11-01

    Simple optical models of the clear and cloudy atmosphere are proposed. Four versions of atmospheric aerosol content are considered: a complete lack of aerosols in the atmosphere, a low background concentration (500 cm-3), a high concentration (2000 cm-3), and a very high content of particles (5000 cm-3). In the cloud scenario, an external-mixture model is assumed. The values of optical thickness and single scattering albedo for 13 wavelengths are calculated in the short-wavelength range of 0.28-0.90 µm, with the molecular absorption bands simulated with a triangular function. A comparison of the proposed optical parameters with the results of various measurements and retrievals (lidar measurements, sampling, processed radiation measurements) is presented. For the cloudy atmosphere, single-layer and two-layer models are proposed. It is found that the cloud optical parameters obtained under the external-mixture assumption agree with values retrieved from airborne observations. Hemispherical fluxes of the reflected and transmitted solar radiation and the radiative divergence are calculated with the Delta-Eddington approach. The calculation is done for surface albedo values of 0, 0.5, and 0.9 and for the spectral albedo of a sandy surface. Four values of the solar zenith angle are taken: 0°, 30°, 40° and 60°. The obtained values are compared with radiative airborne observations. Estimates of the local instantaneous radiative forcing of atmospheric aerosols and clouds for the considered models are presented together with the heating rate.

  20. A scaling law of radial gas distribution in disk galaxies

    NASA Technical Reports Server (NTRS)

    Wang, Zhong

    1990-01-01

    Based on the idea that local conditions within a galactic disk largely determine the region's evolution time scale, researchers built a theoretical model to take into account molecular cloud and star formation in the disk evolution process. Despite some variations that may be caused by spiral arms and central bulge masses, they found that many late-type galaxies show consistency with the model in their radial atomic and molecular gas profiles. In particular, researchers propose that a scaling law be used to generalize the gas distribution characteristics. This scaling law may be useful in helping to understand the observed gas contents in many galaxies. Their model assumes an exponential mass distribution with disk radius. Most of the mass is in the atomic gas state at the beginning of the evolution. Molecular clouds form through a modified Schmidt law which takes into account gravitational instabilities in a possible three-phase structure of the diffuse interstellar medium (McKee and Ostriker, 1977; Balbus and Cowie, 1985), whereas star formation proceeds presumably unaffected by the environmental conditions outside of molecular clouds (Young, 1987). In such a model both atomic and molecular gas profiles in a typical galactic disk (as a result of the evolution) can be fitted simultaneously by adjusting the efficiency constants. Galaxies of different sizes and masses, on the other hand, can be compared with the model by simply scaling their characteristic length scales and shifting their radial ranges to match the assumed disk total mass profile sigma_tot(r).

  1. Solar wind/local interstellar medium interaction including charge exchange with neutral hydrogen

    NASA Technical Reports Server (NTRS)

    Pauls, H. Louis; Zank, Gary P.

    1995-01-01

    We present results from a hydrodynamic model of the interaction of the solar wind with the local interstellar medium (LISM), self-consistently taking into account the effects of charge exchange between the plasma component and the interstellar neutrals. The simulation is fully time dependent, and is carried out in two or three dimensions, depending on whether the helio-latitudinal dependence of the solar wind speed and number density (both giving rise to three dimensional effects) are included. As a first approximation it is assumed that the neutral component of the flow can be described by a single, isotropic fluid. Clearly, this is not the actual situation, since charge exchange with the supersonic solar wind plasma in the region of the nose results in a 'second' neutral fluid propagating in the opposite direction as that of the LISM neutrals.

  2. Local Earthquake Tomography in the Eifel Region, Middle Europe

    NASA Astrophysics Data System (ADS)

    Gaensicke, H.

    2001-12-01

    The aim of the Eifel Plume project is to verify the existence of an assumed mantle plume responsible for the Tertiary and Quaternary volcanism in the Eifel region of midwest Germany. During a large passive and semi-active seismological experiment (November 1997 - June 1998) about 160 mobile broadband and short period stations were operated in addition to about 100 permanent stations in the area of interest. The stations registered teleseismic and local events. Local events are used to obtain a three-dimensional tomographic model of seismic velocities in the crust. Since local earthquake tomography requires a large set of crustal travel paths, seismograms of local events recorded from July 1998 to June 2001 by permanent stations were added to the Eifel Plume data set. In addition to travel time corrections for the teleseismic tomography of the upper mantle, the new 3D velocity model should improve the precision of locations of local events. From a total of 832 local seismic events, 172 were identified as tectonic earthquakes. The other events were either quarry blasts or shallow mine-induced seismic events. The locations of 60 quarry blasts are known, and for 30 of them the firing time was measured during the field experiment. Since the origin time and location of these events are known with high precision, they are used to validate inverted velocity models. Station corrections from simultaneous 1D inversion of local earthquake traveltimes and hypocenters are in good agreement with travel time residuals calculated from teleseismic rays. A strong azimuthal dependence of travel time residuals resulting from a 1D velocity model was found for quarry blasts with hypocenters in the volcanic field in the center of the Eifel. Simultaneous 3D inversion calculations show strong heterogeneities in the upper crust and a negative anomaly in P-wave velocities in the lower crust. The latter could indicate either a low velocity zone close to the Moho or subsidence of the Moho. We present preliminary results obtained by simultaneous inversion of earthquake and velocity parameters constrained by known geological parameters and the controlled-source information from calibrated quarry blasts.

  3. Bell's theorem and the problem of decidability between the views of Einstein and Bohr.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.

  4. Ultra-Dense Optical Mass Storage

    DTIC Science & Technology

    1991-02-11

    Technologies develops equipment for telephone company central offices which allows users within a local area to have personal mailboxes for voicemail and FAX...externally applied stress field can alter the energy level of a molecule by slightly dis- torting the local environment surrounding the photochemical...permit us to raise the temperature even further during part of the channel creation process. It is probably reasonable to assume that local heating

  5. Tribal and Locality Dynamics in Afghanistan: A View from the National Military Academy of Afghanistan

    DTIC Science & Technology

    2009-01-01

    V is a finite collection of input variables, assumed to split as V = V_D ∪ V_C with V_D countable and V_C ⊆ ℝ^n; Init ⊆ X is a set of initial states; f : X × V → ℝ^n is a vector field, assumed to be globally Lipschitz in X_C.

  6. A mechanistic stress model of protein evolution accounts for site-specific evolutionary rates and their relationship with packing density and flexibility

    PubMed Central

    2014-01-01

    Background Protein sites evolve at different rates due to functional and biophysical constraints. It is usually considered that the main structural determinant of a site’s rate of evolution is its Relative Solvent Accessibility (RSA). However, a recent comparative study has shown that the main structural determinant is the site’s Local Packing Density (LPD). LPD is related with dynamical flexibility, which has also been shown to correlate with sequence variability. Our purpose is to investigate the mechanism that connects a site’s LPD with its rate of evolution. Results We consider two models: an empirical Flexibility Model and a mechanistic Stress Model. The Flexibility Model postulates a linear increase of site-specific rate of evolution with dynamical flexibility. The Stress Model, introduced here, models mutations as random perturbations of the protein’s potential energy landscape, for which we use simple Elastic Network Models (ENMs). To account for natural selection we assume a single active conformation and use basic statistical physics to derive a linear relationship between site-specific evolutionary rates and the local stress of the mutant’s active conformation. We compare both models on a large and diverse dataset of enzymes. In a protein-by-protein study we found that the Stress Model outperforms the Flexibility Model for most proteins. Pooling all proteins together we show that the Stress Model is strongly supported by the total weight of evidence. Moreover, it accounts for the observed nonlinear dependence of sequence variability on flexibility. Finally, when mutational stress is controlled for, there is very little remaining correlation between sequence variability and dynamical flexibility. Conclusions We developed a mechanistic Stress Model of evolution according to which the rate of evolution of a site is predicted to depend linearly on the local mutational stress of the active conformation. Such local stress is proportional to LPD, so that this model explains the relationship between LPD and evolutionary rate. Moreover, the model also accounts for the nonlinear dependence between evolutionary rate and dynamical flexibility. PMID:24716445

  7. Performance of the reverse Helmbold universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Kuang, Kee Seng; Lee, Yap Jia

    2017-04-01

    The universal portfolio is an important investment strategy in a stock market where no stochastic model is assumed for the stock prices. The zero-gradient set of the objective function estimating the next-day portfolio, which contains the reverse Kullback-Leibler order-alpha divergence, is considered. From the zero-gradient set, the explicit, reverse Helmbold universal portfolio is obtained. The performance of the explicit, reverse Helmbold universal portfolio is studied by running it on stock-price data sets from the local stock exchange. It is possible to increase the wealth of the investor by using such portfolios in investment.
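
    For orientation, the classical (non-reverse) Helmbold exponentiated-gradient portfolio update is sketched below; the reverse order-alpha variant derived in the paper leads to a different explicit update, so this is context only. The learning rate and price data are hypothetical.

```python
import numpy as np

def eg_universal_portfolio(price_relatives, eta=0.05):
    """Classical Helmbold (EG) universal-portfolio update:
      b_{t+1,i} proportional to b_{t,i} * exp(eta * x_{t,i} / (b_t . x_t)),
    starting from the uniform portfolio.  Returns the final wealth factor.
    price_relatives: T x m array of day-to-day price ratios."""
    X = np.asarray(price_relatives, float)
    _, m = X.shape
    b = np.full(m, 1.0 / m)
    wealth = 1.0
    for x in X:
        wealth *= b @ x                # wealth earned on day t
        b = b * np.exp(eta * x / (b @ x))
        b /= b.sum()                   # renormalize onto the simplex
    return wealth

# three hypothetical stocks over five trading days
X = [[1.01, 0.99, 1.02],
     [0.98, 1.03, 1.00],
     [1.02, 1.01, 0.97],
     [1.00, 0.99, 1.04],
     [1.01, 1.02, 1.00]]
print(eg_universal_portfolio(X))
```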

  8. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Y Churmakov, D.; Meglinski, I. V.; Piletsky, S. A.; Greenhalgh, D. A.

    2003-07-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which would arise due to the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an 'effective' depth.

  9. Empirical resistive-force theory for slender biological filaments in shear-thinning fluids

    NASA Astrophysics Data System (ADS)

    Riley, Emily E.; Lauga, Eric

    2017-06-01

    Many cells exploit the bending or rotation of flagellar filaments in order to self-propel in viscous fluids. While appropriate theoretical modeling is available to capture flagella locomotion in simple, Newtonian fluids, formidable computations are required to address theoretically their locomotion in complex, nonlinear fluids, e.g., mucus. Based on experimental measurements for the motion of rigid rods in non-Newtonian fluids and on the classical Carreau fluid model, we propose empirical extensions of the classical Newtonian resistive-force theory to model the waving of slender filaments in non-Newtonian fluids. By assuming the flow near the flagellum to be locally Newtonian, we propose a self-consistent way to estimate the typical shear rate in the fluid, which we then use to construct correction factors to the Newtonian local drag coefficients. The resulting non-Newtonian resistive-force theory, while empirical, is consistent with the Newtonian limit, and with the experiments. We then use our models to address waving locomotion in non-Newtonian fluids and show that the resulting swimming speeds are systematically lowered, a result which we are able to capture asymptotically and to interpret physically. An application of the models to recent experimental results on the locomotion of Caenorhabditis elegans in polymeric solutions shows reasonable agreement and thus captures the main physics of swimming in shear-thinning fluids.
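
    The ingredients of this empirical approach can be sketched as a Carreau viscosity evaluated at a typical shear rate, used to rescale Newtonian resistive-force drag coefficients (a Gray-Hancock-style form is used here); all geometric and rheological values are placeholders, and the paper's self-consistent shear-rate estimate is replaced by a user-supplied value.

```python
import numpy as np

def carreau_viscosity(shear_rate, eta0=1.0, eta_inf=0.001, lam=1.0, n=0.5):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf)*(1 + (lam*gdot)^2)^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

def corrected_drag_coefficients(shear_rate, wavelength=30e-6, radius=0.1e-6,
                                **carreau_kwargs):
    """Gray-Hancock-style resistive-force coefficients evaluated with the
    Carreau viscosity at the supplied typical shear rate, in the spirit of
    the empirical non-Newtonian RFT described in the abstract."""
    eta_local = carreau_viscosity(shear_rate, **carreau_kwargs)
    log_term = np.log(2.0 * wavelength / radius)
    c_t = 2.0 * np.pi * eta_local / (log_term - 0.5)   # tangential coefficient
    c_n = 4.0 * np.pi * eta_local / (log_term + 0.5)   # normal coefficient
    return c_t, c_n

for gdot in (0.1, 1.0, 10.0, 100.0):
    print(gdot, corrected_drag_coefficients(gdot))
```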

  10. Optimal control of predator-prey mathematical model with infection and harvesting on prey

    NASA Astrophysics Data System (ADS)

    Diva Amalia, R. U.; Fatmawati; Windarto; Khusnul Arif, Didik

    2018-03-01

    This paper presents a predator-prey mathematical model with infection and harvesting of the prey. The infection and harvesting occur only in the prey population, and it is assumed that the prey infection cannot spread to the predator population. We analysed the mathematical model of predator-prey dynamics with infection and harvesting of the prey. An optimal control, which represents prevention of the prey infection, is also applied in the model and denoted by U. The purpose of the control is to increase the susceptible prey. The analytical results show that the model has five equilibria, namely the extinction equilibrium (E0), the infection-free and predator-extinction equilibrium (E1), the infection-free equilibrium (E2), the predator-extinction equilibrium (E3), and the coexistence equilibrium (E4). The extinction equilibrium (E0) is not stable. The infection-free and predator-extinction equilibrium (E1), the infection-free equilibrium (E2), and the predator-extinction equilibrium (E3) are locally asymptotically stable under certain conditions. The coexistence equilibrium (E4) tends to be locally asymptotically stable. Afterwards, by using the Pontryagin Maximum Principle, we obtained the existence of the optimal control U. From numerical simulation, we conclude that the control can increase the population of susceptible prey and decrease the infected prey.
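
    A rough numerical sketch of this kind of eco-epidemiological system is given below. The right-hand side is a plausible susceptible-infected prey plus predator system with harvesting and a prevention control u; it is not the authors' exact model, and every parameter value is hypothetical.

```python
import numpy as np

def simulate(r=1.0, K=100.0, beta=0.01, a1=0.02, a2=0.03, e=0.4,
             mu=0.2, d=0.3, h=0.1, u=0.0, y0=(40.0, 10.0, 5.0),
             t_end=200.0, dt=0.01):
    """Forward-Euler simulation of a generic system with susceptible prey S,
    infected prey I, predator P, harvesting rate h on prey, and a prevention
    control u in [0, 1] that reduces the infection rate."""
    S, I, P = y0
    for _ in range(int(t_end / dt)):
        dS = r * S * (1 - (S + I) / K) - (1 - u) * beta * S * I - a1 * S * P - h * S
        dI = (1 - u) * beta * S * I - a2 * I * P - mu * I - h * I
        dP = e * (a1 * S + a2 * I) * P - d * P
        S, I, P = S + dt * dS, I + dt * dI, P + dt * dP
    return S, I, P

print("final (S, I, P) without control:", simulate(u=0.0))
print("final (S, I, P) with control:   ", simulate(u=0.8))
```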

  11. Effect of initial conditions on constant pressure mixing between two turbulent streams

    NASA Astrophysics Data System (ADS)

    Kangovi, S.

    1983-02-01

    It is pointed out that a study of the process of mixing between two dissimilar streams has varied applications in different fields. The applications include the design of an afterburner in a high-bypass-ratio aircraft engine and the disposal of effluents in a stream. The mixing process determines important quantities related to the energy transfer from the main stream to the secondary stream, the temperature and velocity profiles, the local kinematic and dissipative structure within the mixing region, and the growth of the mixing layer. Hill and Page (1968) have proposed the employment of an 'assumed epsilon' method in which the eddy viscosity model of Goertler (1942) is modified to account for the initial boundary layer. The present investigation is concerned with the application of the assumed epsilon technique to the study of the effect of initial conditions on the development of the turbulent mixing layer between two compressible, nonisoenergetic streams at constant pressure.

  12. The average 0.5-200 keV spectrum of local active galactic nuclei and a new determination of the 2-10 keV luminosity function at z ≈ 0

    NASA Astrophysics Data System (ADS)

    Ballantyne, D. R.

    2014-01-01

    The broad-band X-ray spectra of active galactic nuclei (AGNs) contain information about the nuclear environment from Schwarzschild radii scales (where the primary power law is generated in a corona) to distances of ~1 pc (where the distant reflector may be located). In addition, the average shape of the X-ray spectrum is an important input into X-ray background synthesis models. Here, local (z ≈ 0) AGN luminosity functions (LFs) in five energy bands are used as a low-resolution, luminosity-dependent X-ray spectrometer in order to constrain the average AGN X-ray spectrum between 0.5 and 200 keV. The 15-55 keV LF measured by Swift-BAT is assumed to be the best determination of the local LF, and then a spectral model is varied to determine the best fit to the 0.5-2 keV, 2-10 keV, 3-20 keV and 14-195 keV LFs. The spectral model consists of a Gaussian distribution of power laws with a mean photon-index <Γ> and cutoff energy Ecut, as well as contributions from distant and disc reflection. The reflection strength is parametrized by varying the Fe abundance relative to solar, AFe, and requiring a specific Fe Kα equivalent width (EW). In this way, the presence of the X-ray Baldwin effect can be tested. The spectral model that best fits the four LFs has <Γ> = 1.85 ± 0.15, Ecut = 270 (+170/-80) keV, and AFe = 0.3 (+0.3/-0.15). The sub-solar AFe is unlikely to be a true measure of the gas-phase metallicity, but indicates the presence of strong reflection given the assumed Fe Kα EW. Indeed, parametrizing the reflection strength with the R parameter gives R = 1.7 (+1.7/-0.85). There is moderate evidence for no X-ray Baldwin effect. Accretion disc reflection is included in the best-fitting model, but it is relatively weak (broad iron Kα EW < 100 eV) and does not significantly affect any of the conclusions. A critical result of our procedure is that the shape of the local 2-10 keV LF measured by HEAO-1 and MAXI is incompatible with the LFs measured in the hard X-rays by Swift-BAT and RXTE. We therefore present a new determination of the local 2-10 keV LF that is consistent with all other energy bands, as well as the de-evolved 2-10 keV LF estimated from the XMM-Newton Hard Bright Survey. This new LF should be used to revise current measurements of the evolving AGN LF in the 2-10 keV band. Finally, the suggested absence of the X-ray Baldwin effect points to a possible origin for the distant reflector in dusty gas not associated with the AGN obscuring medium. This may be the same material that produces the compact 12 μm source in local AGNs.

  13. A model for discriminating reinforcers in time and space.

    PubMed

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2016-06-01

    Both the response-reinforcer and stimulus-reinforcer relation are important in discrimination learning; differential responding requires a minimum of two discriminably-different stimuli and two discriminably-different associated contingencies of reinforcement. When elapsed time is a discriminative stimulus for the likely availability of a reinforcer, choice over time may be modeled by an extension of the Davison and Nevin (1999) model that assumes that local choice strictly matches the effective local reinforcer ratio. The effective local reinforcer ratio may differ from the obtained local reinforcer ratio for two reasons: Because the animal inaccurately estimates times associated with obtained reinforcers, and thus incorrectly discriminates the stimulus-reinforcer relation across time; and because of error in discriminating the response-reinforcer relation. In choice-based timing tasks, the two responses are usually highly discriminable, and so the larger contributor to differences between the effective and obtained reinforcer ratio is error in discriminating the stimulus-reinforcer relation. Such error may be modeled either by redistributing the numbers of reinforcers obtained at each time across surrounding times, or by redistributing the ratio of reinforcers obtained at each time in the same way. We assessed the extent to which these two approaches to modeling discrimination of the stimulus-reinforcer relation could account for choice in a range of temporal-discrimination procedures. The version of the model that redistributed numbers of reinforcers accounted for more variance in the data. Further, this version provides an explanation for shifts in the point of subjective equality that occur as a result of changes in the local reinforcer rate. The inclusion of a parameter reflecting error in discriminating the response-reinforcer relation enhanced the ability of each version of the model to describe data. The ability of this class of model to account for a range of data suggests that timing, like other conditional discriminations, is choice under the joint discriminative control of elapsed time and differential reinforcement. Understanding the role of differential reinforcement is therefore critical to understanding control by elapsed time. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Assessing measurement error in surveys using latent class analysis: application to self-reported illicit drug use in data from the Iranian Mental Health Survey.

    PubMed

    Khalagi, Kazem; Mansournia, Mohammad Ali; Rahimi-Movaghar, Afarin; Nourijelyani, Keramat; Amin-Esmaeili, Masoumeh; Hajebi, Ahmad; Sharif, Vandad; Radgoodarzi, Reza; Hefazi, Mitra; Motevalian, Abbas

    2016-01-01

    Latent class analysis (LCA) is a method of assessing and correcting measurement error in surveys. The local independence assumption in LCA requires that the indicators be independent of each other conditional on the latent variable. Violation of this assumption leads to unreliable results. We explored this issue by using LCA to estimate the prevalence of illicit drug use in the Iranian Mental Health Survey. The following three indicators were included in the LCA models: five or more instances of using any illicit drug in the past 12 months (indicator A), any use of any illicit drug in the past 12 months (indicator B), and the self-perceived need of treatment services or having received treatment for a substance use disorder in the past 12 months (indicator C). Gender was also used in all LCA models as a grouping variable. One LCA model using indicators A and B, as well as 10 different LCA models using indicators A, B, and C, were fitted to the data. The three models that had the best fit to the data included the following correlations between indicators: (AC and AB), (AC), and (AC, BC, and AB). The estimated prevalence of illicit drug use based on these three models was 28.9%, 6.2% and 42.2%, respectively. None of these models completely controlled for violation of the local independence assumption. In order to perform unbiased estimations using the LCA approach, the factors violating the local independence assumption (behaviorally correlated error, bivocality, and latent heterogeneity) should be completely taken into account in all models using well-known methods.

  15. Three dimensional simulation of hydrogen chloride and hydrogen fluoride during the Airborne Arctic Stratospheric Expedition

    NASA Technical Reports Server (NTRS)

    Kaye, Jack A.; Rood, Richard B.; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.; Allen, Dale J.; Larson, Edmund M.; Coffey, Michael T.; Mankin, William G.; Toon, Geoffrey C.

    1990-01-01

    Simulations of the evolution of stratospheric distributions of hydrogen chloride (HCl) and hydrogen fluoride (HF) have been carried out for the period of the Airborne Arctic Stratospheric Expedition (AASE) with a three-dimensional chemistry-transport model. Simulations were performed assuming only homogeneous gas phase chemistry for HF and both homogeneous gas phase and heterogeneous chemistry for HCl. Results show heterogeneous loss of HCl is needed to provide agreement with infrared column measurements. Estimates of the impact of heterogeneous loss on the global HCl distribution are obtained from the model. Reductions of HCl due to heterogeneous loss are calculated to be localized to regions of high vorticity, even after more than a month of integration.

  16. Interacting Electrons in Graphene: Fermi Velocity Renormalization and Optical Response

    NASA Astrophysics Data System (ADS)

    Stauber, T.; Parida, P.; Trushin, M.; Ulybyshev, M. V.; Boyda, D. L.; Schliemann, J.

    2017-06-01

    We have developed a Hartree-Fock theory for electrons on a honeycomb lattice aiming to solve a long-standing problem of the Fermi velocity renormalization in graphene. Our model employs no fitting parameters (like an unknown band cutoff) but relies on a topological invariant (crystal structure function) that makes the Hartree-Fock sublattice spinor independent of the electron-electron interaction. Agreement with the experimental data is obtained assuming static self-screening including local field effects. As an application of the model, we derive an explicit expression for the optical conductivity and discuss the renormalization of the Drude weight. The optical conductivity is also obtained via precise quantum Monte Carlo calculations which compares well to our mean-field approach.

  17. The 2014 United States National Seismic Hazard Model

    USGS Publications Warehouse

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.

    2015-01-01

    New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.

  18. A local scale assessment of the climate change sensitivity of snow in Pyrenean ski resorts

    NASA Astrophysics Data System (ADS)

    Pesado, Cristina; Pons, Marc; Vilella, Marc; López-Moreno, Juan Ignacio

    2016-04-01

    The Pyrenees host one of the largest ski areas in Europe after the Alps, encompassing the mountain areas of the south of France, the north of Spain, and the small country of Andorra. In this region, winter tourism is one of the main sources of income and a driving force of local development in these mountain communities. However, this activity has been identified as one of the most vulnerable to future climate change due to the projected decrease in natural snow and snowmaking capacity. Moreover, different areas within the same ski resort were shown to have very different vulnerability, depending on the geographic features of the area and the technical management of the slopes. Different areas inside the same ski resort can have very different vulnerability to future climate change based on aspect, steepness, or elevation. Furthermore, the technical management of ski resorts, such as snowmaking and grooming, was identified as having a significant impact on the response of the snowpack in a warmer climate. In this line, two ski resorts were analyzed in depth, taking into account both local geographical features and the effect of the technical management of the runs. Principal Component Analysis was used to classify the main areas of each resort based on geographic features (elevation, aspect, and steepness) and to identify the main representative areas with different local features. The snow energy and mass balance was simulated in the different representative areas using the Cold Regions Hydrological Model (CRHM), assuming different magnitudes of climate warming (increases of 2°C and 4°C in the mean winter temperature), both in natural conditions and assuming technical management of the slopes. These first results show the different sensitivity and vulnerability to climate change depending on the local geography of the resort and the management of the ski runs, underlining the importance of including these variables when analyzing the local vulnerability of a ski resort and the potential adaptation measures in each particular case.

  19. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume a flat FRW universe filled with interacting dark matter and dark energy. For the dark energy we consider the entropy-corrected HDE (ECHDE) model and the entropy-corrected NADE (ECNADE) model. For the entropy-corrected models, we assume logarithmic and power-law corrections. For the ECHDE model, the length scale L is taken to be the Hubble horizon and the future event horizon. The ωde-ωde′ analysis for the different horizons is discussed.

  20. On the Uniqueness and Consistency of Scattering Amplitudes

    NASA Astrophysics Data System (ADS)

    Rodina, Laurentiu

    In this dissertation, we study constraints imposed by locality, unitarity, gauge invariance, the Adler zero, and constructability (scaling under BCFW shifts). In the first part we study scattering amplitudes as the unique mathematical objects which can satisfy various combinations of such principles. In all cases we find that locality and unitarity may be derived from gauge invariance (for Yang-Mills and General Relativity) or from the Adler zero (for the non-linear sigma model and the Dirac-Born-Infeld model), together with mild assumptions on the singularity structure and mass dimension. We also conjecture that constructability and locality together imply gauge invariance, hence also unitarity. All claims are proved through a soft expansion, and in the process we end up re-deriving the well-known leading soft theorems for all four theories. Unlike other proofs of these theorems, we do not assume any form of factorization (unitarity). In the second part we show how tensions arising between gauge invariance (as encoded by spinor helicity variables in four dimensions), locality, unitarity and constructability give rise to various physical properties. These include high-spin no-go theorems, the equivalence principle, and the emergence of supersymmetry from spin 3/2 particles. We also complete the fully on-shell constructability proof of gravity amplitudes, by showing that the improved "bonus" behavior of gravity under BCFW shifts is a simple consequence of Bose symmetry.

  1. 3D Protein structure prediction with genetic tabu search algorithm

    PubMed Central

    2010-01-01

    Background Protein structure prediction (PSP) has important applications in different fields, such as drug design and disease prediction. In protein structure prediction there are two important issues: the design of the structure model and the design of the optimization technology. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified model called the off-lattice AB model. Once the structure model is assumed, optimization technology is needed to search for the best conformation of a protein sequence under that model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve the global optimization problem. In this paper, a hybrid algorithm, which combines the genetic algorithm (GA) and the tabu search (TS) algorithm, is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm, and their combined use improves the efficiency of the algorithm. In these strategies, tabu search introduced into the crossover and mutation operators improves the local search capability, a variable population size strategy maintains the diversity of the population, and a ranking selection strategy improves the chance that an individual with a low energy value enters the next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages of both the genetic algorithm and the tabu search algorithm. It makes use of the multiple search points of the genetic algorithm, and it overcomes the poor hill-climbing capability of the conventional genetic algorithm by using the flexible memory functions of TS. Compared with some previous algorithms, the GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively. PMID:20522256
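
    A minimal sketch of the hybrid scheme described above, assuming a toy continuous energy function in place of the off-lattice AB model energy; the operators, parameter values, and tabu rules are illustrative assumptions, not the GATS implementation.

    ```python
    # Sketch of a genetic algorithm whose offspring are refined by a short tabu search.
    import numpy as np

    rng = np.random.default_rng(1)

    def energy(x):
        # Toy multimodal landscape standing in for the AB-model conformational energy
        return np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * len(x)

    def tabu_refine(x, steps=20, tabu_len=5, step=0.1):
        """Greedy coordinate moves; reversals of recent moves are temporarily forbidden."""
        tabu = []
        for _ in range(steps):
            cands = [(i, s) for i in range(len(x)) for s in (-step, step) if (i, s) not in tabu]
            i, s = min(cands, key=lambda m: energy(x + np.eye(len(x))[m[0]] * m[1]))
            x = x + np.eye(len(x))[i] * s
            tabu.append((i, -s))
            tabu = tabu[-tabu_len:]
        return x

    pop = [rng.uniform(-5, 5, 6) for _ in range(30)]
    for gen in range(50):
        pop.sort(key=energy)
        parents = pop[:10]                       # elitist ("ranking") selection
        children = []
        for _ in range(20):
            a, b = rng.choice(10, 2, replace=False)
            cut = rng.integers(1, 6)
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])   # one-point crossover
            child += rng.normal(0, 0.05, 6)                                # mutation
            children.append(tabu_refine(child))                            # TS inside the variation step
        pop = parents + children
    print("best energy found:", energy(min(pop, key=energy)))
    ```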

  2. Modeling of aluminum/gallium interdiffusion in aluminum gallium arsenide/gallium arsenide heterostructure materials

    NASA Astrophysics Data System (ADS)

    Tai, Cheng-Yu

    There is considerable interest in interdiffusion in III-V based structures, such as AlGaAs/GaAs heterojunctions and superlattices (SL). This topic is of practical and fundamental interest since it relates to the stability of devices based on superlattices and heterojunctions, as well as to fundamental diffusion theory. The main goals of this study are to obtain the Al/Ga interdiffusivity, to understand Al/Ga interdiffusion behavior, and to understand how Si doping enhances the diffusion in AlGaAs/GaAs structures. Our first approach entails experimental studies of Al/Ga interdiffusion using Molecular Beam Epitaxy (MBE) samples of AlGaAs/GaAs structures, with or without Si doping. SUPREM-IV.GS was used to model the Fermi-level dependencies and extract the diffusivities. The experimental results show that Al/Ga interdiffusion in undoped AlGaAs/GaAs structures is small, but can be greatly enhanced in doped materials. The extracted Al/Ga interdiffusivity values match well with the Al/Ga interdiffusivity values reported by other groups, and they appear to be composition-independent. The interdiffusivity values are smaller than published Ga self-diffusivity values, which are often mistakenly assumed to be equivalent to the interdiffusivity. Another set of Al/Ga interdiffusion experiments using AlAs/GaAs SL was performed to study Al/Ga interdiffusion. The experimental results are consistent with the previously discussed heterostructure results. Using Darken's analysis and treating the AlAs/GaAs SL material as a non-ideal solution, ALAMODE was used to model our SL disordering results explicitly. Assuming that the Al/Ga interdiffusivity is different from the Ga and Al self-diffusivities, we extracted the Al self-diffusivity and the Al activity coefficient as a function of composition using published Ga self-diffusivity values. The simulation results fit well with the experimental results. The extracted Al self-diffusivity value is close to the extracted Al/Ga interdiffusivity but different from the Ga self-diffusivity. The last part of this thesis focuses on modeling localized Al/Ga disordering in AlGaAs/GaAs devices. We present a localized disordering process as a solution to controlling the lateral oxidation process in AlGaAs/GaAs materials. SUPREM can predict these localized disordering results and can help to design an annealing process corresponding to the required aperture size in devices.

  3. Determining metal origins and availability in fluvial deposits by analysis of geochemical baselines and solid-solution partitioning measurements and modelling.

    PubMed

    Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo

    2008-12-01

    Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake by biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments, these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on the determination of local environmental availabilities of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key factor influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in the top layers was attributed almost fully to anthropogenic enrichment. The anthropogenic enrichments of Cu and Zn were also well represented by cold 2M HNO3 extraction of site samples; for Pb, the extractions over-estimated the enrichments. Metal partitioning was measured, and the measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits within about one order of magnitude, although a large inter-sample variability was found for Pb.

  4. Complexity analysis of dual-channel game model with different managers' business objectives

    NASA Astrophysics Data System (ADS)

    Li, Ting; Ma, Junhai

    2015-01-01

    This paper considers a dual-channel game model with bounded rationality, using the theory of bifurcations of dynamical systems. The business objectives of the retailers are assumed to be different, which is closer to reality than in previous studies. We study the local stable region of the Nash equilibrium point and find that business objectives can expand the stable region and play an important role in price strategy. One interesting finding is that fiercer competition tends to stabilize the Nash equilibrium. Simulations show the complex behavior of the two-dimensional dynamic system; we find period-doubling bifurcations and chaos. We measure the performance of the model in different periods using the index of average profit. The results show that unstable behavior in an economic system is often an unfavorable outcome. The paper therefore discusses the application of an adaptive adjustment mechanism when the model exhibits chaotic behavior, which allows the retailers to eliminate the negative effects.
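
    A hedged illustration of the kind of bounded-rationality price-adjustment map analyzed in such models. For simplicity, one retailer adjusts its price by a myopic gradient rule while the rival's price is held fixed; the linear demand function and all parameter values are assumptions, not the paper's specification.

    ```python
    # Bounded-rationality ("myopic gradient") price adjustment on a linear demand curve.
    import numpy as np

    a, b, d, c, q = 10.0, 1.2, 0.6, 1.0, 5.0   # demand intercept, own/cross price slopes, unit cost, rival price

    def marginal_profit(p):
        # profit = (p - c) * (a - b*p + d*q); derivative with respect to the own price p:
        return a + d * q + b * c - 2 * b * p

    def orbit(alpha, n=500, p0=3.0):
        """Iterate the adjustment map: raise price when marginal profit is positive."""
        p, traj = p0, []
        for _ in range(n):
            p = p + alpha * p * marginal_profit(p)
            traj.append(p)
        return traj

    for alpha in (0.07, 0.15, 0.18):
        tail = orbit(alpha)[-4:]
        print(f"alpha={alpha}: last iterates {np.round(tail, 3)}")
    # Small adjustment speeds converge to the equilibrium price; larger speeds produce
    # period doubling and eventually chaotic price dynamics.
    ```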

  5. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    DOE PAGES

    Pizzocri, D.; Cappia, F.; Luzzi, L.; ...

    2017-01-31

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
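
    A short sketch of the assumed exponential grain-size law; the asymptotic grain size, rate constant, and units below are illustrative placeholders, not the fitted values of the paper.

    ```python
    # Average grain size as an exponential function of local effective burnup (illustrative values).
    import numpy as np

    d0, d_hbs = 10.0, 0.15      # initial and fully restructured average grain size (micrometres), assumed
    bu_rate = 20.0              # e-folding burnup for restructuring (GWd/tHM), assumed

    def avg_grain_size(bu_eff):
        """Exponential reduction of average grain size with local effective burnup."""
        return d_hbs + (d0 - d_hbs) * np.exp(-bu_eff / bu_rate)

    for bu in (0, 20, 50, 80, 120):
        print(f"burnup {bu:>3} GWd/tHM -> grain size {avg_grain_size(bu):5.2f} um")
    ```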

  6. A non-LTE model for the Jovian methane infrared emissions at high spectral resolution

    NASA Technical Reports Server (NTRS)

    Halthore, Rangasayi N.; Allen, J. E., Jr.; Decola, Philip L.

    1994-01-01

    High resolution spectra of Jupiter in the 3.3 micrometer region have so far failed to reveal either the continuum or the line emissions that can be unambiguously attributed to the ν₃ band of methane (Drossart et al. 1993; Kim et al. 1991). ν₃ line intensities predicted with the help of two simple non-Local Thermodynamic Equilibrium (LTE) models -- a two-level model and a three-level model, using experimentally determined relaxation coefficients -- are shown to be one to three orders of magnitude, respectively, below the 3-sigma noise level of these observations. Predicted ν₄ emission intensities are consistent with observed values. If the methane mixing ratio below the homopause is assumed as 2 × 10⁻³, a value of about 300 K is derived as an upper limit to the temperature of the high stratosphere at microbar levels.

  7. On the orientation of the backbone dipoles in native folds

    PubMed Central

    Ripoll, Daniel R.; Vila, Jorge A.; Scheraga, Harold A.

    2005-01-01

    The role of electrostatic interactions in determining the native fold of proteins has been investigated by analyzing the alignment of peptide bond dipole moments with the local electrostatic field generated by the rest of the molecule, with and without solvent effects. This alignment was calculated for a set of 112 native proteins using charges from a gas-phase potential. Most of the peptide dipoles in this set of proteins are on average aligned with the electrostatic field. The dipole moments associated with α-helical conformations show the best alignment with the electrostatic field, followed by residues in β-strand conformations. The dipole moments associated with other secondary structure elements are on average better aligned than in randomly generated conformations. The alignment of a dipole with the local electrostatic field depends on both the topology of the native fold and the charge distribution assumed for all of the residues. The influences of (i) solvent effects, (ii) different sets of charges, and (iii) the charge distribution assumed for the whole molecule were examined with a subset of 22 proteins, each of which contains <30 ionizable groups. The results show that alternative charge distribution models lead to significant differences among the associated electrostatic fields, whereas the electrostatic field is less sensitive to the particular set of adopted charges themselves (empirical conformational energy program for peptides or parameters for solvation energy). PMID:15894608
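
    A minimal sketch of the alignment measure implied above: the cosine of the angle between a peptide-bond dipole and the Coulomb field generated at its midpoint by the remaining partial charges. The charges, coordinates, and units are placeholders, not an actual force field.

    ```python
    # Alignment of a dipole with the local electrostatic field from surrounding point charges.
    import numpy as np

    def field_at(point, charges, positions):
        """Coulomb field (arbitrary units) at `point` from point charges elsewhere."""
        E = np.zeros(3)
        for q, r in zip(charges, positions):
            dr = point - r
            E += q * dr / np.linalg.norm(dr) ** 3
        return E

    def alignment(dipole_vec, point, charges, positions):
        E = field_at(point, charges, positions)
        return np.dot(dipole_vec, E) / (np.linalg.norm(dipole_vec) * np.linalg.norm(E))

    # Toy example: one dipole probed against three surrounding partial charges
    dipole = np.array([0.0, 0.0, 1.0])
    midpoint = np.array([0.0, 0.0, 0.0])
    charges = [0.4, -0.4, 0.2]
    positions = [np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -3.0]), np.array([2.0, 1.0, 0.0])]
    print("cos(angle) between dipole and local field:",
          round(alignment(dipole, midpoint, charges, positions), 3))
    ```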

  8. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
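
    A hedged sketch of the residual-correction idea: the model-error component of the current residual is estimated by projecting it onto an orthogonal basis spanned by the K nearest entries of a model-error dictionary, and then removed. The dictionary contents, K, and the distance metric are assumptions for illustration only.

    ```python
    # Projection-based removal of the model-error component of a residual during MCMC.
    import numpy as np

    rng = np.random.default_rng(0)
    n_data, dict_size, K = 50, 200, 8

    # Dictionary: rows are stored model-error realizations, keyed by their model parameters
    dict_params = rng.normal(size=(dict_size, 3))
    dict_errors = rng.normal(size=(dict_size, n_data))

    def remove_model_error(residual, current_params):
        # K nearest dictionary entries in parameter space
        idx = np.argsort(np.linalg.norm(dict_params - current_params, axis=1))[:K]
        # Orthogonal basis for the span of the neighboring model-error realizations
        Q, _ = np.linalg.qr(dict_errors[idx].T)            # shape (n_data, K)
        return residual - Q @ (Q.T @ residual)             # strip the projected model-error component

    residual = rng.normal(size=n_data)
    corrected = remove_model_error(residual, np.zeros(3))
    print("residual norm before/after projection:",
          round(np.linalg.norm(residual), 3), round(np.linalg.norm(corrected), 3))
    ```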

  9. Response to “Comment on ‘General rotating quantum vortex filaments in the low-temperature Svistunov model of the local induction approximation’” [Phys. Fluids 26, 119101 (2014)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Gorder, Robert A., E-mail: rav@knights.ucf.edu

    2014-11-15

    In R. A. Van Gorder, “General rotating quantum vortex filaments in the low-temperature Svistunov model of the local induction approximation,” Phys. Fluids 26, 065105 (2014) I discussed properties of generalized vortex filaments exhibiting purely rotational motion under the low-temperature Svistunov model of the local induction approximation. Such solutions are stationary in terms of translational motion. In the Comment [N. Hietala, “Comment on ‘General rotating quantum vortex filaments in the low-temperature Svistunov model of the local induction approximation’ [Phys. Fluids 26, 065105 (2014)],” Phys. Fluids 26, 119101 (2014)], the author criticizes my paper for not including translational motion (although it was clearly stated that the filament motion was assumed rotational). As it turns out, if one is interested in studying the geometric structure of solutions (which was the point of my paper), one obtains the needed qualitative results on the structure of such solutions by studying the purely rotational case. Nevertheless, in this Response I shall discuss the vortex filaments that have both rotational and translational motions. I then briefly discuss why one might want to study such generalized rotating filament solutions, in contrast to simply the standard helical or planar examples (which are really special cases). I also discuss how one can study the time evolution of filaments which exhibit more complicated dynamics than pure translation and rotation. Doing this, one can study non-stationary solutions which initially appear purely rotational and gradually display other dynamics as the filaments evolve.

  10. Evolutionary speed of species invasions.

    PubMed

    García-Ramos, Gisela; Rodríguez, Diego

    2002-04-01

    Successful invasion may depend on the capacity of a species to adjust genetically to a spatially varying environment. This research modeled a species invasion by examining the interaction between a quantitative genetic trait and population density. It assumed: (1) a quantitative genetic trait describes the adaptation of an individual to its local ecological conditions; (2) populations far from the local optimum grow more slowly than those near the optimum; and (3) the evolution of a trait depends on local population density, because differences in local population densities cause asymmetrical gene flow. This genetics-density interaction determined the propagation speed of populations. Numerical simulations showed that populations spread by advancing as two synchronous traveling waves, one for population density and one for trait adaptation. The density wave was a step front that advances, homogenizing populations at their carrying capacity; the adaptation wave was a curve of finite slope that homogenizes populations at full adaptation. In a dimensionless analysis, the largest speed of population expansion corresponded to an almost homogeneous spatial environment, where this model approaches an ecological description such as the Fisher-Skellam model. A large genetic response also favored faster speeds. On a natural scale, evolutionary speeds spanned a wide range of rates and were slower than those of models that consider only demographics. Evolutionary speed increased with high heritability, strong stabilizing selection, and high intrinsic growth rate, and decreased for steeper environmental gradients. An optimal dispersal rate was also indicated, above which evolutionary speed declined; this is expected because dispersal moves individuals further but homogenizes populations genetically, making them maladapted. The evolutionary speed was compared to observed data. Furthermore, a moderate increase in the speed of expansion was predicted for ecological changes related to global warming.

  11. Effective elastic thicknesses of the lithosphere and mechanisms of isostatic compensation in Australia

    NASA Technical Reports Server (NTRS)

    Zuber, Maria T.; Bechtel, Timothy D.; Forsyth, Donald W.

    1989-01-01

    The isostatic compensation of Australia is investigated using an isostatic model for the Australian lithosphere that assumes regional compensation of an elastic plate which undergoes flexure in response to surface and subsurface loading. Using the coherence between Bouguer gravity and topography and two separate gravity/topography data sets, it was found that, for the continent as a whole, loads with wavelengths above 1500 km are locally compensated. Loads with wavelengths in the range 600-1500 km are partially supported by regional stresses, and loads with wavelengths less than 600 km are almost entirely supported by the strength of the lithosphere. It was found that the predicted coherence for a flexural model of a continuous elastic plate does not provide a good fit to the observed coherence of central Australia. The disagreement between model and observations is explained.

  12. The ratio of effective building height to street width governs dispersion of local vehicle emissions

    NASA Astrophysics Data System (ADS)

    Schulte, Nico; Tan, Si; Venkatram, Akula

    2015-07-01

    Analysis of data collected in street canyons located in Hanover, Germany and Los Angeles, USA, suggests that street-level concentrations of vehicle-related pollutants can be estimated with a model that assumes that vertical turbulent transport of emissions dominates the governing processes. The dispersion model relates surface concentrations to traffic flow rate, the effective aspect ratio of the street, and roof level turbulence. The dispersion model indicates that magnification of concentrations relative to those in the absence of buildings is most sensitive to the aspect ratio of the street, which is the ratio of the effective height of the buildings on the street to the width of the street. This result can be useful in the design of transit oriented developments that increase building density to reduce emissions from transportation.

  13. Calibration of a Land Subsidence Model Using InSAR Data via the Ensemble Kalman Filter.

    PubMed

    Li, Liangping; Zhang, Meijing; Katzenstein, Kurt

    2017-11-01

    The application of interferometric synthetic aperture radar (InSAR) has been increasingly used to improve capabilities to model land subsidence in hydrogeologic studies. A number of investigations over the last decade show how spatially detailed time-lapse images of ground displacements could be utilized to advance our understanding for better predictions. In this work, we use simulated land subsidences as observed measurements, mimicking InSAR data to inversely infer inelastic specific storage in a stochastic framework. The inelastic specific storage is assumed as a random variable and modeled using a geostatistical method such that the detailed variations in space could be represented and also that the uncertainties of both characterization of specific storage and prediction of land subsidence can be assessed. The ensemble Kalman filter (EnKF), a real-time data assimilation algorithm, is used to inversely calibrate a land subsidence model by matching simulated subsidences with InSAR data. The performance of the EnKF is demonstrated in a synthetic example in which simulated surface deformations using a reference field are assumed as InSAR data for inverse modeling. The results indicate: (1) the EnKF can be used successfully to calibrate a land subsidence model with InSAR data; the estimation of inelastic specific storage is improved, and uncertainty of prediction is reduced, when all the data are accounted for; and (2) if the same ensemble is used to estimate Kalman gain, the analysis errors could cause filter divergence; thus, it is essential to include localization in the EnKF for InSAR data assimilation. © 2017, National Ground Water Association.
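
    A minimal sketch of an ensemble Kalman filter update with distance-based covariance localization, of the kind used to assimilate InSAR-like data; the one-dimensional state, the placeholder observation operator, and the Gaussian taper are all assumptions for illustration.

    ```python
    # EnKF analysis step with a simple Gaussian localization taper on the cross-covariance.
    import numpy as np

    rng = np.random.default_rng(42)
    n_cells, n_ens, n_obs = 60, 40, 12
    obs_cells = np.linspace(5, 54, n_obs, dtype=int)

    ens = rng.normal(-4.0, 0.5, size=(n_cells, n_ens))                       # prior ensemble of log storage
    H = np.zeros((n_obs, n_cells)); H[np.arange(n_obs), obs_cells] = 1.0     # placeholder observation operator
    obs = rng.normal(-3.5, 0.1, size=n_obs)                                  # synthetic InSAR-like observations
    R = 0.1**2 * np.eye(n_obs)

    X = ens - ens.mean(axis=1, keepdims=True)
    Y = H @ X
    Pxy = X @ Y.T / (n_ens - 1)
    Pyy = Y @ Y.T / (n_ens - 1) + R

    # Localization: damp spurious long-range covariances based on cell distance
    dist = np.abs(np.arange(n_cells)[:, None] - obs_cells[None, :])
    Pxy *= np.exp(-0.5 * (dist / 10.0) ** 2)

    K = Pxy @ np.linalg.inv(Pyy)
    perturbed_obs = obs[:, None] + rng.normal(0, 0.1, size=(n_obs, n_ens))
    analysis = ens + K @ (perturbed_obs - H @ ens)
    print("prior mean:", ens.mean().round(3), " analysis mean:", analysis.mean().round(3))
    ```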

  14. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.

  15. Hot-spot model for accretion disc variability as random process. II. Mathematics of the power-spectrum break frequency

    NASA Astrophysics Data System (ADS)

    Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.

    2013-08-01

    Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
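
    A short sketch of the building block behind such PSDs: exponentially decaying flares contribute Lorentzian components, and superposing flare populations with different decay times produces a multiply broken power spectrum. The time-scales and weights below are arbitrary, chosen only to show two breaks.

    ```python
    # Multiply broken PSD as a superposition of Lorentzians from exponentially decaying flares.
    import numpy as np

    def lorentzian(f, tau, amp):
        # PSD contribution of a flare population with decay time tau
        return amp * tau / (1.0 + (2 * np.pi * f * tau) ** 2)

    f = np.logspace(-4, 1, 400)
    psd = lorentzian(f, tau=200.0, amp=1.0) + lorentzian(f, tau=2.0, amp=0.05)

    # Local log-log slope: flat at low frequencies and steepening toward -2 beyond
    # the break frequencies, which sit near 1 / (2*pi*tau) for each component.
    slope = np.gradient(np.log10(psd), np.log10(f))
    for freq in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):
        i = np.argmin(np.abs(f - freq))
        print(f"f={freq:8.4f}  local slope={slope[i]:6.2f}")
    ```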

  16. Information flow in an atmospheric model and data assimilation

    NASA Astrophysics Data System (ADS)

    Yoon, Young-noh

    2011-12-01

    Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.

  17. Including local rainfall dynamics and uncertain boundary conditions into a 2-D regional-local flood modelling cascade

    NASA Astrophysics Data System (ADS)

    Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo

    2016-04-01

    Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed flood extent data and measured water levels from a 2007 summer flood event on the river Severn. The area of interest is a 7 km reach in which the river passes through the city of Worcester, a low water slope, subcritical reach in which backwater effects are significant. For this domain, the catchment area between flow gauging stations extends over 540 km2. Four hydrological models from the FUSE framework (Framework for Understanding Structural Errors) were set up to simulate the rainfall-runoff process over this area. At this regional scale, a 2-dimensional hydraulic model that solves the local inertial approximation of the shallow water equations was applied to route the flow, whereas the full form of these equations was solved at the local scale to predict the urban flow field. This nested approach hence allows an examination of water fluxes from the catchment to the building scale, while requiring short setup and computational times. An accurate prediction of the magnitude and timing of the flood peak was obtained with the proposed method, in spite of the unusual structure of the rain episode and the complexity of the River Severn system. The findings highlight the importance of estimating boundary condition uncertainty and local rainfall contribution for accurate prediction of river flows and inundation.

  18. On well-posedness of variational models of charged drops.

    PubMed

    Muratov, Cyrill B; Novaga, Matteo

    2016-03-01

    Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges.

  19. On well-posedness of variational models of charged drops

    PubMed Central

    Novaga, Matteo

    2016-01-01

    Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges. PMID:27118921

  20. Propagation of the state change induced by external forces in local interactions

    NASA Astrophysics Data System (ADS)

    Lu, Jianjun; Tokinaga, Shozo

    2016-10-01

    This paper analyses the propagation of the state changes of agents that are induced by external forces applied to a plane. In addition, we propose two models for the behavior of the agents placed on a lattice plane, both of which are affected by local interactions. We first assume that agents are allowed to move to another site to maximise their satisfaction. Second, we utilise a model in which the agents choose activities on each site. The results show that the migration (activity) patterns of agents in both models achieve stability without any external forces. However, when we apply an impulsive external force to the state of the agents, we then observe the propagation of the changes in the agents' states. Using simulation studies, we show the conditions for the propagation of the state changes of the agents. We also show the propagation of the state changes of the agents allocated in scale-free networks and discuss the estimation of the agents' decisions in real state changes. Finally, we discuss the estimation of the agents' decisions in real state temporal changes using economic and social data from Japan and the United States.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domènech, Guillem; Hiramatsu, Takashi; Lin, Chunshan

    We consider a cosmological model in which the tensor mode becomes massive during inflation, and study the Cosmic Microwave Background (CMB) temperature and polarization bispectra arising from the mixing between the scalar mode and the massive tensor mode during inflation. The model assumes the existence of a preferred spatial frame during inflation. The local Lorentz invariance is already broken in cosmology due to the existence of a preferred rest frame. The existence of a preferred spatial frame further breaks the remaining local SO(3) invariance and in particular gives rise to a mass in the tensor mode. At the linear perturbation level, we minimize our model so that the vector mode remains non-dynamical, while the scalar mode is the same as the one in single-field slow-roll inflation. At the non-linear perturbation level, this inflationary massive graviton phase leads to a sizeable scalar-scalar-tensor coupling, much greater than the scalar-scalar-scalar one, as opposed to the conventional case. This scalar-scalar-tensor interaction imprints a scale-dependent feature in the CMB temperature and polarization bispectra. Very intriguingly, we find a surprising similarity between the predicted scale dependence and the scale-dependent non-Gaussianities at low multipoles hinted at in the WMAP and Planck results.

  2. Dynamical implications of bi-directional resource exchange within a meta-ecosystem.

    PubMed

    Messan, Marisabel Rodriguez; Kopp, Darin; Allen, Daniel C; Kang, Yun

    2018-05-05

    The exchange of resources across ecosystem boundaries can have large impacts on ecosystem structures and functions at local and regional scales. In this article, we develop a simple model to investigate the dynamical implications of bi-directional resource exchange between two local ecosystems in a meta-ecosystem framework. In our model, we assume that (1) each local ecosystem acts as both a resource donor and recipient, such that one ecosystem donating resources to another results in a cost to the donating system and a benefit to the recipient; and (2) the costs and benefits of the bi-directional resource exchange between the two ecosystems are correlated in a nonlinear fashion. Our model could apply to resource interactions between terrestrial and aquatic ecosystems that are supported by the literature. Our theoretical results show that bi-directional resource exchange between two ecosystems can indeed generate complicated dynamical outcomes, including the coupled ecosystems having amensalistic, antagonistic, competitive, or mutualistic interactions, with multiple alternative stable states depending on the relative costs and benefits. In addition, if the relative cost of resource exchange for an ecosystem is decreased or the relative benefit is increased, the production of that ecosystem increases; depending on the local environment, however, the production of the other ecosystem may increase or decrease. We expect that our work, by evaluating the potential outcomes of resource exchange theoretically, can facilitate empirical evaluations and advance the understanding of spatial ecosystem ecology, where resource exchanges occur in varied ecosystems through a complicated network. Copyright © 2018 Elsevier Inc. All rights reserved.
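
    A hedged sketch of a two-patch model with bi-directional resource exchange, in which each ecosystem pays a cost for what it exports and gains a benefit from what it imports; the logistic growth terms, exchange functions, and parameter values are illustrative assumptions, not the authors' equations.

    ```python
    # Two coupled patches: logistic growth, export cost, and import benefit.
    import numpy as np
    from scipy.integrate import solve_ivp

    r1, r2, K1, K2 = 1.0, 0.8, 10.0, 8.0     # intrinsic growth rates and carrying capacities
    e12, e21 = 0.15, 0.10                     # export rates from patch 1 to 2 and from 2 to 1
    b12, b21 = 0.8, 1.2                       # benefit per unit of imported resource

    def rhs(t, x):
        x1, x2 = x
        dx1 = r1 * x1 * (1 - x1 / K1) - e12 * x1 + b21 * e21 * x2
        dx2 = r2 * x2 * (1 - x2 / K2) - e21 * x2 + b12 * e12 * x1
        return [dx1, dx2]

    sol = solve_ivp(rhs, (0, 200), [1.0, 1.0], rtol=1e-8)
    print("equilibrium production (patch 1, patch 2):", np.round(sol.y[:, -1], 3))
    # Varying the relative costs (e) and benefits (b) shifts the joint equilibrium and can
    # change the sign of the net interaction between the two patches.
    ```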

  3. Kids, Crime, and Local Television News

    ERIC Educational Resources Information Center

    Yanich, Danilo

    2005-01-01

    The vast majority of crime reporting occurs on local television news and in newspapers. Although crimes are extraordinary events, they assume an ordinariness that only daily reporting can give them. The obvious question is what does the news tell us about crime. This article compares the coverage of adult crime and the coverage of what the author…

  4. Disaster management and the critical thinking skills of local emergency managers: correlations with age, gender, education, and years in occupation.

    PubMed

    Peerbolte, Stacy L; Collins, Matthew Lloyd

    2013-01-01

    Emergency managers must be able to think critically in order to identify and anticipate situations, solve problems, make judgements and decisions effectively and efficiently, and assume and manage risk. Heretofore, a critical thinking skills assessment of local emergency managers had yet to be conducted that tested for correlations among age, gender, education, and years in occupation. An exploratory descriptive research design, using the Watson-Glaser Critical Thinking Appraisal-Short Form (WGCTA-S), was employed to determine the extent to which a sample of 54 local emergency managers demonstrated the critical thinking skills associated with the ability to assume and manage risk as compared to the critical thinking scores of a group of 4,790 peer-level managers drawn from an archival WGCTA-S database. This exploratory design suggests that the local emergency managers, surveyed in this study, had lower WGCTA-S critical thinking scores than their equivalents in the archival database with the exception of those in the high education and high experience group. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  5. Effect of misspecification of gene frequency on the two-point LOD score.

    PubMed

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models, were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.

  6. Push-pull tracer tests: Their information content and use for characterizing non-Fickian, mobile-immobile behavior: INFORMATION CONTENT OF PUSH-PULL TESTS

    DOE PAGES

    Hansen, Scott K.; Berkowitz, Brian; Vesselinov, Velimir V.; ...

    2016-12-01

    Path reversibility and radial symmetry are often assumed in push-pull tracer test analysis. In reality, heterogeneous flow fields mean that both assumptions are idealizations. In this paper, to understand their impact, we perform a parametric study which quantifies the scattering effects of ambient flow, local-scale dispersion, and velocity field heterogeneity on push-pull breakthrough curves and compares them to the effects of mobile-immobile mass transfer (MIMT) processes including sorption and diffusion into secondary porosity. We identify specific circumstances in which MIMT overwhelmingly determines the breakthrough curve, which may then be considered uninformative about drift and local-scale dispersion. Assuming path reversibility, we develop a continuous-time-random-walk-based interpretation framework which is flow-field-agnostic and well suited to quantifying MIMT. Adopting this perspective, we show that the radial flow assumption is often harmless: to the extent that solute paths are reversible, the breakthrough curve is uninformative about velocity field heterogeneity. Our interpretation method determines a mapping function (i.e., subordinator) from travel time in the absence of MIMT to travel time in its presence. A mathematical theory allowing this function to be directly “plugged into” an existing Laplace-domain transport model to incorporate MIMT is presented and demonstrated. Algorithms implementing the calibration are presented and applied to interpretation of data from a push-pull test performed in a heterogeneous environment. A successful four-parameter fit is obtained, of comparable fidelity to one obtained using a million-node 3-D numerical model. In conclusion, we demonstrate analytically and numerically how push-pull tests quantifying MIMT are sensitive to remobilization, but not immobilization, kinetics.

  7. Push-pull tracer tests: Their information content and use for characterizing non-Fickian, mobile-immobile behavior: INFORMATION CONTENT OF PUSH-PULL TESTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; Berkowitz, Brian; Vesselinov, Velimir V.

    Path reversibility and radial symmetry are often assumed in push-pull tracer test analysis. In reality, heterogeneous flow fields mean that both assumptions are idealizations. In this paper, to understand their impact, we perform a parametric study which quantifies the scattering effects of ambient flow, local-scale dispersion, and velocity field heterogeneity on push-pull breakthrough curves and compares them to the effects of mobile-immobile mass transfer (MIMT) processes including sorption and diffusion into secondary porosity. We identify specific circumstances in which MIMT overwhelmingly determines the breakthrough curve, which may then be considered uninformative about drift and local-scale dispersion. Assuming path reversibility, we develop a continuous-time-random-walk-based interpretation framework which is flow-field-agnostic and well suited to quantifying MIMT. Adopting this perspective, we show that the radial flow assumption is often harmless: to the extent that solute paths are reversible, the breakthrough curve is uninformative about velocity field heterogeneity. Our interpretation method determines a mapping function (i.e., subordinator) from travel time in the absence of MIMT to travel time in its presence. A mathematical theory allowing this function to be directly “plugged into” an existing Laplace-domain transport model to incorporate MIMT is presented and demonstrated. Algorithms implementing the calibration are presented and applied to interpretation of data from a push-pull test performed in a heterogeneous environment. A successful four-parameter fit is obtained, of comparable fidelity to one obtained using a million-node 3-D numerical model. In conclusion, we demonstrate analytically and numerically how push-pull tests quantifying MIMT are sensitive to remobilization, but not immobilization, kinetics.

  8. Fluid-assisted melting in a collisional orogen

    NASA Astrophysics Data System (ADS)

    Berger, A.; Burri, T.; Engi, M.; Roselle, G. T.

    2003-04-01

    The Southern Steep Belt (SSB) of the Central Alps is the location of backthrusting during syn- to post-collisional deformation. From its metamorphic evolution and lithological contents, the SSB has been interpreted as a tectonic accretion channel (TAC [1]). The central part of the SSB is additionally characterized by anatexites, leucogranitic aplites and pegmatites. Dehydration melting of muscovite is rare but did occur locally. Moreover, there is no evidence of dehydration melting of biotite, in that the products of incongruent melting reactions (garnet, opx or cordierite) are missing. The melts were mainly produced by the infiltration of an external aqueous fluid. The fluids must have originated from the breakdown of hydrous minerals at temperatures below the water-saturated solidus of the quartz-feldspar system, such that the liberated fluids could not be trapped in the melt. Using the thermal modeling program MELONPIT [2] and assuming that solid fragments ascended together with tectonically accreted radioactive material, a complex thermal evolution inside the TAC has been derived. During subduction of the downgoing plate, isotherms were locally inverted and then subsequently relaxed when subduction slowed down. At the collisional stage a small region developed where the isotherms were still bent and where temperatures increased during decompression. Assuming that dehydration reactions were followed by upward flow of the fluids released from this region, fluid-present partial melting was triggered. The flow direction of the fluid was controlled by the pressure gradient and the steeply oriented foliations in the SSB. According to the model, the area of upward flowing fluids should be limited to the SSB. This is consistent with the observed regional distribution of leucosomes derived from in-situ melts. [1] Engi et al. (2001) Geology 29: 1143-1146 [2] Roselle et al. (2002) Am. J. Sci. 302: 381-409

  9. Atmospheric Transference of the Toxic Burden of Atmosphere-Surface Exchangeable Pollutants to the Great Lakes Region

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Perlinger, J. A.; Giang, A.; Zhang, H.; Selin, N. E.; Wu, S.

    2016-12-01

    Toxic pollutants that share certain chemical properties undergo repeated emission and deposition between Earth's surfaces and the atmosphere. Following their emission through anthropogenic activities, they are transported locally, regionally or globally through the atmosphere, are deposited, and impact local ecosystems, in some cases as a result of bioaccumulation in food webs. We call them atmosphere-surface exchangeable pollutants, or "ASEPs", a group comprising thousands of chemicals. We are studying potential future contamination in the Great Lakes region by modeling future scenarios for three compounds/compound classes: mercury, polychlorinated biphenyl compounds, and polycyclic aromatic hydrocarbons. In this presentation we focus on mercury and future scenarios of contamination of the Great Lakes region. The atmospheric transport of mercury under specific scenarios will be discussed. The global 3-D chemical transport model GEOS-Chem has been applied to estimate future atmospheric concentrations and deposition rates of mercury in the Great Lakes region for selected future scenarios of emissions and climate. We find that, assuming no changes in climate, the annual mean net deposition flux of mercury to the Great Lakes region may increase by approximately 50% over 2005 levels by 2050, without global or regional policies addressing mercury, air pollution, and climate. In contrast, we project that the combination of global and North American action on mercury could lead to a 21% reduction in deposition from 2005 levels by 2050. US action alone results in a projected 18% reduction over 2005 levels by 2050. We also find that, assuming no changes in anthropogenic emissions, climate change and biomass burning emissions would, respectively, cause the annual mean net deposition flux of mercury to the Great Lakes region to increase by approximately 5% and decrease by approximately 2% over 2000 levels by 2050.

  10. A 1-D evolutionary model for icy satellites, applied to Enceladus

    NASA Astrophysics Data System (ADS)

    Prialnik, Dina; Malamud, Uri

    2015-11-01

    A 1-D long-term evolution code for icy satellites is presented, which couples multiple processes: water migration, geochemical reactions, water and silicate phase transitions, crystallization, compaction by self-gravity, and ablation. The code takes into account various energy sources: tidal heating, radiogenic heating, geochemical energy released by serpentinization or absorbed by mineral dehydration, gravitational energy, and insolation. It includes heat transport by conduction, convection, and advection.The code is applied to Enceladus, by guessing the initial conditions that would render a structure compatible with present-day observations, and adopting a homogeneous initial structure. Assuming that the satellite has been losing water continually along its evolution, it follows that it was formed as a more massive, more ice-rich and more porous object, and gradually transformed into its present day state, due to sustained tidal heating. Several initial compositions and evolution scenarios are considered, and the evolution is simulated for the age of the Solar System. The results corresponding to the present configuration are confronted with the available observational constraints. The present configuration is shown to be differentiated into a pure icy mantle, several tens of km thick, overlying a rocky core, composed of dehydrated rock in the central part and hydrated rock in the outer part. Such a differentiated structure is obtained not only for Enceladus, but for other medium size ice-rich bodies as well.Predictions for Enceladus are a higher rock/ice mass ratio than previously assumed, and a thinner ice mantle, compatible with recent estimates based on gravity field measurements. Although, obviously, the 1-D model cannot be used to explain local phenomena, it sheds light on the internal structure invoked in explanations of localized features and activities.

  11. Tidal stripping and the structure of dwarf galaxies in the Local Group

    NASA Astrophysics Data System (ADS)

    Fattahi, Azadeh; Navarro, Julio F.; Frenk, Carlos S.; Oman, Kyle A.; Sawala, Till; Schaller, Matthieu

    2018-05-01

    The shallow faint-end slope of the galaxy mass function is usually reproduced in Λ cold dark matter (ΛCDM) galaxy formation models by assuming that the fraction of baryons that turn into stars drops steeply with decreasing halo mass and essentially vanishes in haloes with maximum circular velocities Vmax < 20-30 km s-1. Dark-matter-dominated dwarfs should therefore have characteristic velocities of about that value, unless they are small enough to probe only the rising part of the halo circular velocity curve (i.e. half-mass radii, r1/2 ≪ 1 kpc). Many dwarfs have properties in disagreement with this prediction: they are large enough to probe their halo Vmax but their characteristic velocities are well below 20 km s-1. These `cold faint giants' (an extreme example is the recently discovered Crater 2 Milky Way satellite) can only be reconciled with our ΛCDM models if they are the remnants of once massive objects heavily affected by tidal stripping. We examine this possibility using the APOSTLE cosmological hydrodynamical simulations of the Local Group. Assuming that low-velocity-dispersion satellites have been affected by stripping, we infer their progenitor masses, radii, and velocity dispersions, and find them in remarkable agreement with those of isolated dwarfs. Tidal stripping also explains the large scatter in the mass discrepancy-acceleration relation in the dwarf galaxy regime: tides remove preferentially dark matter from satellite galaxies, lowering their accelerations below the amin ˜ 10-11 m s-2 minimum expected for isolated dwarfs. In many cases, the resulting velocity dispersions are inconsistent with the predictions from Modified Newtonian Dynamics, a result that poses a possibly insurmountable challenge to that scenario.

  12. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related to agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the improved fit of the local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.
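
    The local-fitting step at the heart of GWR can be sketched as a weighted least-squares regression repeated at every location, with weights that decay with distance under a kernel. The snippet below is illustrative only: it uses synthetic data, a Gaussian kernel, and a fixed bandwidth, whereas the study used municipality-level data and an adaptive bandwidth.

```python
import numpy as np

# Geographically weighted regression (GWR) sketch: one weighted least-squares
# fit per location, with Gaussian distance-decay weights. Data are synthetic.

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))          # site coordinates (arbitrary units)
X = np.column_stack([np.ones(n),                   # intercept
                     rng.normal(size=n),           # e.g. an agrarian-activity covariate
                     rng.normal(size=n)])          # e.g. a climatic covariate
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)  # synthetic fire-density response

def gwr_coefficients(X, y, coords, bandwidth=20.0):
    """Return an (n, p) array of local coefficient estimates."""
    betas = np.empty((len(y), X.shape[1]))
    for i, (xi, yi) in enumerate(coords):
        d2 = np.sum((coords - (xi, yi))**2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))     # Gaussian kernel weights
        XtW = X.T * w                              # X.T @ diag(w) without forming diag(w)
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas

local_betas = gwr_coefficients(X, y, coords)
print("spatial spread of local coefficients:", local_betas.std(axis=0))
```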

  13. Budget impact model of adding erlotinib to a regimen of gemcitabine for the treatment of locally advanced, nonresectable or metastatic pancreatic cancer.

    PubMed

    Danese, Mark D; Reyes, Carolina; Northridge, Kelly; Lubeck, Deborah; Lin, Chin-Yu; O'Connor, Paula

    2008-04-01

    The aim of this study was to determine the budget impact of adding erlotinib to a US health plan insurer's formulary as a combination therapy with gemcitabine for the treatment of nonresectable pancreatic cancer. An Excel-based budget impact model was developed to evaluate the costs for National Comprehensive Cancer Network guideline-recommended treatment options for patients with locally advanced, nonresectable or metastatic pancreatic cancer from the perspective of a US managed care plan. The model compared treatment with gemcitabine alone and in combination with erlotinib, including the costs of treatment, adverse events (AEs), and administration. Inputs for the model were derived from the Surveillance, Epidemiology and End Results Cancer Registry, clinical trials, and publicly available sources and were varied in sensitivity analyses to identify influential inputs. The model addressed first-line use in a single indication and assumed that the proportion of patients aged ≥65 years in a managed care organization was the same as in the general population. The model did not account for patient copayments for oral medications, a factor that could lower a plan's overall cost further than estimated herein. For a hypothetical managed care plan with 500,000 members, the model estimated 43 newly diagnosed pancreatic cancer cases each year, of which 56% (n=24) would be treated with gemcitabine as first-line therapy. Assuming that erlotinib were added to the treatment regimen in 40% (n=10) of gemcitabine-treated patients for 15.7 weeks of therapy per patient, the expected 1-year cost in 2006 dollars would be US $466,700 compared with $346,700 had all patients been treated with gemcitabine alone. Administration costs accounted for 10% to 12% of total costs, while AE management costs made up 14% to 16% of total costs. These estimates corresponded to an incremental cost of $120,000, or $0.020 per member per month (PMPM). The results were relatively insensitive to drug costs, drug administration costs, and costs of treatment of AEs based on sensitivity analyses. In this analysis of the budget impact of adding erlotinib to the health plan formulary, in combination with gemcitabine as first-line treatment of locally advanced, nonresectable or metastatic pancreatic cancer in the United States, the budget impact was $0.020 PMPM. The relatively low incidence of pancreatic cancer and the assumption of treating only 23% of these patients with erlotinib were likely the principal reasons for the low budgetary impact of erlotinib. In this model and using these reasonable assumptions, the results suggested that the incremental cost impact on a PMPM basis may be small.
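
    The headline PMPM figure follows from simple arithmetic on the numbers quoted above; the short check below reproduces it.

```python
# Back-of-the-envelope check of the per-member-per-month (PMPM) figure
# reported above; all inputs are taken directly from the abstract.

members = 500_000
cost_with_erlotinib = 466_700      # 1-year cost with erlotinib added for 40% of treated patients ($)
cost_gemcitabine_only = 346_700    # 1-year cost, gemcitabine alone ($)

incremental = cost_with_erlotinib - cost_gemcitabine_only
pmpm = incremental / (members * 12)

print(f"incremental budget impact: ${incremental:,.0f}")   # $120,000
print(f"PMPM: ${pmpm:.3f}")                                # $0.020
```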

  14. Intrachain exciton dynamics in conjugated polymer chains in solution.

    PubMed

    Tozer, Oliver Robert; Barford, William

    2015-08-28

    We investigate exciton dynamics on a polymer chain in solution induced by the Brownian rotational motion of the monomers. Poly(para-phenylene) is chosen as the model system and excitons are modeled via the Frenkel exciton Hamiltonian. The Brownian fluctuations of the torsional modes are modeled via the Langevin equation. The rotation of monomers in polymer chains in solution has a number of important consequences for the excited state properties. First, the dihedral angles assume a thermal equilibrium distribution, which causes off-diagonal disorder in the Frenkel Hamiltonian. This disorder Anderson localizes the Frenkel exciton center-of-mass wavefunctions into super-localized local exciton ground states (LEGSs) and higher-energy, more delocalized quasi-extended exciton states (QEESs). LEGSs correspond to chromophores on polymer chains. The second consequence of these low-frequency rotations is that their coupling to the exciton wavefunction causes local planarization and the formation of an exciton-polaron. This torsional relaxation causes additional self-localization. Finally, and crucially, the torsional dynamics cause the Frenkel Hamiltonian to be time-dependent, leading to exciton dynamics. We identify two distinct types of dynamics. At low temperatures, the torsional fluctuations act as a perturbation on the polaronic nature of the exciton state. Thus, the exciton dynamics at low temperatures is a small-displacement diffusive adiabatic motion of the exciton-polaron as a whole. The diffusion constant depends linearly on temperature, indicating an activationless process. As the temperature increases, however, the diffusion constant increases at a faster than linear rate, indicating that a second, non-adiabatic dynamics mechanism begins to dominate. Excitons are thermally activated into higher-energy, more delocalized exciton states (i.e., LEGSs and QEESs). These states are not self-localized by local torsional planarization. During the exciton's temporary occupation of a LEGS, and particularly of a quasi-band QEES, its motion is semi-ballistic with a large group velocity. After a short period of rapid transport, the exciton wavefunction collapses again into an exciton-polaron state. We present a simple model for the activated dynamics that is in agreement with the data.

  15. Integrating count and detection–nondetection data to model population dynamics

    USGS Publications Warehouse

    Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell

    2017-01-01

    There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.
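
    The shared latent-abundance structure described above can be made concrete with a small simulation: abundance evolves through survival and gains, and both data types are generated from the same individual-level detection probability. The sketch below uses illustrative parameter values, not estimates from the Barred Owl analysis.

```python
import numpy as np

# Simulation sketch of a dynamic model with two observation processes sharing
# one latent abundance and one individual-level detection probability p.

rng = np.random.default_rng(1)
n_sites, n_years = 50, 20
phi, gamma, p = 0.8, 0.6, 0.4      # survival, per-site gains, detection prob. (assumed)

N = np.zeros((n_sites, n_years), dtype=int)
N[:, 0] = rng.poisson(2.0, n_sites)            # initial abundance
for t in range(1, n_years):
    survivors = rng.binomial(N[:, t - 1], phi)
    gains = rng.poisson(gamma, n_sites)        # reproduction + immigration
    N[:, t] = survivors + gains

# Observation model 1: counts, each individual detected with probability p
counts = rng.binomial(N, p)

# Observation model 2: detection/nondetection, site detected if >= 1 individual seen
detected = rng.random(N.shape) < (1.0 - (1.0 - p) ** N)

print("mean abundance by year:", N.mean(axis=0).round(2))
print("naive occupancy by year:", detected.mean(axis=0).round(2))
```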

  16. Is cosmic acceleration proven by local cosmological probes?

    NASA Astrophysics Data System (ADS)

    Tutusaus, I.; Lamine, B.; Dupays, A.; Blanchard, A.

    2017-06-01

    Context. The cosmological concordance model (ΛCDM) matches the cosmological observations exceedingly well. This model has become the standard cosmological model with the evidence for an accelerated expansion provided by the type Ia supernovae (SNIa) Hubble diagram. However, the robustness of this evidence has been addressed recently with somewhat diverging conclusions. Aims: The purpose of this paper is to assess the robustness of the conclusion that the Universe is indeed accelerating if we rely only on low-redshift (z ≲ 2) observations, that is to say with SNIa, baryonic acoustic oscillations, measurements of the Hubble parameter at different redshifts, and measurements of the growth of matter perturbations. Methods: We used the standard statistical procedure of minimizing the χ² function for the different probes to quantify the goodness of fit of a model for both ΛCDM and a simple nonaccelerated low-redshift power law model. In this analysis, we do not assume that supernova intrinsic luminosity is independent of the redshift, which has been a fundamental assumption in most previous studies, although it cannot be tested. Results: We have found that, when SNIa intrinsic luminosity is not assumed to be redshift independent, a nonaccelerated low-redshift power law model is able to fit the low-redshift background data as well as, or even slightly better than, ΛCDM. When measurements of the growth of structures are added, a nonaccelerated low-redshift power law model still provides an excellent fit to the data for all the luminosity evolution models considered. Conclusions: Without the standard assumption that supernova intrinsic luminosity is independent of the redshift, low-redshift probes are consistent with a nonaccelerated universe.
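
    For reference, the fitting procedure implied above can be summarized as follows; the notation is ours, and the power-law line assumes a spatially flat expansion a(t) ∝ t^n.

```latex
% goodness of fit for each probe, minimized over model parameters \theta
\chi^2(\theta) = \sum_i \frac{\bigl[y_i - y(z_i;\theta)\bigr]^2}{\sigma_i^2}
% a low-redshift power-law expansion a(t) \propto t^n gives
H(z) = H_0\,(1+z)^{1/n}, \qquad q_0 = \frac{1-n}{n}
```

    Acceleration (q_0 < 0) requires n > 1, so the nonaccelerated alternative tested above corresponds to n ≤ 1, with n = 1 being a coasting universe.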

  17. Integrating count and detection-nondetection data to model population dynamics.

    PubMed

    Zipkin, Elise F; Rossman, Sam; Yackulic, Charles B; Wiens, J David; Thorson, James T; Davis, Raymond J; Grant, Evan H Campbell

    2017-06-01

    There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture-recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection-nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection-nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection-nondetection data (1995-2014) with newly collected count data (2015-2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance. © 2017 by the Ecological Society of America.

  18. Convective drying of osmo-dehydrated apple slices: kinetics and spatial behavior of effective mass diffusivity and moisture content

    NASA Astrophysics Data System (ADS)

    de Farias Aires, Juarez Everton; da Silva, Wilton Pereira; de Almeida Farias Aires, Kalina Lígia Cavalcante; da Silva Júnior, Aluízio Freire; da Silva e Silva, Cleide Maria Diniz Pereira

    2018-04-01

    The main objective of this study is to present a numerical liquid-diffusion model describing the convective drying of apple slices submitted to an osmotic dehydration pretreatment, capable of predicting the spatial distribution of effective mass diffusivity values in apple slabs. Two models that use numerical solutions of the two-dimensional diffusion equation in Cartesian coordinates with a boundary condition of the third kind were proposed to describe drying. The first does not consider the shrinkage of the product and assumes that the process parameters remain constant throughout the convective drying. The second considers the shrinkage of the product and assumes that the effective mass diffusivity of water varies according to the local value of the water content in the apple samples. Process parameters were estimated from experimental data through an optimizer coupled to the numerical solutions. The osmotic pretreatment did not reduce the drying time relative to the fresh fruits when the drying temperature was 40 °C. The use of a temperature of 60 °C led to a reduction in the drying time. The model that considers the variations in the dimensions of the product and the variation in the effective mass diffusivity proved more adequate for describing the process.
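
    In generic form, the governing equation and the boundary condition of the third kind referred to above can be written as below; the convective mass-transfer coefficient h_m and equilibrium moisture content M_eq are our symbols, and in the second model D_eff is taken to depend on the local moisture content M.

```latex
\frac{\partial M}{\partial t}
  = \frac{\partial}{\partial x}\!\left(D_{\mathrm{eff}}(M)\,\frac{\partial M}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(D_{\mathrm{eff}}(M)\,\frac{\partial M}{\partial y}\right),
\qquad
-\,D_{\mathrm{eff}}\left.\frac{\partial M}{\partial n}\right|_{\mathrm{surf}}
  = h_m\left(M_{\mathrm{surf}} - M_{\mathrm{eq}}\right)
```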

  19. Understanding the complex dynamics of stock markets through cellular automata

    NASA Astrophysics Data System (ADS)

    Qiu, G.; Kandhai, D.; Sloot, P. M. A.

    2007-04-01

    We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
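
    A toy version of such a lattice market can be written in a few lines: fundamentalists trade toward a fixed fundamental value, imitators copy the majority decision of their four neighbours, and the price moves with the net demand. The rules and parameters below are illustrative, not the authors' exact update scheme.

```python
import numpy as np

# Toy lattice market with local interactions: fundamentalists and imitators.

rng = np.random.default_rng(2)
L, steps = 50, 2000
fundamental = 100.0                       # assumed fundamental value
price = 100.0
lam = 0.2 / L**2                          # price impact per unit net demand (assumed)

is_fundamentalist = rng.random((L, L)) < 0.3
action = rng.choice([-1, 1], size=(L, L))        # +1 buy, -1 sell

log_returns = []
for _ in range(steps):
    # imitators copy the sign of the summed neighbour actions (random tie-break)
    nb = (np.roll(action, 1, 0) + np.roll(action, -1, 0) +
          np.roll(action, 1, 1) + np.roll(action, -1, 1))
    imitate = np.where(nb != 0, np.sign(nb), rng.choice([-1, 1], size=(L, L)))
    # fundamentalists buy below the fundamental value, sell above it
    fundamentalist_action = 1 if price < fundamental else -1
    action = np.where(is_fundamentalist, fundamentalist_action, imitate)
    net_demand = action.sum()
    new_price = price * np.exp(lam * net_demand)
    log_returns.append(np.log(new_price / price))
    price = new_price

r = np.array(log_returns)
excess_kurtosis = ((r - r.mean())**4).mean() / r.var()**2 - 3.0
print(f"excess kurtosis of log-returns (heavy tails if > 0): {excess_kurtosis:.2f}")
```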

  20. Dissipative dark matter halos: The steady state solution. II.

    NASA Astrophysics Data System (ADS)

    Foot, R.

    2018-05-01

    Within the mirror dark matter model and dissipative dark matter models in general, halos around galaxies with active star formation (including spirals and gas-rich dwarfs) are dynamical: they expand and contract in response to heating and cooling processes. Ordinary type II supernovae (SNe) can provide the dominant heat source, which is possible if a kinetic mixing interaction exists with strength ε ∼ 10^-9-10^-10. Dissipative dark matter halos can be modeled as a fluid governed by Euler's equations. Around sufficiently isolated and unperturbed galaxies the halo can relax to a steady state configuration, where heating and cooling rates locally balance and hydrostatic equilibrium prevails. These steady state conditions can be solved to derive the physical properties, including the halo density and temperature profiles, for model galaxies. Here, we consider idealized spherically symmetric galaxies within the mirror dark particle model, as in our earlier paper [Phys. Rev. D 97, 043012 (2018), 10.1103/PhysRevD.97.043012], but we assume that the local halo heating in the SN vicinity dominates over radiative sources. With this assumption, physically interesting steady state solutions arise which we compute for a representative range of model galaxies. The end result is a rather simple description of the dark matter halo around idealized spherically symmetric systems, characterized in principle by only one parameter, with physical properties that closely resemble the empirical properties of disk galaxies.
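
    In symbols, the steady state described above amounts to requiring, at each radius of the idealized spherically symmetric halo, hydrostatic support together with a local balance between supernova-sourced heating and radiative cooling; the notation below is ours, not the paper's.

```latex
\frac{dP}{dr} = -\rho(r)\,g(r),
\qquad
\Gamma_{\mathrm{heat}}(r) = \Lambda_{\mathrm{cool}}(r)
```

    Here ρ and P are the density and pressure of the dark fluid, g(r) is the local gravitational acceleration, and Γ_heat and Λ_cool are the local heating and cooling rates per unit volume; solving the pair yields the density and temperature profiles referred to above.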

  1. Micromechanics Modeling of Composites Subjected to Multiaxial Progressive Damage in the Constituents

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2010-01-01

    The high-fidelity generalized method of cells composite micromechanics model is extended to include constituent-scale progressive damage via a proposed damage model. The damage model assumes that all material nonlinearity is due to damage in the form of reduced stiffness, and it uses six scalar damage variables (three for tension and three for compression) to track the damage. Damage strains are introduced that account for interaction among the strain components and that also allow the development of the damage evolution equations based on the constituent material uniaxial stress strain response. Local final-failure criteria are also proposed based on mode-specific strain energy release rates and total dissipated strain energy. The coupled micromechanics-damage model described herein is applied to a unidirectional E-glass/epoxy composite and a proprietary polymer matrix composite. Results illustrate the capability of the coupled model to capture the vastly different character of the monolithic (neat) resin matrix and the composite in response to far-field tension, compression, and shear loading.

  2. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
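
    The max-rule decision stage described above is easy to simulate: every display element contributes a unit-variance Gaussian value on each relevant feature dimension, the values are summed across dimensions, and the element with the largest sum is reported. The d' value and the conjunction display layout below are illustrative.

```python
import numpy as np

# Monte Carlo sketch of the maximum-of-noisy-representations decision rule.

rng = np.random.default_rng(3)
DPRIME = 1.5          # target-distractor separation per feature dimension (assumed)

def accuracy(set_size, target_means, distractor_means, trials=100_000):
    """Proportion of trials on which the target has the maximum combined value."""
    n_dims = len(target_means)
    tgt = rng.normal(size=(trials, n_dims)) + target_means
    dis = rng.normal(size=(trials, set_size - 1, n_dims)) + distractor_means
    return np.mean(tgt.sum(axis=1) > dis.sum(axis=2).max(axis=1))

for n in (4, 8, 16):
    # feature search: one relevant dimension, homogeneous distractors
    feat = accuracy(n, [DPRIME], [0.0])
    # conjunction search: two dimensions, each distractor matches the target on one
    half = (n - 1) // 2
    conj_means = [[DPRIME, 0.0]] * half + [[0.0, DPRIME]] * (n - 1 - half)
    conj = accuracy(n, [DPRIME, DPRIME], conj_means)
    print(f"set size {n:2d}:  feature {feat:.3f}   conjunction {conj:.3f}")
```

    The simulated accuracy declines with set size and is lower for conjunctions than for feature displays, the qualitative pattern the model is meant to capture without any serial binding mechanism.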

  3. Supply network science: Emergence of a new perspective on a classical field

    NASA Astrophysics Data System (ADS)

    Brintrup, Alexandra; Ledwoch, Anna

    2018-03-01

    Supply networks emerge as companies procure goods from one another to produce their own products. Due to a chronic lack of data, studies on these emergent structures have long focussed on local neighbourhoods, assuming simple, chain-like structures. However, studies conducted since 2001 have shown that supply chains are indeed complex networks that exhibit similar organisational patterns to other network types. In this paper, we present a critical review of theoretical and model-based studies which conceptualise supply chains from a network science perspective, showing that empirical data do not always support theoretical models that were developed, and argue that different industrial settings may present different characteristics. Consequently, a need that arises is the development and reconciliation of interpretation across different supply network layers such as contractual relations, material flow, financial links, and co-patenting, as these different projections tend to remain in disciplinary siloes. Other gaps include a lack of null models that show whether the observed properties are meaningful, a lack of dynamical models that can inform how layers evolve and adapt to changes, and a lack of studies that investigate how local decisions enable emergent outcomes. We conclude by asking the network science community to help bridge these gaps by engaging with this important area of research.

  4. Supply network science: Emergence of a new perspective on a classical field.

    PubMed

    Brintrup, Alexandra; Ledwoch, Anna

    2018-03-01

    Supply networks emerge as companies procure goods from one another to produce their own products. Due to a chronic lack of data, studies on these emergent structures have long focussed on local neighbourhoods, assuming simple, chain-like structures. However, studies conducted since 2001 have shown that supply chains are indeed complex networks that exhibit similar organisational patterns to other network types. In this paper, we present a critical review of theoretical and model-based studies which conceptualise supply chains from a network science perspective, showing that empirical data do not always support theoretical models that were developed, and argue that different industrial settings may present different characteristics. Consequently, a need that arises is the development and reconciliation of interpretation across different supply network layers such as contractual relations, material flow, financial links, and co-patenting, as these different projections tend to remain in disciplinary siloes. Other gaps include a lack of null models that show whether the observed properties are meaningful, a lack of dynamical models that can inform how layers evolve and adapt to changes, and a lack of studies that investigate how local decisions enable emergent outcomes. We conclude by asking the network science community to help bridge these gaps by engaging with this important area of research.

  5. CCN numerical simulations for the GoAmazon with the OLAM model

    NASA Astrophysics Data System (ADS)

    Ramos-da-Silva, R.; Haas, R.; Barbosa, H. M.; Machado, L.

    2015-12-01

    Manaus is a large city in the center of the Amazon rainforest. The GoAmazon field project is exploring the region through various data collection and modeling efforts to investigate the impacts of the urban pollution plume on the surrounding pristine areas. In this study a numerical model was applied to simulate the atmospheric dynamics and the evolution of Cloud Condensation Nuclei (CCN) concentrations. Simulations with and without the urban plume were performed to identify its dynamics and local impacts. The results show that the land surface characteristics have an important role in the CCN distribution and rainfall over the region. South of Manaus, the atmospheric dynamics is dominated by cloud streets that are aligned with the trade winds and the Amazon River. North of Manaus, the Negro River produces the advection of a more stable atmosphere, causing a higher CCN concentration in the boundary layer. Assuming a locally high CCN concentration in the Manaus boundary layer region, the simulations show that the land-atmosphere interaction sets important dynamics on the plume. The model shows that the CCN plume moves along with the flow towards the southwest of Manaus, following the cloud streets and the river direction, with the highest concentrations over the most stable water surface regions.

  6. Collective effects on activated segmental relaxation in supercooled polymer melts

    NASA Astrophysics Data System (ADS)

    Mirigian, Stephen; Schweizer, Kenneth

    2013-03-01

    We extend the polymer nonlinear Langevin equation (NLE) theory of activated segmental dynamics in supercooled polymer melts in two new directions. First, a well-defined mapping from real monomers to a freely-jointed chain is formulated that retains information about chain stiffness, monomer volume, and the amplitude of thermal density fluctuations. Second, collective effects beyond the local cage scale are included based on an elastic solid-state perspective in the "shoving model" spirit, which accounts for longer-range contributions to the activation barrier. In contrast to previous phenomenological treatments of this model, we formulate an explicit microscopic picture of the hopping event, and derive, not assume, that the collective barrier is directly related to the elastic shear modulus. Local hopping is thus renormalized by collective motions of the surroundings that are required to physically accommodate it. Using the PRISM theory of structure, and known compressibility and chain statistics information, quantitative applications of the new theory to predict the temperature and chain length dependence of the alpha time, shear modulus, and fragility are carried out for a range of real polymer liquids and compared to experiment.

  7. Growing and navigating the small world Web by local content

    PubMed Central

    Menczer, Filippo

    2002-01-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues. PMID:12381792

  8. Growing and navigating the small world Web by local content

    NASA Astrophysics Data System (ADS)

    Menczer, Filippo

    2002-10-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  9. Growing and navigating the small world Web by local content.

    PubMed

    Menczer, Filippo

    2002-10-29

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  10. Do ecohydrology and community dynamics feed back to banded-ecosystem structure and productivity?

    NASA Astrophysics Data System (ADS)

    Callegaro, Chiara; Ursino, Nadia

    2016-04-01

    Mixed communities including grass, shrubs and trees are often reported to populate self-organized vegetation patterns. Patterns of survey data suggest that species diversity and complementarity strengthen the dynamics of banded environments. Resource scarcity and local facilitation trigger self organization, whereas coexistence of multiple species in vegetated self-organizing patches, implying competition for water and nutrients and favorable reproduction sites, is made possible by differing adaptation strategies. Mixed community spatial self-organization has so far received relatively little attention, compared with local net facilitation of isolated species. We assumed that soil moisture availability is a proxy for the environmental niche of plant species according to Ursino and Callegaro (2016). Our modelling effort was focused on niche differentiation of coexisting species within a tiger bush type ecosystem. By minimal numerical modelling and stability analysis we try to answer a few open scientific questions: Is there an adaptation strategy that increases biodiversity and ecosystem functioning? Does specific adaptation to environmental niches influence the structure of self-organizing vegetation pattern? What specific niche distribution along the environmental gradient gives the highest global productivity?

  11. Stellar occultation spikes as probes of atmospheric structure and composition. [for Jupiter

    NASA Technical Reports Server (NTRS)

    Elliot, J. L.; Veverka, J.

    1976-01-01

    The characteristics of spikes observed in occultation light curves of Beta Scorpii by Jupiter are discussed in terms of the gravity-gradient model. The occultation of Beta Sco by Jupiter on May 13, 1971, is reviewed, and the gravity-gradient model is defined as an isothermal atmosphere of constant composition in which the refractivity is a function only of the radial coordinate from the center of refraction, which is assumed to lie parallel to the local gravity gradient. The derivation of the occultation light curve in terms of the atmosphere, the angular diameter of the occulted star, and the occultation geometry is outlined. It is shown that analysis of the light-curve spikes can yield the He/H2 concentration ratio in a well-mixed atmosphere, information on fine-scale atmospheric structure, high-resolution images of the occulted star, and information on ray crossing. Observational limits are placed on the magnitude of horizontal refractivity gradients, and it is concluded that the spikes are the result of local atmospheric density variations: atmospheric layers, density waves, or turbulence.

  12. [Cognitive advantages of the third age: a neural network model of brain aging].

    PubMed

    Karpenko, M P; Kachalova, L M; Budilova, E V; Terekhin, A T

    2009-01-01

    We consider a neural network model of age-related cognitive changes in the aging brain, based on a Hopfield network with a sigmoid neuron activation function. Age is included in the activation function as a parameter in the denominator of the exponential rate, which makes it possible to take into account the weakening of interneuronal links actually observed in the aging brain. Analysis of the properties of the Lyapunov function associated with the network shows that, with an increasing age parameter, its relief becomes smoother and the number of local minima (network attractors) decreases. As a result, the network gets stuck less frequently in the nearest local minima of the Lyapunov function and reaches a global minimum corresponding to the most effective solution of the cognitive task. It is reasonable to assume that similar changes actually occur in the aging brain. Phenomenologically, these changes can be manifested as the emergence in aged people of a cognitive quality such as wisdom, i.e., the ability to find optimal decisions in difficult, controversial situations, to set aside secondary aspects, and to see the problem as a whole.
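
    The mechanism can be illustrated with a toy continuous Hopfield network in which the age parameter divides the argument of the sigmoid (written here in its symmetric tanh form), lowering the effective gain. As the gain falls, stored patterns stop being attractors and the dynamics settles near a single shallow minimum, which is the landscape-smoothing effect described above. Network size, pattern count, and age values are illustrative.

```python
import numpy as np

# Toy continuous Hopfield network: the "age" parameter reduces the sigmoid gain.

rng = np.random.default_rng(4)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N            # Hebbian weights
np.fill_diagonal(W, 0.0)

def settle(state, age, steps=200):
    """Iterate s <- tanh(W s / age); a larger age parameter weakens the gain."""
    for _ in range(steps):
        state = np.tanh(W @ state / age)
    return state

for age in (0.2, 0.6, 1.0, 1.4):
    cue = patterns[0] + 0.8 * rng.normal(size=N)   # noisy version of pattern 0
    fixed = settle(cue, age)
    overlap = abs(fixed @ patterns[0]) / N
    print(f"age parameter {age:.1f}: overlap with stored pattern = {overlap:.2f}")
```

    For small age values the cue is cleaned up to the stored pattern (overlap near 1), while for larger values the only remaining attractor is a near-zero state, a crude analogue of the reduction in the number of local minima described above.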

  13. Dependence of radiation belt simulations to assumed radial diffusion rates

    NASA Astrophysics Data System (ADS)

    Drozdov, A.; Shprits, Y.; Aseev, N.; Kellerman, A. C.; Reeves, G. D.

    2017-12-01

    Radial diffusion is one of the dominant physical mechanisms driving acceleration and loss of radiation belt electrons through wave-particle interaction with ultra-low-frequency (ULF) waves, which makes it very important for radiation belt modeling and forecasting. We investigate the sensitivity of long-term radiation belt modeling to several parameterizations of radial diffusion, including Brautigam and Albert [2000], Ozeke et al. [2014], and Ali et al. [2016], using the Versatile Electron Radiation Belt (VERB) code. Following previous studies, we first perform 1-D radial diffusion simulations. To take into account the effects of local acceleration and loss, we perform additional 3-D simulations, including pitch-angle, energy and mixed diffusion. The results demonstrate that the inclusion of local acceleration and pitch-angle diffusion can provide a negative feedback effect, such that the outcome is largely indistinguishable between simulations conducted with different radial diffusion parameterizations. We also perform a number of sensitivity tests by multiplying the radial diffusion rates by constant factors and show that such an approach leads to unrealistic predictions of radiation belt dynamics.

  14. Nonlinear periodic wavetrains in thin liquid films falling on a uniformly heated horizontal plate

    NASA Astrophysics Data System (ADS)

    Issokolo, Remi J. Noumana; Dikandé, Alain M.

    2018-05-01

    A thin liquid film falling on a uniformly heated horizontal plate spreads into fingering ripples that can display complex dynamics ranging from continuous waves and nonlinear spatially localized periodic wave patterns (i.e., rivulet structures) to modulated nonlinear wavetrain structures. Some of these structures have been observed experimentally; however, the conditions under which they form are still not well understood. In this work, we examine profiles of nonlinear wave patterns formed by a thin liquid film falling on a uniformly heated horizontal plate. For this purpose, the Benney model is considered assuming a uniform temperature distribution along the film propagation on the horizontal surface. It is shown that for strong surface tension but a relatively small Biot number, spatially localized periodic-wave structures can be analytically obtained by solving the governing equation under appropriate conditions. In the regime of weak nonlinearity, a multiple-scale expansion combined with the reductive perturbation method leads to a complex Ginzburg-Landau equation, the solutions of which are modulated periodic pulse trains whose amplitude, width, and period are expressed in terms of characteristic parameters of the model.

  15. Discrete meso-element simulation of chemical reactions in shear bands

    NASA Astrophysics Data System (ADS)

    Tamura, S.; Horie, Y.

    1998-07-01

    A meso-dynamic simulation technique is used to investigate the chemical reactions in high-speed shearing of reactive porous mixtures. The reaction speed is assumed to be a function of temperature, pressure and mixing of materials. To gain a theoretical insight into the experiments reported by Nesterenko et al., a parametric study of material flow and local temperature was carried out using a Nb and Si mixture. In the model calculation, a heterogeneous shear region of 5 μm width, consisting of alternating layers of Nb and Si, was created first in a mixture and then sheared at the rate of 8.0×10^7 s^-1. Results show that the material flow is mostly homogeneous, but contains local agglomeration and circulatory flow. This behavior accelerates mass mixing and causes a significant temperature increase. To evaluate the mixing of material, the average minimum separation distance between materials was calculated. The effect of voids was also investigated.

  16. CPI motif interaction is necessary for capping protein function in cells

    PubMed Central

    Edwards, Marc; McConnell, Patrick; Schafer, Dorothy A.; Cooper, John A.

    2015-01-01

    Capping protein (CP) has critical roles in actin assembly in vivo and in vitro. CP binds with high affinity to the barbed end of actin filaments, blocking the addition and loss of actin subunits. Heretofore, models for actin assembly in cells generally assumed that CP is constitutively active, diffusing freely to find and cap barbed ends. However, CP can be regulated by binding of the ‘capping protein interaction' (CPI) motif, found in a diverse and otherwise unrelated set of proteins that decreases, but does not abolish, the actin-capping activity of CP and promotes uncapping in biochemical experiments. Here, we report that CP localization and the ability of CP to function in cells requires interaction with a CPI-motif-containing protein. Our discovery shows that cells target and/or modulate the capping activity of CP via CPI motif interactions in order for CP to localize and function in cells. PMID:26412145

  17. Large data series: Modeling the usual to identify the unusual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.; Fedorov, V.V.; Lawkins, W.F.

    "Standard" approaches such as regression analysis, Fourier analysis, Box-Jenkins procedure, et al., which handle a data series as a whole, are not useful for very large data sets for at least two reasons. First, even with computer hardware available today, including parallel processors and storage devices, there are no effective means for manipulating and analyzing gigabyte, or larger, data files. Second, in general it cannot be assumed that a very large data set is "stable" by the usual measures, like homogeneity, stationarity, and ergodicity, that standard analysis techniques require. Both reasons dictate the necessity to use "local" data analysis methods whereby the data is segmented and ordered, where order leads to a sense of "neighbor," and then analyzed segment by segment. The idea of local data analysis is central to the study reported here.
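
    The segment-by-segment idea can be sketched as follows: split the series, fit a simple local model to each segment, and flag segments whose fitted parameters are outliers relative to the rest. The AR(1) features, segment length, and flagging threshold below are illustrative choices, not the procedure developed in the study.

```python
import numpy as np

# Local data analysis sketch: per-segment AR(1) fits flag an unusual segment
# in a long, mostly homogeneous noise series with one injected anomaly.

rng = np.random.default_rng(5)
n, seg_len = 100_000, 1000
x = rng.normal(size=n)
x[60_000:60_300] += 8.0                       # inject an "unusual" stretch

def ar1_features(seg):
    """Return (AR(1) coefficient, residual std) for one segment."""
    y0, y1 = seg[:-1], seg[1:]
    phi = np.dot(y0, y1) / np.dot(y0, y0)
    resid = y1 - phi * y0
    return phi, resid.std()

features = np.array([ar1_features(x[i:i + seg_len])
                     for i in range(0, n, seg_len)])
z = (features - features.mean(axis=0)) / features.std(axis=0)   # standardize per feature
unusual = np.where(np.abs(z).max(axis=1) > 4.0)[0]
print("flagged segments:", unusual, "(segment 60 contains the injected anomaly)")
```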

  18. Numerical analysis of flows of rarefied gases in long channels with octagonal cross section shapes

    NASA Astrophysics Data System (ADS)

    Szalmas, L.

    2014-12-01

    Isothermal, pressure driven rarefied gas flows through long channels with octagonal cross section shapes are analyzed computationally. The capillary is between inlet and outlet reservoirs. The cross section is constant along the axial direction. The boundary condition at the solid-gas interface is assumed to be diffuse reflection. Since the channel is long, the gaseous velocity is small compared to the average molecular speed. Consequently, a linearized description can be used. The flow is described by the linearized Bhatnagar-Gross-Krook kinetic model. The solution of the problem is divided into two stages. First, the local flow field is determined by assuming the local pressure gradient. Secondly, the global flow behavior is deduced by the consideration of the conservation of the mass along the axis of the capillary. The kinetic equation is solved by the discrete velocity method on the cross section. Both spatial and velocity spaces are discretized. A body fitted rectangular grid is used for the spatial space. Near the boundary, first-order, while in the interior part of the flow domain, second-order finite-differences are applied to approximate the spatial derivatives. This combination results into an efficient and straightforward numerical treatment. The velocity space is represented by a Gauss-Legendre quadrature. The kinetic equation is solved in an iterative manner. The local dimensionless flow rate is calculated and tabulated for a wide range of the gaseous rarefaction for octagonal cross sections with various geometrical parameters. It exhibits the Knudsen minimum phenomenon. The flow rates in the octagonal channel are compared to those through capillaries with circular and square cross sections. Typical velocity profiles are also shown. The mass flow rate and the distribution of the pressure are determined and presented for global pressure driven flows.

  19. The excitation and characteristic frequency of the long-period volcanic event: An approach based on an inhomogeneous autoregressive model of a linear dynamic system

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Kumazawa, M.; Yamaoka, K.; Chouet, B.A.

    1998-01-01

    We present a method to quantify the source excitation function and characteristic frequencies of long-period volcanic events. The method is based on an inhomogeneous autoregressive (AR) model of a linear dynamic system, in which the excitation is assumed to be a time-localized function applied at the beginning of the event. The tail of an exponentially decaying harmonic waveform is used to determine the characteristic complex frequencies of the event by the Sompi method. The excitation function is then derived by applying an AR filter constructed from the characteristic frequencies to the entire seismogram of the event, including the inhomogeneous part of the signal. We apply this method to three long-period events at Kusatsu-Shirane Volcano, central Japan, whose waveforms display simple decaying monochromatic oscillations except for the beginning of the events. We recover time-localized excitation functions lasting roughly 1 s at the start of each event and find that the estimated functions are very similar to each other at all the stations of the seismic network for each event. The phases of the characteristic oscillations referred to the estimated excitation function fall within a narrow range for almost all the stations. These results strongly suggest that the excitation and mode of oscillation are both dominated by volumetric change components. Each excitation function starts with a pronounced dilatation consistent with a sudden deflation of the volumetric source, which may be interpreted in terms of a choked-flow transport mechanism. The frequency and Q of the characteristic oscillation both display a temporal evolution from event to event. Assuming a crack filled with bubbly water as the seismic source for these events, we apply the Van Wijngaarden-Papanicolaou model to estimate the acoustic properties of the bubbly liquid and find that the observed changes in the frequencies and Q are consistently explained by a temporal change in the radii of the bubbles characterizing the bubbly water in the crack.
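
    The workflow summarized above, estimating an AR filter from the decaying coda and then running its prediction-error form over the whole record to expose the time-localized excitation, can be sketched on synthetic data as below. Sampling rate, resonator frequency, Q, and the source pulse are illustrative, and a simple least-squares AR fit stands in for the Sompi method.

```python
import numpy as np

# AR(2) sketch: a damped harmonic resonator, its coda-based identification,
# and the prediction-error filter that recovers the time-localized excitation.

rng = np.random.default_rng(6)
dt, n = 0.01, 2000                       # 100 Hz sampling, 20 s record
f0, Q = 1.0, 50.0                        # resonator frequency [Hz] and quality factor (assumed)
r = np.exp(-np.pi * f0 * dt / Q)         # pole radius of the damped oscillator
a1, a2 = 2 * r * np.cos(2 * np.pi * f0 * dt), -r**2

# Synthetic event: a 1-s excitation pulse driving the AR(2) resonator, plus noise.
excitation = np.zeros(n)
excitation[100:200] = np.hanning(100)
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + excitation[t]
x += 0.001 * rng.normal(size=n)

# Step 1: estimate the AR(2) coefficients from the coda only (t > 5 s).
coda = x[500:]
A = np.column_stack([coda[1:-1], coda[:-2]])
b1, b2 = np.linalg.lstsq(A, coda[2:], rcond=None)[0]

# Step 2: characteristic complex frequency from the estimated AR roots.
root = np.roots([1.0, -b1, -b2])[0]
f_est = abs(np.angle(root)) / (2 * np.pi * dt)
Q_est = -np.pi * f_est * dt / np.log(abs(root))
print(f"estimated f = {f_est:.3f} Hz, Q = {Q_est:.1f}")

# Step 3: the prediction-error (inverse AR) filter applied to the entire record
# returns a signal concentrated where the excitation was applied.
recovered = x[2:] - b1 * x[1:-1] - b2 * x[:-2]
print("excitation energy, first 2 s vs rest:",
      float((recovered[:200]**2).sum()), float((recovered[200:]**2).sum()))
```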

  20. Revisiting Tectonic Corrections Applied to Pleistocene Sea-Level Highstands

    NASA Astrophysics Data System (ADS)

    Creveling, J. R.; Mitrovica, J. X.; Hay, C.; Austermann, J.; Kopp, R. E.

    2015-12-01

    The robustness of stratigraphic- and geomorphic-based inferences of Quaternary peak interglacial sea levels — and equivalent minimum continental ice volumes — depends on the accuracy with which highstand markers can be corrected for vertical tectonic displacement. For sites that preserve a Marine Isotope Stage (MIS) 5e sea-level highstand marker, the customary method for estimating tectonic uplift/subsidence rate computes the difference between the local elevation of the highstand marker and a reference eustatic (i.e., global mean) MIS 5e sea-level height, typically assumed to be +6 m, and then divides this height difference by the age of the highstand marker. This rate is then applied to correct the elevation of other observed sea-level markers at that site for tectonic displacement. Subtracting a reference eustatic value from a local MIS 5e highstand marker elevation introduces two potentially significant errors. First, the commonly adopted peak eustatic MIS 5e sea-level value (i.e., +6 m) is likely too low; recent studies concluded that MIS 5e peak eustatic sea level was ~6-9 m. Second, local peak MIS 5e sea level was not globally uniform, but instead characterized by significant departures from eustasy due to glacial isostatic adjustment (GIA) in response to successive glacial-interglacial cycles and excess polar ice-sheet melt relative to present day. We present numerical models of GIA that incorporate both of these effects in order to quantify the plausible range in error of previous tectonic corrections. We demonstrate that, even far from melting ice sheets, local peak MIS 5e sea level may have departed from eustasy by 2-4 m, or more. Thus, adopting an assumed reference eustatic value to estimate tectonic displacement, rather than a site-specific GIA signal, can introduce significant error in estimates of peak eustatic sea level (and minimum ice volumes) during Quaternary highstands (e.g., MIS 11, MIS 5c and MIS 5a).
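
    The customary procedure revisited above can be written compactly; the symbols, and the ≈125 ka age used in the numerical aside, are ours.

```latex
u \approx \frac{z^{\,\mathrm{5e}}_{\mathrm{obs}} - z_{\mathrm{ref}}}{t_{\mathrm{5e}}},
\qquad
z_{\mathrm{corrected}}(t) = z_{\mathrm{obs}}(t) - u\,t
```

    Replacing the global z_ref = +6 m with a site-specific GIA prediction that differs from it by 2-4 m therefore shifts the inferred uplift rate u by roughly (2-4 m)/125 ka ≈ 0.016-0.032 mm/yr, an offset that propagates into the corrected elevations of other highstands in proportion to their ages.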

  1. Applicability of the linear-quadratic formalism for modeling local tumor control probability in high dose per fraction stereotactic body radiotherapy for early stage non-small cell lung cancer.

    PubMed

    Guckenberger, Matthias; Klement, Rainer Johannes; Allgäuer, Michael; Appold, Steffen; Dieckmann, Karin; Ernst, Iris; Ganswindt, Ute; Holy, Richard; Nestle, Ursula; Nevinny-Stickel, Meinhard; Semrau, Sabine; Sterzing, Florian; Wittig, Andrea; Andratschke, Nicolaus; Flentje, Michael

    2013-10-01

    To compare the linear-quadratic (LQ) and the LQ-L formalism (linear cell survival curve beyond a threshold dose dT) for modeling local tumor control probability (TCP) in stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC). This study is based on 395 patients from 13 German and Austrian centers treated with SBRT for stage I NSCLC. The median number of SBRT fractions was 3 (range 1-8) and median single fraction dose was 12.5 Gy (2.9-33 Gy); dose was prescribed to the median 65% PTV encompassing isodose (60-100%). Assuming an α/β-value of 10 Gy, we modeled TCP as a sigmoid-shaped function of the biologically effective dose (BED). Models were compared using maximum likelihood ratio tests as well as Bayes factors (BFs). There was strong evidence for a dose-response relationship in the total patient cohort (BFs>20), which was lacking in single-fraction SBRT (BFs<3). Using the PTV encompassing dose or maximum (isocentric) dose, our data indicated a LQ-L transition dose (dT) at 11 Gy (68% CI 8-14 Gy) or 22 Gy (14-42 Gy), respectively. However, the fit of the LQ-L models was not significantly better than a fit without the dT parameter (p=0.07, BF=2.1 and p=0.86, BF=0.8, respectively). Generally, isocentric doses resulted in much better dose-response relationships than PTV encompassing doses (BFs>20). Our data suggest accurate modeling of local tumor control in fractionated SBRT for stage I NSCLC with the traditional LQ formalism. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
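
    For reference, the dose metrics being compared can be written as below, assuming the smooth LQ-L transition (slope matched at the threshold dose d_T) and a logistic sigmoid for TCP; the abstract does not give the exact TCP parameterization, so the logistic form is a common choice rather than necessarily the authors'.

```latex
% LQ biologically effective dose for n fractions of size d, \alpha/\beta = 10 Gy
\mathrm{BED}_{\mathrm{LQ}} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
% LQ-L: linear survival beyond d_T with slope matched at d_T (for d > d_T)
\mathrm{BED}_{\mathrm{LQ\text{-}L}} = n\left[d_T\left(1 + \frac{d_T}{\alpha/\beta}\right)
   + \left(1 + \frac{2\,d_T}{\alpha/\beta}\right)\left(d - d_T\right)\right]
% sigmoid dose-response, with TCD_{50} and k setting position and steepness
\mathrm{TCP}(\mathrm{BED}) = \left[1 + \exp\!\left(\frac{\mathrm{TCD}_{50} - \mathrm{BED}}{k}\right)\right]^{-1}
```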

  2. Assessment of water droplet evaporation mechanisms on hydrophobic and superhydrophobic substrates.

    PubMed

    Pan, Zhenhai; Dash, Susmita; Weibel, Justin A; Garimella, Suresh V

    2013-12-23

    Evaporation rates are predicted and important transport mechanisms identified for evaporation of water droplets on hydrophobic (contact angle ~110°) and superhydrophobic (contact angle ~160°) substrates. Analytical models for droplet evaporation in the literature are usually simplified to include only vapor diffusion in the gas domain, and the system is assumed to be isothermal. In the comprehensive model developed in this study, evaporative cooling of the interface is accounted for, and vapor concentration is coupled to local temperature at the interface. Conjugate heat and mass transfer are solved in the solid substrate, liquid droplet, and surrounding gas. Buoyancy-driven convective flows in the droplet and vapor domains are also simulated. The influences of evaporative cooling and convection on the evaporation characteristics are determined quantitatively. The liquid-vapor interface temperature drop induced by evaporative cooling suppresses evaporation, while gas-phase natural convection acts to enhance evaporation. While the effects of these competing transport mechanisms are observed to counterbalance for evaporation on a hydrophobic surface, the stronger influence of evaporative cooling on a superhydrophobic surface accounts for an overprediction of experimental evaporation rates by ~20% with vapor diffusion-based models. The local evaporation fluxes along the liquid-vapor interface for both hydrophobic and superhydrophobic substrates are investigated. The highest local evaporation flux occurs at the three-phase contact line region due to proximity to the higher temperature substrate, rather than at the relatively colder droplet top; vapor diffusion-based models predict the opposite. The numerically calculated evaporation rates agree with experimental results to within 2% for superhydrophobic substrates and 3% for hydrophobic substrates. The large deviations between past analytical models and the experimental data are therefore reconciled with the comprehensive model developed here.

  3. Non-local thermodynamic equilibrium 1.5D modeling of red giant stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Mitchell E.; Short, C. Ian, E-mail: myoung@ap.smu.ca

    Spectra for two-dimensional (2D) stars in the 1.5D approximation are created from synthetic spectra of one-dimensional (1D) non-local thermodynamic equilibrium (NLTE) spherical model atmospheres produced by the PHOENIX code. The 1.5D stars have the spatially averaged Rayleigh-Jeans flux of a K3-4 III star while varying the temperature difference between the two 1D component models (ΔT_1.5D) and the relative surface area covered. Synthetic observable quantities from the 1.5D stars are fitted with quantities from NLTE and local thermodynamic equilibrium (LTE) 1D models to assess the errors in inferred T_eff values from assuming horizontal homogeneity and LTE. Five different quantities are fit to determine the T_eff of the 1.5D stars: UBVRI photometric colors, absolute surface flux spectral energy distributions (SEDs), relative SEDs, continuum normalized spectra, and TiO band profiles. In all cases except the TiO band profiles, the inferred T_eff value increases with increasing ΔT_1.5D. In all cases, the inferred T_eff value from fitting 1D LTE quantities is higher than from fitting 1D NLTE quantities and is approximately constant as a function of ΔT_1.5D within each case. The difference between LTE and NLTE for the TiO bands is caused indirectly by the NLTE temperature structure of the upper atmosphere, as the bands are computed in LTE. We conclude that the difference between T_eff values derived from NLTE and LTE modeling is relatively insensitive to the degree of the horizontal inhomogeneity of the star being modeled and largely depends on the observable quantity being fit.

  4. Prefission Constriction of Golgi Tubular Carriers Driven by Local Lipid Metabolism: A Theoretical Model

    PubMed Central

    Shemesh, Tom; Luini, Alberto; Malhotra, Vivek; Burger, Koert N. J.; Kozlov, Michael M.

    2003-01-01

    Membrane transport within mammalian cells is mediated by small vesicular as well as large pleiomorphic transport carriers (TCs). A major step in the formation of TCs is the creation and subsequent narrowing of a membrane neck connecting the emerging carrier with the initial membrane. In the case of small vesicular TCs, neck formation may be directly induced by the coat proteins that cover the emerging vesicle. However, the mechanism underlying the creation and narrowing of a membrane neck in the generation of large TCs remains unknown. We present a theoretical model for neck formation based on the elastic model of membranes. Our calculations suggest a lipid-driven mechanism with a central role for diacylglycerol (DAG). The model is applied to a well-characterized in vitro system that reconstitutes TC formation from the Golgi complex, namely the pearling and fission of Golgi tubules induced by CtBP/BARS, a protein that catalyzes the conversion of lysophosphatidic acid into phosphatidic acid. In view of the importance of a PA-DAG cycle in the formation of Golgi TCs, we assume that the newly formed phosphatidic acid undergoes rapid dephosphorylation into DAG. DAG possesses a unique molecular shape characterized by an extremely large negative spontaneous curvature, and it redistributes rapidly between the membrane monolayers and along the membrane surface. Coupling between local membrane curvature and local lipid composition results, by mutual enhancement, in constrictions of the tubule into membrane necks, and a related inhomogeneous lateral partitioning of DAG. Our theoretical model predicts the exact dimensions of the constrictions observed in the pearling Golgi tubules. Moreover, the model is able to explain membrane neck formation by physiologically relevant mole fractions of DAG. PMID:14645071

  5. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  6. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.

  7. Magnetic Field Strengths and Grain Alignment Variations in the Local Bubble Wall

    NASA Astrophysics Data System (ADS)

    Medan, Ilija; Andersson, B.-G.

    2018-01-01

    Optical and infrared continuum polarization is known to be due to irregular dust grains aligned with the magnetic field. This provides an important tool to probe the geometry and strength of those fields, particularly if the variations in the grain alignment efficiencies can be understood. Here, we examine polarization variations observed throughout the Local Bubble for b > 30°, using a large polarization survey of the North Galactic cap from Berdyugin et al. (2014). These data are supported by archival photometric and spectroscopic data along with the mapping of the Local Bubble by Lallement et al. (2003). We can accurately model the observational data assuming that the grain alignment variations are due to the radiation from the OB associations within 1 kpc of the sun. This strongly supports radiatively driven grain alignment. We also probe the relative strength of the magnetic field in the wall of the Local Bubble using the Davis-Chandrasekhar-Fermi method. We find evidence for a bimodal field strength distribution, where the variations in the field are correlated with the variations in grain alignment efficiency, indicating that the higher strength regions might represent a compression of the wall by the interaction of the outflow in the Local Bubble and the opposing flows by the surrounding OB associations.
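    The Davis-Chandrasekhar-Fermi estimate mentioned above is a one-line relation between the gas density, the turbulent velocity dispersion, and the dispersion of polarization angles. The sketch below applies it with purely illustrative inputs; the wall density, velocity dispersion, angle dispersion, and correction factor Q are assumed numbers, not values taken from the survey.

```python
import numpy as np

def dcf_field_strength(rho_g_cm3, sigma_v_cm_s, sigma_theta_rad, Q=0.5):
    """Plane-of-sky field strength (gauss, CGS units) from the
    Davis-Chandrasekhar-Fermi relation B = Q*sqrt(4*pi*rho)*sigma_v/sigma_theta."""
    return Q * np.sqrt(4.0 * np.pi * rho_g_cm3) * sigma_v_cm_s / sigma_theta_rad

# Illustrative inputs only: order-of-magnitude wall density of 1 H atom cm^-3,
# 1 km/s turbulent velocity dispersion, 10 degrees of polarization-angle dispersion.
rho = 1.0 * 1.67e-24              # g cm^-3
sigma_v = 1.0e5                   # cm s^-1
sigma_theta = np.deg2rad(10.0)
print(f"B_pos ~ {dcf_field_strength(rho, sigma_v, sigma_theta) * 1e6:.1f} microgauss")
```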

  8. Magnetic Local Time dependency in modeling of the Earth radiation belts

    NASA Astrophysics Data System (ADS)

    Herrera, Damien; Maget, Vincent; Bourdarie, Sébastien; Rolland, Guy

    2017-04-01

    For many years, ONERA has been at the forefront of the modeling of the Earth radiation belts thanks to the Salammbô model, which accurately reproduces their dynamics over a time scale of the particles' drift period. This implies that we implicitly assume a homogeneous repartition of the trapped particles along a given drift shell. However, radiation belts are inhomogeneous in Magnetic Local Time (MLT). So, we need to take this new coordinate into account to model rigorously the dynamical structures, particularly those induced during a geomagnetic storm. For this purpose, we are working on both the numerical resolution of the Fokker-Planck diffusion equation included in the model and on the MLT dependency of the physics-based processes acting in the Earth radiation belts. The aim of this talk is first to present the 4D equation used and the different steps we followed to build the Salammbô 4D model before focusing on the physical processes taken into account in the Salammbô code, especially transport due to the convection electric field. First, we will briefly introduce the Salammbô 4D code by describing its numerical scheme and the physics-based processes modeled. Then, we will focus our attention on the impact of the outer boundary condition (localisation and spectrum) at lower L∗ shells by comparing the modeling with geosynchronous data from LANL-GEO satellites. Finally, we will discuss the prime importance of the convection electric field to the radial and drift transport of low energy particles around the Earth.

  9. Special relativity from observer's mathematics point of view

    NASA Astrophysics Data System (ADS)

    Khots, Boris; Khots, Dmitriy

    2015-09-01

    When we create mathematical models for quantum theory of light we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of Newton - Cauchy definitions of a limit and derivative in analysis. We believe that is where the main problem lies in contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of continuum, but locally coincide with the standard fields. We use Einstein special relativity principles and get the analogue of classical Lorentz transformation. This work considers this transformation from Observer's Mathematics point of view.

  10. A compound scattering pdf for the ultrasonic echo envelope and its relationship to K and Nakagami distributions.

    PubMed

    Shankar, P Mohana

    2003-03-01

    A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.
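    A minimal way to see how the gamma-gamma compounding behaves is to sample from it and inspect the envelope signal-to-noise ratio, which the paper uses to relate the compound pdf to the Rayleigh, K, and Nakagami limits. The parameterization below (unit-mean gamma for the average cross section, shape parameters m and alpha) is an assumed, illustrative form, not the paper's exact notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_envelope(m, alpha, n=200_000):
    """Sample envelope values from a gamma-gamma compound model:
    the local average cross section is gamma distributed (shape alpha, unit mean),
    and, conditioned on it, the backscattered intensity is gamma distributed
    (shape m).  The envelope is the square root of the intensity."""
    mean_cs = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # global variation
    intensity = rng.gamma(shape=m, scale=mean_cs / m)             # local variation
    return np.sqrt(intensity)

for m, alpha in [(1.0, 50.0), (1.0, 2.0), (3.0, 50.0)]:
    a = compound_envelope(m, alpha)
    print(f"m={m}, alpha={alpha}:  envelope SNR = {a.mean() / a.std():.2f}")
```

    Large alpha with m = 1 recovers Rayleigh-like statistics (SNR near 1.91), while small alpha produces the heavier-tailed, K-like behaviour discussed in the abstract.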

  11. Transport hysteresis and hydrogen isotope effect on confinement

    NASA Astrophysics Data System (ADS)

    Itoh, S.-I.; Itoh, K.

    2018-03-01

    A Gedankenexperiment on the hydrogen isotope effect is developed, using a transport model with hysteresis. The model is applied to the case where modulational electron cyclotron heating is imposed near the mid-radius of a toroidal plasma. The perturbation propagates either outward or inward, being associated with clockwise (CW) or counter-clockwise (CCW) hysteresis, respectively. The hydrogen isotope effects on the CW and CCW hysteresis are investigated. The local component of turbulence-driven transport is assumed to be gyro-Bohm diffusion. While the effect of the hydrogen mass number is screened in the response of CW hysteresis, it is amplified in CCW hysteresis. This result motivates experimental studies that compare the CW and CCW cases in order to obtain further insight into the physics of hydrogen isotope effects.

  12. Kinetic mechanism for modeling of electrochemical reactions.

    PubMed

    Cervenka, Petr; Hrdlička, Jiří; Přibyl, Michal; Snita, Dalimil

    2012-04-01

    We propose a kinetic mechanism of electrochemical interactions. We assume fast formation and recombination of electron donors D- and acceptors A+ on electrode surfaces. These mediators are continuously formed in the electrode matter by thermal fluctuations. The mediators D- and A+, chemically equivalent to the electrode metal, enter electrochemical interactions on the electrode surfaces. Electrochemical dynamics and current-voltage characteristics of a selected electrochemical system are studied. Our results are in good qualitative agreement with those given by the classical Butler-Volmer kinetics. The proposed model can be used to study fast electrochemical processes in microsystems and nanosystems that are often out of the thermal equilibrium. Moreover, the kinetic mechanism operates only with the surface concentrations of chemical reactants and local electric potentials, which facilitates the study of electrochemical systems with indefinable bulk.
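    Since the proposed mechanism is benchmarked against classical Butler-Volmer kinetics, a short reference implementation of the Butler-Volmer current-overpotential relation is useful for comparison. The exchange current density and transfer coefficients below are illustrative values, not parameters from the study.

```python
import numpy as np

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Classical Butler-Volmer current density (A/cm^2) versus overpotential eta (V).
    i0 and the transfer coefficients are illustrative, assumed values."""
    F, R = 96485.0, 8.314
    return i0 * (np.exp(alpha_a * F * eta / (R * T)) - np.exp(-alpha_c * F * eta / (R * T)))

eta = np.linspace(-0.2, 0.2, 9)
for e, i in zip(eta, butler_volmer(eta)):
    print(f"eta = {e:+.3f} V   i = {i:+.3e} A/cm^2")
```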

  13. Dynamics of Whistler-mode Waves Below LHR Frequency: Application for the Equatorial Noise

    NASA Astrophysics Data System (ADS)

    Balikhin, M. A.; Shklyar, D. R.

    2017-12-01

    Plasma waves that have been regularly observed in the vicinity of the geomagnetic equator since the 1970s are often referred to as "equatorial noise" or "equatorial magnetosonic" emission. Currently, it is accepted that these waves can have significant effects on both the loss and the acceleration of energetic electrons within the radiation belts. A model to explain the observed features of the equatorial noise is presented. It is assumed that the loss-cone instability of supra-thermal ions is the reason for their generation. It is argued that as these waves propagate their growth/damping rate changes and, therefore, the integral wave amplification is more important for explaining the observed spectral features than the local growth rate. The qualitative correspondence of Cluster observations with dynamical spectra arising from the model is shown.

  14. Evaluation of hydrophilic permeant transport parameters in the localized and non-localized transport regions of skin treated simultaneously with low-frequency ultrasound and sodium lauryl sulfate.

    PubMed

    Kushner, Joseph; Blankschtein, Daniel; Langer, Robert

    2008-02-01

    The porosity (epsilon), the tortuosity (tau), and the hindrance factor (H) of the aqueous pore channels located in the localized transport regions (LTRs) and the non-LTRs formed in skin treated simultaneously with low-frequency ultrasound (US) and the surfactant sodium lauryl sulfate (SLS), were evaluated for the delivery of four hydrophilic permeants (urea, mannitol, raffinose, and inulin) by analyzing dual-radiolabeled diffusion masking experiments for three different idealized cases of the aqueous pore pathway hypothesis. When epsilon and tau were assumed to be independent of the permeant radius, H was found to be statistically larger in the LTRs than in the non-LTRs. When a distribution of pore radii was assumed to exist in the skin, no statistical differences in epsilon, tau, and H were observed due to the large variation in the pore radii distribution shape parameter (3 Å to infinity). When infinitely large aqueous pores were assumed to exist in the skin, epsilon was found to be 3-8-fold greater in the LTRs than in the non-LTRs, while little difference was observed between the LTRs and the non-LTRs for tau. This last result suggests that the efficacy of US/SLS treatment may be enhanced by increasing the porosity of the non-LTRs.

  15. Can the States Address Equity and Innovation? Rethinking the State's Fiscal Role in Public Education.

    ERIC Educational Resources Information Center

    Wong, Kenneth K.; Shen, Francis X.

    With federal funds accounting for only 7% of public elementary and secondary education revenue, funding responsibility for K-12 education is split primarily between state and local governments. Since the 1980s, state governments have generally assumed primary fiscal responsibility, with local governments supplying the rest of the necessary…

  16. Visuospatial Processing in Children with Autism: No Evidence for (Training-Resistant) Abnormalities

    ERIC Educational Resources Information Center

    Chabani, Ellahe; Hommel, Bernhard

    2014-01-01

    Individuals with autism spectrum disorders (ASDs) have been assumed to show evidence of abnormal visuospatial processing, which has been attributed to a failure to integrate local features into coherent global Gestalts and/or to a bias towards local processing. As the available data are based on baseline performance only, which does not provide…

  17. Functional Hemispheric Asymmetries of Global/Local Processing Mirrored by the Steady-State Visual Evoked Potential

    ERIC Educational Resources Information Center

    Martens, Ulla; Hubner, Ronald

    2013-01-01

    While hemispheric differences in global/local processing have been reported by various studies, it is still under dispute at which processing stage they occur. Primarily, it was assumed that these asymmetries originate from an early perceptual stage. Instead, the content-level binding theory (Hubner & Volberg, 2005) suggests that the hemispheres…

  18. Prediction of burnout of a conduction-cooled BSCCO current lead

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seol, S.Y.; Cha, Y.S.; Niemann, R.C.

    A one-dimensional heat conduction model is employed to predict burnout of a Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8} current lead. The upper end of the lead is assumed to be at 77 K and the lower end is at 4 K. The results show that burnout always occurs at the warmer end of the lead. The lead reaches its burnout temperature in two distinct stages. Initially, the temperature rises slowly while part of the lead is in the flux-flow state. As the local temperature reaches the critical temperature, it begins to increase sharply. Burnout time depends strongly on the flux-flow resistivity.

  19. Accretion rates of protoplanets. II - Gaussian distributions of planetesimal velocities

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1992-01-01

    In the present growth-rate calculations for a protoplanet that is embedded in a disk of planetesimals with triaxial Gaussian velocity dispersion and uniform surface density, the protoplanet is on a circular orbit. The accretion rate in the two-body approximation is found to be enhanced by a factor of about 3 relative to the case where all planetesimals' eccentricities and inclinations are equal to the rms values of those disk variables having locally Gaussian velocity dispersion. This accretion-rate enhancement should be incorporated by all models that assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.

  20. Characterization of reaction kinetics in a porous electrode

    NASA Technical Reports Server (NTRS)

    Fedkiw, Peter S.

    1990-01-01

    A continuum-model approach, analogous to porous electrode theory, was applied to a thin-layer cell of rectangular and cylindrical geometry. A reversible redox couple is assumed, and the local reaction current density is related to the potential through the formula of Hubbard and Anson for a uniformly accessible thin-layer cell. The placement of the reference electrode is also accounted for in the analysis. Primary emphasis is placed on the effect of the solution-phase ohmic potential drop on the voltammogram characteristics. Correlation equations for the peak-potential displacement from E(sup 0 prime) and the peak current are presented in terms of two dimensionless parameters.

  1. A WSN-based tool for urban and industrial fire-fighting.

    PubMed

    De San Bernabe Clemente, Alberto; Martínez-de Dios, José Ramiro; Ollero Baturone, Aníbal

    2012-11-06

    This paper describes a WSN tool to increase safety in urban and industrial fire-fighting activities. Unlike most approaches, we assume that there is no preexisting WSN in the building, which involves interesting advantages but imposes some constraints. The system integrates the following functionalities: fire monitoring, firefighter monitoring and dynamic escape path guiding. It also includes a robust localization method that employs RSSI-range models dynamically trained to cope with the peculiarities of the environment. The training and application stages of the method are applied simultaneously, resulting in significant adaptability. Besides simulations and laboratory tests, a prototype of the proposed system has been validated in close-to-operational conditions.
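    A common choice for an RSSI-range model of the kind trained here is the log-distance path-loss form RSSI(d) = A - 10 n log10(d); the sketch below fits it by least squares and inverts it to obtain range estimates. The log-distance form, the parameter values, and the synthetic calibration data are assumptions for illustration; the paper's dynamically trained models may differ.

```python
import numpy as np

def fit_log_distance_model(distances_m, rssi_dbm):
    """Fit RSSI(d) = A - 10*n*log10(d) by linear least squares; return (A, n).
    The log-distance form is an assumed model for illustration."""
    x = -10.0 * np.log10(distances_m)
    X = np.column_stack([np.ones_like(x), x])
    (A, n), *_ = np.linalg.lstsq(X, rssi_dbm, rcond=None)
    return A, n

def estimate_range(rssi_dbm, A, n):
    """Invert the fitted model to map an RSSI reading back to a range estimate (m)."""
    return 10.0 ** ((A - rssi_dbm) / (10.0 * n))

# Synthetic calibration data: true A = -45 dBm at 1 m, exponent n = 2.3, plus noise
rng = np.random.default_rng(1)
d = rng.uniform(1.0, 30.0, 100)
rssi = -45.0 - 10 * 2.3 * np.log10(d) + rng.normal(0.0, 2.0, d.size)

A, n = fit_log_distance_model(d, rssi)
print(f"fitted A = {A:.1f} dBm, n = {n:.2f}, "
      f"range at -70 dBm ~ {estimate_range(-70.0, A, n):.1f} m")
```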

  2. Off-Axis Driven Current Effects on ETB and ITB Formations based on Bifurcation Concept

    NASA Astrophysics Data System (ADS)

    Pakdeewanich, J.; Onjun, T.; Chatthong, B.

    2017-09-01

    This research studies plasma performance in a tokamak fusion system by investigating parameters such as plasma pressure in the presence of an edge transport barrier (ETB) and an internal transport barrier (ITB) as the off-axis driven current position is varied. The plasma is modeled based on the bifurcation concept using a suppression function that can result in the formation of transport barriers. In this model, thermal and particle transport equations, including both neoclassical and anomalous effects, are solved simultaneously in slab geometry. The neoclassical coefficients are assumed to be constant while the anomalous coefficients depend on the gradients of local pressure and density. The suppression function, depending on flow shear and magnetic shear, is assumed to act only on the anomalous channel. The flow shear can be calculated from the force balance equation, while the magnetic shear is calculated from the given plasma current. It is found that as the position of the driven current peak is moved outwards from the plasma center, the central pressure increases, but at some point it starts to decline, mostly when the driven current peak has reached the outer half of the plasma. The higher pressure value results from the combination of ETB and ITB formations. The drop in central pressure occurs because the ITB starts to disappear.

  3. The two main theories on dental bruxism.

    PubMed

    Behr, Michael; Hahnel, Sebastian; Faltermeier, Andreas; Bürgers, Ralf; Kolbeck, Carola; Handel, Gerhard; Proff, Peter

    2012-03-20

    Bruxism is characterized by non-functional contact of mandibular and maxillary teeth resulting in clenching or grating of teeth. Theories on factors causing bruxism are a matter of controversy in current literature. The dental profession has predominantly viewed peripheral local morphological disorders, such as malocclusion, as the cause of clenching and gnashing. This etiological model is based on the theory that occlusal maladjustment results in reduced masticatory muscle tone. In the absence of occlusal equilibration, motor neuron activity of masticatory muscles is triggered by periodontal receptors. The second theory assumes that central disturbances in the area of the basal ganglia are the main cause of bruxism. An imbalance in the circuit processing of the basal ganglia is supposed to be responsible for muscle hyperactivity during nocturnal dyskinesia such as bruxism. Some authors assume that bruxism constitutes sleep-related parafunctional activity (parasomnia). A recent model, which may explain the potential imbalance of the basal ganglia, is neuroplasticity. Neural plasticity is based on the ability of synapses to change the way they work. Activation of neural plasticity can change the relationship between inhibitory and excitatory neurons. It seems obvious that bruxism is not a symptom specific to just one disease. Many forms (and causes) of bruxism may exist simultaneously, as, for example, peripheral or central forms. Copyright © 2011 Elsevier GmbH. All rights reserved.

  4. Prediction of inertial effects due to bone conduction in a 2D box model of the cochlea

    NASA Astrophysics Data System (ADS)

    Halpin, Alice A.; Elliott, Stephen J.; Ni, Guangjian

    2015-12-01

    A 2D box model of the cochlea has been used to predict the basilar membrane (BM) velocity and the fluid flow caused by two components of bone conduction: the inertia of the middle ear and the inertia of the cochlear fluids. A finite difference approach has been used with asymmetric fluid chambers, which enables an investigation of the effect of varying window stiffness, due to otosclerosis for example. The BM is represented as a series of locally reacting single-degree-of-freedom systems, with graded stiffness along the cochlea to represent the distribution of natural frequencies and with a damping representative of the passive cochlea. The velocity distributions along the passive BM are similar for harmonic excitation via the middle ear inertia or via the fluid inertia, but the variation of the BM velocity magnitude with excitation frequency is different in the two cases. Excitation via the middle ear is suppressed if the oval window is assumed to be blocked, but excitation via the cochlear fluids is still possible. By assuming a combined excitation due to both the middle ear and the fluid inertia, the difference between the overall responses with a flexible and with a blocked oval window can be calculated, which gives a reasonable prediction of Carhart's notch.

  5. Uranium(IV) adsorption by natural organic matter in anoxic sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bone, Sharon E.; Dynes, James J.; Cliff, John

    2017-01-09

    Uranium is an important fuel source and a global environmental contaminant. It accumulates in the tetravalent state, U(IV), in anoxic sediments, including ore deposits, marine basins, and contaminated aquifers. However, very little is known about the speciation of U(IV) in low temperature geochemical environments, inhibiting the development of a conceptual model of U behavior. Until recently, U(IV) was assumed to exist predominantly as the sparingly soluble mineral uraninite (UO{sub 2}) in anoxic sediments; yet studies now show that UO{sub 2} is not often dominant in these environments. However, a model of U(IV) speciation under environmentally relevant conditions has not yet been developed. Here we show that complexes of U(IV) adsorb on organic carbon and organic carbon-coated clays in an organic-rich natural substrate under field-relevant conditions. Whereas previous research assumed that the U(IV) product depended on the reduction pathway, our results demonstrate that UO{sub 2} formation can be inhibited simply by decreasing the U:solid ratio. Thus, it is the number and type of surface ligands that controls U(IV) speciation subsequent to U(VI) reduction. Projections of U transport and bioavailability, and thus its threat to human and ecosystem health, must consider retention of U(IV) ions within the local sediment environment.

  6. Sources and production of organic aerosol in Mexico City: insights from the combination of a chemical transport model (PMCAMx-2008) and measurements during MILAGRO

    NASA Astrophysics Data System (ADS)

    Tsimpidi, A. P.; Karydis, V. A.; Zavala, M.; Lei, W.; Bei, N.; Molina, L.; Pandis, S. N.

    2011-06-01

    Urban areas are large sources of organic aerosols and their precursors. Nevertheless, the contributions of primary (POA) and secondary organic aerosol (SOA) to the observed particulate matter levels have been difficult to quantify. In this study the three-dimensional chemical transport model PMCAMx-2008 is used to investigate the temporal and geographic variability of organic aerosol in the Mexico City Metropolitan Area (MCMA) during the MILAGRO campaign that took place in the spring of 2006. The organic module of PMCAMx-2008 includes the recently developed volatility basis-set framework in which both primary and secondary organic components are assumed to be semi-volatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The MCMA emission inventory is modified and the POA emissions are distributed by volatility based on dilution experiments. The model predictions are compared with observations from four different types of sites, an urban (T0), a suburban (T1), a rural (T2), and an elevated site in Pico de Tres Padres (PTP). The performance of the model in reproducing organic mass concentrations in these sites is encouraging. The average predicted PM1 organic aerosol (OA) concentration in T0, T1, and T2 is 18 μg m-3, 11.7 μg m-3, and 10.5 μg m-3 respectively, while the corresponding measured values are 17.2 μg m-3, 11 μg m-3, and 9 μg m-3. The average predicted locally-emitted primary OA concentrations, 4.4 μg m-3 at T0, 1.2 μg m-3 at T1 and 1.7 μg m-3 at PTP, are in reasonably good agreement with the corresponding PMF analysis estimates based on the Aerosol Mass Spectrometer (AMS) observations of 4.5, 1.3, and 2.9 μg m-3 respectively. The model reproduces reasonably well the average oxygenated OA (OOA) levels in T0 (7.5 μg m-3 predicted versus 7.5 μg m-3 measured), in T1 (6.3 μg m-3 predicted versus 4.6 μg m-3 measured) and in PTP (6.6 μg m-3 predicted versus 5.9 μg m-3 measured). The rest of the OA mass (6.1 μg m-3 and 4.2 μg m-3 in T0 and T1 respectively) is assumed to originate from biomass burning activities and is introduced to the model as part of the boundary conditions. Inside Mexico City (at T0), the locally-produced OA is predicted to be on average 60 % locally-emitted primary (POA), 6 % semi-volatile (S-SOA) and intermediate volatile (I-SOA) organic aerosol, and 34 % traditional SOA from the oxidation of VOCs (V-SOA). The average contributions of the OA components to the locally-produced OA for the entire modelling domain are predicted to be 32 % POA, 10 % S-SOA and I-SOA, and 58 % V-SOA. The long range transport from biomass burning activities and other sources in Mexico is predicted to contribute on average almost as much as the local sources during the MILAGRO period.
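    The volatility basis-set treatment referred to above distributes semi-volatile organic mass over logarithmically spaced saturation-concentration (C*) bins and partitions each bin between the gas and particle phases. A minimal sketch of the standard equilibrium partitioning calculation, with a particle-phase fraction of 1/(1 + C*_i/C_OA) solved iteratively for C_OA, is given below; the bin values and bin totals are illustrative, not emissions from the MCMA inventory.

```python
import numpy as np

def vbs_partition(total_ug_m3, cstar_ug_m3, tol=1e-9, max_iter=200):
    """Equilibrium partitioning of a volatility basis set.
    total_ug_m3: total (gas + particle) organic mass in each volatility bin.
    cstar_ug_m3: saturation concentrations C* of the logarithmically spaced bins.
    Returns (particle-phase mass per bin, total OA concentration)."""
    coa = max(total_ug_m3.sum() * 0.5, 1e-6)          # initial guess for OA mass
    for _ in range(max_iter):
        xi = 1.0 / (1.0 + cstar_ug_m3 / coa)          # particle-phase fraction per bin
        coa_new = np.sum(total_ug_m3 * xi)
        if abs(coa_new - coa) < tol:
            break
        coa = coa_new
    return total_ug_m3 * xi, coa

cstar = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])   # illustrative C* bins (ug m-3)
totals = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])         # illustrative bin totals (ug m-3)
particle, coa = vbs_partition(totals, cstar)
print("OA concentration:", round(coa, 2), "ug m-3")
```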

  7. Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Meyer, Bettina; Schneider, Tapio

    2017-04-01

    There has been scientific consensus that the uncertainty in the cloud feedback remains the largest source of uncertainty in the prediction of climate parameters like climate sensitivity. To narrow down this uncertainty, not only a better physical understanding of cloud and boundary layer processes is required, but specifically the representation of boundary layer processes in models has to be improved. General climate models use separate parameterisation schemes to model the different boundary layer processes like small-scale turbulence, shallow and deep convection. Small scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher order statistical moments to capture their more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of moment equations at second order may lead to more accurate parameterizations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second order closure schemes. A truncation of the momentum equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in a LES model (PyCLES) and compare it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy—mean flow interactions are retained, as they are in the second order closure. In physical terms, suppressing eddy—eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we employ the possibility to include non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection in a non-local sense.

  8. A hybrid model describing ion induced kinetic electron emission

    NASA Astrophysics Data System (ADS)

    Hanke, S.; Duvenbeck, A.; Heuser, C.; Weidtmann, B.; Wucher, A.

    2015-06-01

    We present a model to describe the kinetic internal and external electron emission from an ion bombarded metal target. The model is based upon a molecular dynamics treatment of the nuclear degree of freedom, the electronic system is assumed as a quasi-free electron gas characterized by its Fermi energy, electron temperature and a characteristic attenuation length. In a series of previous works we have employed this model, which includes the local kinetic excitation as well as the rapid spread of the generated excitation energy, in order to calculate internal and external electron emission yields within the framework of a Richardson-Dushman-like thermionic emission model. However, this kind of treatment turned out to fail in the realistic prediction of experimentally measured internal electron yields mainly due to the restriction of the treatment of electronic transport to a diffusive manner. Here, we propose a slightly modified approach additionally incorporating the contribution of hot electrons which are generated in the bulk material and undergo ballistic transport towards the emitting interface.

  9. Computational modeling of hypertensive growth in the human carotid artery

    NASA Astrophysics Data System (ADS)

    Sáez, Pablo; Peña, Estefania; Martínez, Miguel Angel; Kuhl, Ellen

    2014-06-01

    Arterial hypertension is a chronic medical condition associated with an elevated blood pressure. Chronic arterial hypertension initiates a series of events, which are known to collectively initiate arterial wall thickening. However, the correlation between macrostructural mechanical loading, microstructural cellular changes, and macrostructural adaptation remains unclear. Here, we present a microstructurally motivated computational model for chronic arterial hypertension through smooth muscle cell growth. To model growth, we adopt a classical concept based on the multiplicative decomposition of the deformation gradient into an elastic part and a growth part. Motivated by clinical observations, we assume that the driving force for growth is the stretch sensed by the smooth muscle cells. We embed our model into a finite element framework, where growth is stored locally as an internal variable. First, to demonstrate the features of our model, we investigate the effects of hypertensive growth in a real human carotid artery. Our results agree nicely with experimental data reported in the literature both qualitatively and quantitatively.
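    The kinematic core of the growth model is the multiplicative split of the deformation gradient into elastic and growth parts. A sketch in LaTeX follows; the specific stretch-driven evolution law written here is an illustrative form under assumed symbols (τ, λ^crit, n0), not necessarily the paper's exact kinetics.

```latex
% Multiplicative decomposition of the deformation gradient (elastic x growth)
\mathbf{F} = \mathbf{F}^{\mathrm{e}}\,\mathbf{F}^{\mathrm{g}},
\qquad J = \det\mathbf{F} = J^{\mathrm{e}} J^{\mathrm{g}} .

% Illustrative stretch-driven evolution of a scalar growth multiplier \vartheta:
% growth is activated when the smooth-muscle-cell stretch exceeds a critical value.
\dot{\vartheta} = \frac{1}{\tau}\,
\bigl\langle \lambda_{\mathrm{smc}} - \lambda^{\mathrm{crit}} \bigr\rangle ,
\qquad
\lambda_{\mathrm{smc}} = \sqrt{\mathbf{n}_{0}\cdot \mathbf{C}^{\mathrm{e}}\,\mathbf{n}_{0}} ,
\qquad
\mathbf{F}^{\mathrm{g}} = \mathbf{I} + (\vartheta - 1)\,\mathbf{n}_{0}\otimes\mathbf{n}_{0} .
```

    In a finite element setting, the growth multiplier is then stored locally as an internal variable at each integration point, as described above.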

  10. Image-optimized Coronal Magnetic Field Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  11. Microscopic theory of the superconducting gap in the quasi-one-dimensional organic conductor (TMTSF)2ClO4: Model derivation and two-particle self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Aizawa, Hirohito; Kuroki, Kazuhiko

    2018-03-01

    We present a first-principles band calculation for the quasi-one-dimensional (Q1D) organic superconductor (TMTSF)2ClO4. An effective tight-binding model, with the TMTSF molecule regarded as the site, is derived from a calculation based on maximally localized Wannier orbitals. We apply a two-particle self-consistent (TPSC) analysis by using a four-site Hubbard model, which is composed of the tight-binding model and an onsite (intramolecular) repulsive interaction, which serves as a variable parameter. We assume that the pairing mechanism is mediated by the spin fluctuation, and the sign of the superconducting gap changes between the inner and outer Fermi surfaces, which corresponds to a d-wave gap function in a simplified Q1D model. With the parameters we adopt, the critical temperature for superconductivity estimated by the TPSC approach is approximately 1 K, which is consistent with experiment.

  12. The Influence of Spatial Configuration of Residential Area and Vector Populations on Dengue Incidence Patterns in an Individual-Level Transmission Model.

    PubMed

    Kang, Jeon-Young; Aldstadt, Jared

    2017-07-15

    Dengue is a mosquito-borne infectious disease that is endemic in tropical and subtropical countries. Many individual-level simulation models have been developed to test hypotheses about dengue virus transmission. Often these efforts assume that human host and mosquito vector populations are randomly or uniformly distributed in the environment. Although the movement of mosquitoes is affected by the spatial configuration of buildings and mosquito populations are highly clustered in key buildings, little research has focused on the influence of the local built environment in dengue transmission models. We developed an agent-based model of dengue transmission in a village setting to test the importance of using realistic environments in individual-level models of dengue transmission. The results from a one-way ANOVA analysis of the simulations indicated that the differences between scenarios in terms of infection rates as well as serotype-specific dominance are statistically significant. Specifically, the infection rates in scenarios with a realistic environment are more variable than those with a synthetic spatial configuration. With respect to dengue serotype-specific cases, we found that a single dengue serotype is more often dominant in realistic environments than in synthetic environments. An agent-based approach allows a fine-scaled analysis of simulated dengue incidence patterns. The results provide a better understanding of the influence of spatial heterogeneity on dengue transmission at a local scale.
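    The scenario comparison above rests on a one-way ANOVA of simulated infection rates. A minimal sketch of that test, on hypothetical per-run infection rates rather than the study's outputs, could look as follows (the means, spreads, and run counts are assumed).

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# Hypothetical per-run infection rates (fraction of hosts infected) from two scenarios;
# the realistic-environment runs are given a larger spread, as reported in the abstract.
realistic = rng.normal(loc=0.42, scale=0.08, size=50)
synthetic = rng.normal(loc=0.40, scale=0.03, size=50)

f_stat, p_value = f_oneway(realistic, synthetic)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("variance (realistic) =", round(realistic.var(ddof=1), 4),
      "variance (synthetic) =", round(synthetic.var(ddof=1), 4))
```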

  13. Quantum Locality, Rings a Bell?: Bell's Inequality Meets Local Reality and True Determinism

    NASA Astrophysics Data System (ADS)

    Sánchez-Kuntz, Natalia; Nahmad-Achar, Eduardo

    2018-01-01

    By assuming a deterministic evolution of quantum systems and taking realism into account, we carefully build a hidden variable theory for Quantum Mechanics (QM) based on the notion of ontological states proposed by 't Hooft (The cellular automaton interpretation of quantum mechanics, arXiv:1405.1548v3, 2015; Springer Open 185, https://doi.org/10.1007/978-3-319-41285-6, 2016). We view these ontological states as the ones embedded with realism and compare them to the (usual) quantum states that represent superpositions, viewing the latter as mere information of the system they describe. Such a deterministic model puts forward conditions for the applicability of Bell's inequality: the usual inequality cannot be applied to the usual experiments. We build a Bell-like inequality that can be applied to the EPR scenario and show that this inequality is always satisfied by QM. In this way we show that QM can indeed have a local interpretation, and thus meet with the causal structure imposed by the Theory of Special Relativity in a satisfying way.

  14. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity L_ν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  15. An Emerging Allee Effect Is Critical for Tumor Initiation and Persistence

    PubMed Central

    Böttger, Katrin; Hatzikirou, Haralambos; Voss-Böhme, Anja; Cavalcanti-Adam, Elisabetta Ada; Herrero, Miguel A.; Deutsch, Andreas

    2015-01-01

    Tumor cells develop different strategies to cope with changing microenvironmental conditions. A prominent example is the adaptive phenotypic switching between cell migration and proliferation. While it has been shown that the migration-proliferation plasticity influences tumor spread, it remains unclear how this particular phenotypic plasticity affects overall tumor growth, in particular initiation and persistence. To address this problem, we formulate and study a mathematical model of spatio-temporal tumor dynamics which incorporates the microenvironmental influence through a local cell density dependence. Our analysis reveals that two dynamic regimes can be distinguished. If cell motility is allowed to increase with local cell density, any tumor cell population will persist in time, irrespective of its initial size. On the contrary, if cell motility is assumed to decrease with respect to local cell density, any tumor population below a certain size threshold will eventually become extinct, a fact usually termed the Allee effect in ecology. These results suggest that strategies aimed at modulating migration are worth exploring as alternatives to those mainly focused on keeping tumor proliferation under control. PMID:26335202

  16. Identifying the most influential spreaders in complex networks by an Extended Local K-Shell Sum

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Zhang, Ruisheng; Yang, Zhao; Hu, Rongjing; Li, Mengtian; Yuan, Yongna; Li, Keqin

    Identifying influential spreaders is crucial for developing strategies to control the spreading process on complex networks. Following the well-known K-Shell (KS) decomposition, several improved measures are proposed. However, these measures cannot identify the most influential spreaders accurately. In this paper, we define a Local K-Shell Sum (LKSS) by calculating the sum of the K-Shell indices of the neighbors within 2-hops of a given node. Based on the LKSS, we propose an Extended Local K-Shell Sum (ELKSS) centrality to rank spreaders. The ELKSS is defined as the sum of the LKSS of the nearest neighbors of a given node. By assuming that the spreading process on networks follows the Susceptible-Infectious-Recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance between the ELKSS centrality and other six measures. The results show that the ELKSS centrality has a better performance than the six measures to distinguish the spreading ability of nodes and to identify the most influential spreaders accurately.
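    The LKSS and ELKSS definitions quoted above translate almost directly into code. The sketch below is one possible implementation using networkx, with the k-shell index taken as the core number and the Zachary karate club graph used purely as a stand-in network; the paper evaluates the ranking with SIR spreading simulations on real networks.

```python
import networkx as nx

def lkss(G, ks, node):
    """Local K-Shell Sum: sum of the k-shell indices of all neighbours
    within 2 hops of `node` (the node itself excluded)."""
    one_hop = set(G[node])
    two_hop = one_hop | {w for v in one_hop for w in G[v]}
    two_hop.discard(node)
    return sum(ks[v] for v in two_hop)

def elkss(G):
    """Extended Local K-Shell Sum: sum of the LKSS values of each node's
    nearest neighbours."""
    ks = nx.core_number(G)                    # k-shell (coreness) index of every node
    lk = {v: lkss(G, ks, v) for v in G}
    return {v: sum(lk[u] for u in G[v]) for v in G}

G = nx.karate_club_graph()                    # stand-in example network
ranking = sorted(elkss(G).items(), key=lambda kv: kv[1], reverse=True)
print("top-5 candidate spreaders:", ranking[:5])
```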

  17. Functional Concept of a Multipurpose Actuator: Design and Analysis

    NASA Astrophysics Data System (ADS)

    Krivka, Vladimir

    2018-05-01

    The principles of operation (dynamic characteristics) of electromagnetic devices are discussed using a three-phase multifunctional actuator as an example, whose major limitations are associated with the magnetic field nonlinearity and control over the magnetic forces affecting the moving element. The investigation is carried out using the methods of physico-mathematical modeling and a full-scale experiment. A physico-mathematical model is proposed, which is based on acceptable approximations and simplifications, the most important of them being the replacement of a nonlinear (but periodic) magnetic field in a quasi-stationary state by a harmonic magnetic field. The magnetic permeability in every cell of the discretization grid is assumed to be constant and corresponds to the local magnetic flux density. The features and characteristics obtained through this modeling are quite consistent with the observed behavior and measured values. It is shown that the dependence of the friction coefficient on velocity exhibits hysteresis.

  18. Machine Vision Within The Framework Of Collective Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1990-03-01

    The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement will be discussed for a one-dimensional tissue model.

  19. Interaction of anisotropic dark energy fluid with perfect fluid in the presence of cosmological term Λ

    NASA Astrophysics Data System (ADS)

    Singh, S. Surendra

    2018-05-01

    Considering the locally rotationally symmetric (LRS) Bianchi type-I metric with cosmological constant Λ, Einstein's field equations are discussed for a background of anisotropic fluid. We assumed the condition A = B^{1/m} for the metric potentials A and B, where m is a positive constant, to obtain a viable model of the Universe. It is found that Λ(t) is positive and inversely proportional to time. The values of the matter-energy density Ωm, dark energy density ΩΛ and deceleration parameter q are found to be consistent with the WMAP observations. Statefinder parameters and the anisotropic deviation parameter are also investigated. It is also observed that the derived model is an accelerating, shearing and non-rotating Universe. Some of the asymptotic and geometrical behaviors of the derived models are investigated along with the age of the Universe.
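    For reference, a sketch of the LRS Bianchi type-I setup in LaTeX is given below; the relation A = B^{1/m} is the assumed condition on the metric potentials (as reconstructed from the abstract), and the Λ(t) ∝ 1/t behaviour is the result quoted above.

```latex
% LRS Bianchi type-I line element with metric potentials A(t) and B(t)
ds^{2} = -\,dt^{2} + A^{2}(t)\,dx^{2} + B^{2}(t)\left(dy^{2} + dz^{2}\right),

% assumed relation between the potentials (m a positive constant)
A = B^{1/m},

% expansion scalar and the behaviour of the cosmological term reported above
\theta = \frac{\dot{A}}{A} + 2\,\frac{\dot{B}}{B},
\qquad
\Lambda(t) \propto \frac{1}{t} .
```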

  20. Stability of Lobed Balloons

    NASA Technical Reports Server (NTRS)

    Ball, Danny (Technical Monitor); Pagitz, M.; Pellegrino, Xu S.

    2004-01-01

    This paper presents a computational study of the stability of simple lobed balloon structures. Two approaches are presented, one based on a wrinkled material model and one based on a variable Poisson's ratio model that eliminates compressive stresses iteratively. The first approach is used to investigate the stability of both a single isotensoid and a stack of four isotensoids, for perturbations of infinitesimally small amplitude. It is found that both structures are stable for global deformation modes, but unstable for local modes at sufficiently large pressure. Both structures are stable if an isotropic model is assumed. The second approach is used to investigate the stability of the isotensoid stack for large shape perturbations, taking into account contact between different surfaces. For this structure a distorted, stable configuration is found. It is also found that the volume enclosed by this configuration is smaller than that enclosed by the undistorted structure.

  1. Rational group decision making: A random field Ising model at T = 0

    NASA Astrophysics Data System (ADS)

    Galam, Serge

    1997-02-01

    A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflict is assumed. Interactions then produce a group polarization along one particular choice, which is however randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual biases related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond the initial ones. In particular, compromise is found to result from the existence of individual competing biases. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups.
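    A minimal sketch of zero-temperature random-field Ising dynamics for this kind of group-decision setting is given below: quenched random fields play the role of individual biases, a small uniform field plays the role of social pressure, and each agent repeatedly aligns with its local field until no further flips lower the conflict. The couplings, field strengths, and complete-graph topology are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64                                  # group size (complete-graph couplings for simplicity)
J = 1.0 / N                             # pair interaction strength
H = 0.05                                # small external "social pressure"
h = rng.normal(0.0, 0.5, N)             # quenched individual biases (random local fields)
s = rng.choice([-1, 1], N)              # initial individual choices

# Zero-temperature single-flip dynamics: each individual aligns with its local field,
# which lowers the inter-individual conflict (energy) postulated in the model.
changed = True
while changed:
    changed = False
    for i in rng.permutation(N):
        local_field = J * (s.sum() - s[i]) + H + h[i]
        new_s = 1 if local_field >= 0 else -1
        if new_s != s[i]:
            s[i], changed = new_s, True

print("final polarization m =", round(s.mean(), 3))
```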

  2. Cracking on anisotropic neutron stars

    NASA Astrophysics Data System (ADS)

    Setiawan, A. M.; Sulaksono, A.

    2017-07-01

    We study the effect of cracking of a locally anisotropic neutron star (NS) due to small density fluctuations. It is assumed that the neutron star core consists of leptons, nucleons and hyperons. The relativistic mean field model is used to describe the equation of state (EOS) of the core. For the crust, we use the EOS introduced by Miyatsu et al. [1]. Furthermore, two models are used to describe the pressure anisotropy in neutron star matter: one proposed by Doneva-Yazadjiev (DY) [2] and the other proposed by Herrera-Barreto (HB) [3]. The anisotropy parameters of the DY and HB models are adjusted so that the predicted maximum mass is compatible with the masses of PSR J1614-2230 [4] and PSR J0348+0432 [5]. We have found that cracking can potentially be present in the region close to the neutron star surface. The instability due to cracking is quite sensitive to the NS mass and the anisotropy parameter used.

  3. A two-phase micromorphic model for compressible granular materials

    NASA Astrophysics Data System (ADS)

    Paolucci, Samuel; Li, Weiming; Powers, Joseph

    2009-11-01

    We introduce a new two-phase continuum model for compressible granular material based on micromorphic theory and treat it as a two-phase mixture with inner structure. By taking an appropriate number of moments of the local micro scale balance equations, the average phase balance equations result from a systematic averaging procedure. In addition to equations for mass, momentum and energy, the balance equations also include evolution equations for microinertia and microspin tensors. The latter equations combine to yield a general form of a compaction equation when the material is assumed to be isotropic. When non-linear and inertial effects are neglected, the generalized compaction equation reduces to that originally proposed by Bear and Nunziato. We use the generalized compaction equation to numerically model a mixture of granular high explosive and interstitial gas. One-dimensional shock tube and piston-driven solutions are presented and compared with experimental results and other known solutions.

  4. On localizing a capsule endoscope using magnetic sensors.

    PubMed

    Moussakhani, Babak; Ramstad, Tor; Flåm, John T; Balasingham, Ilangko

    2012-01-01

    In this work, localizing a capsule endoscope within the gastrointestinal tract is addressed. It is assumed that the capsule is equipped with a magnet, and that a magnetic sensor network measures the flux from this magnet. We assume no prior knowledge on the source location, and that the measurements collected by the sensors are corrupted by thermal Gaussian noise only. Under these assumptions, we focus on determining the Cramer-Rao Lower Bound (CRLB) for the location of the endoscope. Thus, we are not studying specific estimators, but rather the theoretical performance of an optimal one. It is demonstrated that the CRLB is a function of the distance and angle between the sensor network and the magnet. By studying the CRLB with respect to different sensor array constellations, we are able to indicate favorable constellations.
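    For i.i.d. Gaussian sensor noise, the CRLB on the capsule position is the inverse of the Fisher information I = J^T J / σ², where J collects the derivatives of every measured field component with respect to the source position. The sketch below evaluates this for a point-dipole field model with a numerically computed Jacobian; the sensor geometry, magnetic moment, and noise level are assumed, illustrative values.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_field(p, m, sensor):
    """Flux density (tesla) of a point magnetic dipole with moment m at position p,
    evaluated at 'sensor' (positions in metres, moment in A m^2)."""
    r = sensor - p
    d = np.linalg.norm(r)
    return MU0 / (4.0 * np.pi) * (3.0 * r * (m @ r) / d**5 - m / d**3)

def position_crlb(p, m, sensors, sigma, eps=1e-6):
    """Cramer-Rao lower bound on the dipole position for i.i.d. Gaussian sensor
    noise (std sigma), assuming the moment m is known.  Fisher information:
    I = J^T J / sigma^2, with J the Jacobian of all field components w.r.t. p."""
    rows = []
    for s in sensors:
        jac = np.zeros((3, 3))
        for k in range(3):                      # central-difference Jacobian, axis by axis
            dp = np.zeros(3); dp[k] = eps
            jac[:, k] = (dipole_field(p + dp, m, s) - dipole_field(p - dp, m, s)) / (2 * eps)
        rows.append(jac)
    J = np.vstack(rows)
    fisher = J.T @ J / sigma**2
    return np.linalg.inv(fisher)                # covariance lower bound (m^2)

# Illustrative geometry: a 3x3 planar sensor array about 15 cm from a capsule magnet.
sensors = np.array([[x, y, 0.0] for x in (-0.1, 0.0, 0.1) for y in (-0.1, 0.0, 0.1)])
p_true = np.array([0.02, -0.01, 0.15])          # capsule position (m)
moment = np.array([0.0, 0.0, 0.1])              # magnetic moment (A m^2), assumed known
crlb = position_crlb(p_true, moment, sensors, sigma=1e-9)   # 1 nT sensor noise
print("position std lower bounds (mm):", np.sqrt(np.diag(crlb)) * 1e3)
```

    Sweeping the sensor positions in this sketch and recomputing the bound mirrors the paper's comparison of sensor-array constellations.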

  5. Prediction of the structure of fuel sprays in gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.

    1985-01-01

    The structure of fuel sprays in a combustion chamber is theoretically investigated using computer models of current interest. Three representative spray models are considered: (1) a locally homogeneous flow (LHF) model, which assumes infinitely fast interphase transport rates; (2) a deterministic separated flow (DSF) model, which considers finite rates of interphase transport but ignores effects of droplet/turbulence interactions; and (3) a stochastic separated flow (SSF) model, which considers droplet/turbulence interactions using random sampling for turbulence properties in conjunction with random-walk computations for droplet motion and transport. Two flow conditions are studied to investigate the influence of swirl on droplet life histories and the effects of droplet/turbulence interactions on flow properties. Comparison of computed results with the experimental data show that general features of the flow structure can be predicted with reasonable accuracy using the two separated flow models. In contrast, the LHF model overpredicts the rate of development of the flow. While the SSF model provides better agreement with measurements than the DSF model, definitive evaluation of the significance of droplet/turbulence interaction is not achieved due to uncertainties in the spray initial conditions.

  6. On the modelling of non-reactive and reactive turbulent combustor flows

    NASA Technical Reports Server (NTRS)

    Nikjooy, Mohammad; So, Ronald M. C.

    1987-01-01

    A study of non-reactive and reactive axisymmetric combustor flows with and without swirl is presented. Closure of the Reynolds equations is achieved by three models: kappa-epsilon, algebraic stress and Reynolds stress closure. Performance of two locally nonequilibrium and one equilibrium algebraic stress models is analyzed assuming four pressure strain models. A comparison is also made of the performance of a high and a low Reynolds number model for combustor flow calculations using Reynolds stress closures. Effects of diffusion and pressure-strain models on these closures are also investigated. Two models for the scalar transport are presented. One employs the second-moment closure which solves the transport equations for the scalar fluxes, while the other solves the algebraic equations for the scalar fluxes. In addition, two cases of non-premixed and one case of premixed combustion are considered. Fast- and finite-rate chemistry models are applied to non-premixed combustion. Both show promise for application in gas turbine combustors. However, finite rate chemistry models need to be examined to establish a suitable coupling of the heat release effects on turbulence field and rate constants.

  7. Modelling of hydrogen conditioning, retention and release in Tore Supra

    NASA Astrophysics Data System (ADS)

    Grisolia, C.; Horton, L. D.; Ehrenberg, J. K.

    1995-04-01

    A model based on a local mixing model has been previously developed at JET to explain the recovery of tritium after the first PTE experiment. This model is extended by a 0D plasma particle balance model and is applied to data from Tore Supra wall saturation experiments. With only two free parameters, representing the diffusion of hydrogen atoms and the volume recombination process between hydrogen atoms into molecules, the model can reproduce experimental data. The time evolution of the after-shot outgassing and the integral amount of particles recovered after the shot (assuming 13 m 2 of interacting surfaces between plasma and walls) are in good agreement with the experimental observations. The same set of parameters allows the model to simulate after-shot outgassing of five consecutive discharges. However, the model fails to predict the observed saturation of the walls by the plasma. Results from helium glow discharge (HeGD) can only be partially described. Good agreement with the experimental hydrogen release and its time evolution during HeGD is observed, but the model fails to describe the stability of a saturated graphite wall.

  8. Trust Revision for Conflicting Sources

    DTIC Science & Technology

    2017-02-01

    Visiting a foreign country, Alice is looking for a restaurant where the locals go, because she would like to avoid places overrun by tourists. She meets a ... local called Bob who tells her that restaurant Xylo is the favourite place for locals. Assume that Bob is a stranger to Alice. Then a priori her trust ... will derive a strong opinion about the restaurant Xylo based on Bob's advice.

  9. Impact of Basal Conditions on Grounding-Line Retreat

    NASA Astrophysics Data System (ADS)

    Koellner, S. J.; Parizek, B. R.; Alley, R. B.; Muto, A.; Holschuh, N.; Nowicki, S.

    2017-12-01

    An often-made assumption included in ice-sheet models used for sea-level projections is that basal rheology is constant throughout the domain of the simulation. The justification in support of this assumption is that physical data for determining basal rheology is limited and a constant basal flow law can adequately approximate current as well as past behavior of an ice-sheet. Prior studies indicate that beneath Thwaites Glacier (TG) there is a ridge-and-valley bedrock structure which likely promotes deformation of soft tills within the troughs and sliding, more akin to creep, over the harder peaks; giving rise to a spatially variable basal flow law. Furthermore, it has been shown that the stability of an outlet glacier varies with the assumed basal rheology, so accurate projections almost certainly need to account for basal conditions. To test the impact of basal conditions on grounding-line evolution forced by ice-shelf perturbations, we modified the PSU 2-D flowline model to enable the inclusion of spatially variable basal rheology along an idealized bedrock profile akin to TG. Synthetic outlet glacier "data" were first generated under steady-state conditions assuming a constant basal flow law and a constant basal friction coefficient field on either a linear or bumpy sloping bed. In following standard procedures, a suite of models were then initialized by assuming different basal rheologies and then determining the basal friction coefficients that produce surface velocities matching those from the synthetic "data". After running each of these to steady state, the standard and full suite of models were forced by drastically reducing ice-shelf buttressing through side-shear and prescribed basal-melting perturbations. In agreement with previous findings, results suggest a more plastic basal flow law enhances stability in response to ice-shelf perturbations by flushing ice from farther upstream to sustain the grounding-zone mass balance required to prolong the current grounding-line position. Mixed rheology beds tend to mimic the retreat of the higher-exponent bed, a behavior enhanced over bumps as the stabilizing ridges tap into ice from local valleys. Thus, accounting for variable basal conditions in ice-sheet model projections is critical for improving both the timing and magnitude of retreat.

  10. Geomorphically based predictive mapping of soil thickness in upland watersheds

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.; Rasmussen, Craig

    2009-09-01

    The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small (~0.1 km²) drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
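
    For a single eroding grid cell the local balance can be sketched as soil production (declining exponentially with thickness) equal to the divergence of a depth- and slope-dependent flux. The snippet below is a toy version under simplifying assumptions (the flux divergence is approximated by k·h·∇²z, and P0, h0, k are illustrative stand-ins for the unconstrained parameters that the field data calibrate).

        import numpy as np
        from scipy.optimize import brentq

        # Local steady-state balance for one hilltop cell: P0*exp(-h/h0) = k*h*(-laplacian_z).
        def soil_thickness(laplacian_z, p0=1e-4, h0=0.5, k=5e-3):
            """Return steady-state soil thickness (m); laplacian_z < 0 on eroding (convex-up) cells."""
            if laplacian_z >= 0.0:
                return np.nan                          # depositional cells need the full PDE solution
            balance = lambda h: p0 * np.exp(-h / h0) - k * h * (-laplacian_z)
            return brentq(balance, 1e-6, 50.0)

        print(soil_thickness(-0.02))                   # hypothetical hilltop curvature of -0.02 1/m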

  11. InN/GaN quantum dot superlattices: Charge-carrier states and surface electronic structure

    NASA Astrophysics Data System (ADS)

    Kanouni, F.; Brezini, A.; Djenane, M.; Zou, Q.

    2018-03-01

    We have theoretically investigated the electron energy spectra and surface-state energies in three-dimensionally ordered quantum dot superlattices (QDSLs) made of InN and GaN semiconductors. The QDSL is assumed in this model to be a GaN matrix containing cubic InN dots of equal size, uniformly distributed. For the miniband structure calculation, the effective-mass Schrödinger equation is solved by decoupling it along the three directions within the framework of the Kronig-Penney model. We find that the electron minibands in infinite QDSLs differ clearly from those in conventional quantum-well superlattices. The electron localization and charge-carrier states depend strongly on the quasicrystallographic directions and on the size and shape of the dots, which play the role of artificial atoms in such a QD supracrystal. The energy spectrum of the electron states localized at the surface of the InN/GaN QDSL is described by a Kronig-Penney-like model and calculated via a direct matching procedure. The results show that the substrate breaks the symmetrical shape of the QDSL, so that localized electronic surface states can be produced in the minigap regions. Furthermore, the surface-state degeneracy appears as very thin bands located in the minigaps, identified by different quantum numbers nx, ny, nz. Moreover, the surface energy bands split owing to the reduced symmetry of the QDSL in the z-direction.
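
    After the three-direction decoupling mentioned above, each direction reduces to a one-dimensional Kronig-Penney problem whose allowed minibands satisfy |f(E)| <= 1 for the usual dispersion function f. The sketch below scans that condition numerically; the dot size, barrier width, band offset and effective mass are illustrative values, and a single effective mass is assumed in well and barrier for simplicity.

        import numpy as np

        HBAR = 1.054571817e-34      # J s
        ME = 9.1093837015e-31       # kg

        # Kronig-Penney dispersion function for E < V0:
        # cos(k d) = cos(a*w)*cosh(b*bar) + (b^2 - a^2)/(2ab) * sin(a*w)*sinh(b*bar)
        def kp_rhs(energy, well=3e-9, barrier=2e-9, v0=0.5 * 1.602e-19, meff=0.1 * ME):
            alpha = np.sqrt(2.0 * meff * energy) / HBAR
            beta = np.sqrt(2.0 * meff * (v0 - energy)) / HBAR
            return (np.cos(alpha * well) * np.cosh(beta * barrier)
                    + (beta**2 - alpha**2) / (2.0 * alpha * beta)
                    * np.sin(alpha * well) * np.sinh(beta * barrier))

        energies = np.linspace(1e-22, 0.499 * 1.602e-19, 5000)     # scan below the barrier
        allowed = np.abs(kp_rhs(energies)) <= 1.0                  # |f(E)| <= 1 marks the minibands
        edges = energies[np.flatnonzero(np.diff(allowed.astype(int)))] / 1.602e-19
        print("approximate miniband edges (eV):", edges)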

  12. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30′ × 30′ mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30′ × 30′ mean gravity anomalies to an accuracy of 9.2 mgal from this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.
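
    The collocation step itself reduces to a covariance-weighted least-squares prediction. A minimal sketch, with generic matrix names rather than the report's notation, is:

        import numpy as np

        # s_hat = C_st (C_tt + C_nn)^-1 l, plus its error covariance; l are the centered
        # gradient observations, s the quantities to be predicted (e.g., mean anomalies).
        def collocate(c_ss, c_st, c_tt, c_nn, l):
            gain = np.linalg.solve(c_tt + c_nn, l)
            s_hat = c_st @ gain
            err_cov = c_ss - c_st @ np.linalg.solve(c_tt + c_nn, c_st.T)
            return s_hat, err_cov

        # Toy usage with an assumed Gaussian covariance function between 1-D "locations".
        pts_obs = np.linspace(0.0, 1.0, 20)[:, None]
        pts_pred = np.linspace(0.0, 1.0, 5)[:, None]
        cov = lambda a, b: np.exp(-((a - b.T) ** 2) / 0.1**2)
        l = np.sin(4.0 * pts_obs).ravel()                          # fake centered observations
        s_hat, err = collocate(cov(pts_pred, pts_pred), cov(pts_pred, pts_obs),
                               cov(pts_obs, pts_obs), 0.01 * np.eye(20), l)

    The quoted accuracy estimates correspond to the diagonal of this error covariance, which is why they respond to the assumed noise and covariance models.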

  13. Constraining the mass of the Local Group

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the mass of the LG and of its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  14. A pressure relaxation closure model for one-dimensional, two-material Lagrangian hydrodynamics based on the Riemann problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R; Shashkov, Mikhail J

    2009-01-01

    Despite decades of development, Lagrangian hydrodynamics of strength-free materials presents numerous open issues, even in one dimension. We focus on the problem of closing a system of equations for a two-material cell under the assumption of a single velocity model. There are several existing models and approaches, each possessing different levels of fidelity to the underlying physics and each exhibiting unique features in the computed solutions. We consider the case in which the change in heat in the constituent materials in the mixed cell is assumed equal. An instantaneous pressure equilibration model for a mixed cell can be cast as four equations in four unknowns, comprised of the updated values of the specific internal energy and the specific volume for each of the two materials in the mixed cell. The unique contribution of our approach is a physics-inspired, geometry-based model in which the updated values of the sub-cell, relaxing-toward-equilibrium constituent pressures are related to a local Riemann problem through an optimization principle. This approach couples the modeling problem of assigning sub-cell pressures to the physics associated with the local, dynamic evolution. We package our approach in the framework of a standard predictor-corrector time integration scheme. We evaluate our model using idealized, two material problems using either ideal-gas or stiffened-gas equations of state and compare these results to those computed with the method of Tipton and with corresponding pure-material calculations.

  15. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

    under the heading of linear models or linear statistical models. We have not used this material in this report. Assuming catastrophic failure when...assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... [table of unit numbers and failure times T1, T2, ..., Tn omitted] ...and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  16. Two-dimensional steady bow waves in water of finite depth

    NASA Astrophysics Data System (ADS)

    Kao, John

    1998-12-01

    In this study, the two-dimensional steady bow flow in water of arbitrary finite depth has been investigated. The two-dimensional bow is assumed to consist of an inclined flat plate connected downstream to a horizontal semi-infinite draft plate. The bottom of the channel is assumed to be a horizontal plate; the fluid is assumed to be inviscid, incompressible; and the flow irrotational. For the angle of incidence α (held by the bow plate) lying between 0° and 60°, the local flow analysis near the stagnation point shows that the angle lying between the free surface and the inclined plate, β, must always be equal to 120°, otherwise no solution can exist. Moreover, we further find that the local flow solution does not exist if α > 60°, and that on the inclined plate there exists a negative pressure region adjacent to the stagnation point for α < 30°. The stagnation point and the point at upstream infinity are found to be multiple branch-point singularities of irrational order. A fully nonlinear theoretical model has been developed in this study for evaluating the incompressible irrotational flow satisfying the free-surface conditions and two constraint equations. To solve the bow flow problem, successive conformal mappings are first used to transform the flow domain into the interior of a unit semi-circle in which the unknowns can be represented as the coefficients of an infinite series. A total error function equivalent to satisfying the Bernoulli equation is defined and solved by minimizing the error function and applying the method of Lagrange multipliers. Smooth solutions with monotonic free surface profiles have been found and presented here for the range 35° < α < 60°, a draft Froude number Fr_d less than 0.5, and a water-depth Froude number Fr_h less than 0.4. The dependence of the solution on these key parameters is examined. Our results may be useful in designing the optimum bow shape.

  17. The Combined Effect of Periodic Signals and Noise on the Dilution of Precision of GNSS Station Velocity Uncertainties

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Olivares, German; Teferle, Felix Norman; Bogusz, Janusz

    2016-04-01

    Station velocity uncertainties determined from a series of Global Navigation Satellite System (GNSS) position estimates depend on both the deterministic and stochastic models applied to the time series. While the deterministic model generally includes parameters for a linear and several periodic terms, the stochastic model is a representation of the noise character of the time series in the form of a power-law process. For both of these models the optimal choice may vary from one time series to another while the models also depend, to some degree, on each other. In the past various power-law processes have been shown to fit the time series and the sources for the apparent temporally-correlated noise were attributed to, for example, mismodelling of satellite orbits, antenna phase centre variations, troposphere, Earth Orientation Parameters, mass loading effects and monument instabilities. Blewitt and Lavallée (2002) demonstrated how improperly modelled seasonal signals affected the estimates of station velocity uncertainties. However, in their study they assumed that the time series followed a white noise process with no consideration of additional temporally-correlated noise. Bos et al. (2010) empirically showed for a small number of stations that the noise character was much more important for the reliable estimation of station velocity uncertainties than the seasonal signals. In this presentation we pick up from Blewitt and Lavallée (2002) and Bos et al. (2010), and have derived formulas for the computation of the General Dilution of Precision (GDP) in the presence of periodic signals and temporally-correlated noise in the time series. We show, based on simulated and real time series from globally distributed IGS (International GNSS Service) stations processed by the Jet Propulsion Laboratory (JPL), that periodic signals dominate the effect on the velocity uncertainties at short time scales while for those beyond four years, the type of noise becomes much more important. In other words, for time series long enough, the assumed periodic signals do not affect the velocity uncertainties as much as the assumed noise model. We calculated the GDP as the ratio between two velocity errors: without and with the inclusion of seasonal terms with periods of one year and its overtones up to the third. To all these cases power-law processes of white, flicker and random-walk noise were added separately. A few oscillations in GDP can be noticed at integer numbers of years, which arise from the added periodic terms. Their amplitudes increase with increasing spectral index. Strong oscillation peaks in GDP occur at short time scales, especially for random-walk processes. This means that badly monumented stations are affected the most. Local minima and maxima in GDP are also enlarged as the noise approaches random walk. We noticed that the semi-annual signal increased the local GDP minimum for white noise. This suggests that adding power-law noise to a deterministic model with an annual term, or adding a semi-annual term under white noise, increases the velocity uncertainty even at the points where the determined velocity is not biased.
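
    In the white-noise special case the dilution of precision can be reproduced with a few lines: it is simply the ratio of the formal velocity errors from two least-squares design matrices, with and without the seasonal columns. The sketch below uses daily sampling, unit-variance white noise and an assumed four harmonics; the coloured-noise cases require the full covariance formulas referred to above.

        import numpy as np

        def velocity_sigma(t, n_harmonics):
            """Formal velocity error for unit white noise; t in years."""
            cols = [np.ones_like(t), t]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(2.0 * np.pi * k * t), np.sin(2.0 * np.pi * k * t)]
            a = np.column_stack(cols)
            cov = np.linalg.inv(a.T @ a)          # parameter covariance for unit white noise
            return np.sqrt(cov[1, 1])             # velocity is the second column

        for span in (1.5, 2.5, 4.0, 8.0):         # time-series length in years
            t = np.arange(0.0, span, 1.0 / 365.25)
            gdp = velocity_sigma(t, 4) / velocity_sigma(t, 0)
            print(f"{span:4.1f} yr: GDP = {gdp:.3f}")

    The ratio tends to 1 as the span grows, which is the statement above that sufficiently long series make the seasonal terms unimportant relative to the noise model.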

  18. Analysis of non-equilibrium phenomena in inductively coupled plasma generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Panesi, M., E-mail: mpanesi@illinois.edu; Lani, A.

    This work addresses the modeling of non-equilibrium phenomena in inductively coupled plasma discharges. In the proposed computational model, the electromagnetic induction equation is solved together with the set of Navier-Stokes equations in order to compute the electromagnetic and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles, with the method of Chapman and Enskog. Particle ambipolar diffusive fluxes are found by solving the Stefan-Maxwell equations with a simple iterative method. Two physico-mathematical formulations are used to model the chemical reaction processes: (1) A Local Thermodynamics Equilibrium (LTE) formulation and (2) a thermo-chemical non-equilibrium (TCNEQ) formulation. In the TCNEQ model, thermal non-equilibrium between the translational energy mode of the gas and the vibrational energy mode of individual molecules is accounted for. The electronic states of the chemical species are assumed in equilibrium with the vibrational temperature, whereas the rotational energy mode is assumed to be equilibrated with translation. Three different physical models are used to account for the coupling of chemistry and energy transfer processes. Numerical simulations obtained with the LTE and TCNEQ formulations are used to characterize the extent of non-equilibrium of the flow inside the Plasmatron facility at the von Karman Institute. Each model was tested using different kinetic mechanisms to assess the sensitivity of the results to variations in the reaction parameters. A comparison of temperatures and composition profiles at the outlet of the torch demonstrates that the flow is in non-equilibrium for operating conditions characterized by pressures below 30 000 Pa, frequency 0.37 MHz, input power 80 kW, and mass flow 8 g/s.

  19. Analysis of non-equilibrium phenomena in inductively coupled plasma generators

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Lani, A.; Panesi, M.

    2016-07-01

    This work addresses the modeling of non-equilibrium phenomena in inductively coupled plasma discharges. In the proposed computational model, the electromagnetic induction equation is solved together with the set of Navier-Stokes equations in order to compute the electromagnetic and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles, with the method of Chapman and Enskog. Particle ambipolar diffusive fluxes are found by solving the Stefan-Maxwell equations with a simple iterative method. Two physico-mathematical formulations are used to model the chemical reaction processes: (1) A Local Thermodynamics Equilibrium (LTE) formulation and (2) a thermo-chemical non-equilibrium (TCNEQ) formulation. In the TCNEQ model, thermal non-equilibrium between the translational energy mode of the gas and the vibrational energy mode of individual molecules is accounted for. The electronic states of the chemical species are assumed in equilibrium with the vibrational temperature, whereas the rotational energy mode is assumed to be equilibrated with translation. Three different physical models are used to account for the coupling of chemistry and energy transfer processes. Numerical simulations obtained with the LTE and TCNEQ formulations are used to characterize the extent of non-equilibrium of the flow inside the Plasmatron facility at the von Karman Institute. Each model was tested using different kinetic mechanisms to assess the sensitivity of the results to variations in the reaction parameters. A comparison of temperatures and composition profiles at the outlet of the torch demonstrates that the flow is in non-equilibrium for operating conditions characterized by pressures below 30 000 Pa, frequency 0.37 MHz, input power 80 kW, and mass flow 8 g/s.

  20. Analysis of electrolyte transport through charged nanopores.

    PubMed

    Peters, P B; van Roij, R; Bazant, M Z; Biesheuvel, P M

    2016-05-01

    We revisit the classical problem of flow of electrolyte solutions through charged capillary nanopores or nanotubes as described by the capillary pore model (also called "space charge" theory). This theory assumes very long and thin pores and uses a one-dimensional flux-force formalism which relates fluxes (electrical current, salt flux, and fluid velocity) and driving forces (difference in electric potential, salt concentration, and pressure). We analyze the general case with overlapping electric double layers in the pore and a nonzero axial salt concentration gradient. The 3×3 matrix relating these quantities exhibits Onsager symmetry and we report a significant new simplification for the diagonal element relating axial salt flux to the gradient in chemical potential. We prove that Onsager symmetry is preserved under changes of variables, which we illustrate by transformation to a different flux-force matrix given by Gross and Osterle [J. Chem. Phys. 49, 228 (1968)]. The capillary pore model is well suited to describe the nonlinear response of charged membranes or nanofluidic devices for electrokinetic energy conversion and water desalination, as long as the transverse ion profiles remain in local quasiequilibrium. As an example, we evaluate electrical power production from a salt concentration difference by reverse electrodialysis, using an efficiency versus power diagram. We show that since the capillary pore model allows for axial gradients in salt concentration, partial loops in current, salt flux, or fluid flow can develop in the pore. Predictions for macroscopic transport properties using a reduced model, where the potential and concentration are assumed to be invariant with radial coordinate ("uniform potential" or "fine capillary pore" model), are close to results of the full model.
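
    The symmetry-preservation argument has a simple numerical illustration: if the fluxes transform as J' = M J and the forces as X' = inv(M)^T X (so the dissipation J·X is unchanged), the transport matrix transforms as L' = M L M^T, which is symmetric whenever L is. This is a toy check, not the paper's algebra.

        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.normal(size=(3, 3))
        l_mat = a @ a.T                      # symmetric 3x3 Onsager matrix (current, salt flux, flow)
        m = rng.normal(size=(3, 3))          # arbitrary invertible change of flux variables
        l_new = m @ l_mat @ m.T              # transformed flux-force matrix
        print(np.allclose(l_new, l_new.T))   # True: Onsager symmetry is preserved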

  1. Elastic and viscoelastic calculations of stresses in sedimentary basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms and a more complete formulation, such as this study, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.

  2. Coherency strain and its effect on ionic conductivity and diffusion in solid electrolytes--an improved model for nanocrystalline thin films and a review of experimental data.

    PubMed

    Korte, C; Keppner, J; Peters, A; Schichtel, N; Aydin, H; Janek, J

    2014-11-28

    A phenomenological and analytical model for the influence of strain effects on atomic transport in columnar thin films is presented. A model system consisting of two types of crystalline thin films with coherent interfaces is assumed. Biaxial mechanical strain ε0 is caused by lattice misfit of the two phases. The conjoined films consist of columnar crystallites with a small diameter l. Strain relaxation by local elastic deformation, parallel to the hetero-interface, is possible along the columnar grain boundaries. The spatial extent δ0 of the strained hetero-interface regions can be calculated, assuming an exponential decay of the deformation-forces. The effect of the strain field on the local ionic transport in a thin film is then calculated by using the thermodynamic relation between (isostatic) pressure and free activation enthalpy ΔG^#. An expression describing the total ionic transport relative to bulk transport of a thin film or a multilayer as a function of the layer thickness is obtained as an integral average over strained and unstrained regions. The expression depends only on known material constants such as Young's modulus Y, Poisson's ratio ν and activation volume ΔV^#, which can be combined as dimensionless parameters. The model is successfully used to describe our own experimental data from conductivity and diffusion studies. In the second part of the paper a comprehensive literature overview of experimental studies on (fast) ion transport in thin films and multilayers along solid-solid hetero-interfaces is presented. By comparing and reviewing the data the observed interface effects can be classified into three groups: (i) transport along interfaces between extrinsic ionic conductors (and insulator), (ii) transport along an open surface of an extrinsic ionic conductor and (iii) transport along interfaces between intrinsic ionic conductors. The observed effects in these groups differ by about five orders of magnitude in a very consistent way. The modified interface transport in group (i) is most probably caused by strain effects, misfit dislocations or disordered transition regions.
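
    The layer-thickness dependence described above can be sketched as a thickness-weighted average of an interface region of width δ0, whose conductivity is rescaled by exp(-p·ΔV^#/kT) with p estimated from the misfit strain, and a bulk-like remainder. Everything below (the 2/3·Y·ε0/(1-ν) pressure estimate, the material numbers, the parameter names) is an illustrative assumption, not the paper's calibrated expression.

        import numpy as np

        KB = 1.380649e-23   # J/K

        def film_conductivity_ratio(thickness, eps0=0.01, delta0=2e-9, young=2e11,
                                    poisson=0.3, dv_act=2e-30, temp=700.0):
            """Ratio of in-plane ionic conductivity of a strained film to the bulk value."""
            p_int = -(2.0 / 3.0) * young * eps0 / (1.0 - poisson)   # mean (isostatic) stress, Pa
            enhance = np.exp(-p_int * dv_act / (KB * temp))          # sigma_interface / sigma_bulk
            frac = np.clip(2.0 * delta0 / thickness, 0.0, 1.0)       # two strained interface regions per layer
            return frac * enhance + (1.0 - frac)                     # transport parallel to the interface

        for d in (5e-9, 10e-9, 50e-9, 200e-9):
            print(f"{d * 1e9:5.0f} nm: sigma/sigma_bulk = {film_conductivity_ratio(d):.2f}")

    The ratio approaches 1 for thick layers, reproducing the qualitative layer-thickness trend the model is built to describe.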

  3. Analysis of the localization of Michelson interferometer fringes using Fourier optics and temporal coherence

    NASA Astrophysics Data System (ADS)

    Narayanamurthy, C. S.

    2009-01-01

    Fringes formed in a Michelson interferometer never localize in any plane, neither in the detector plane nor in the localization plane. Instead, the fringes are assumed to localize at infinity. Except for some explanation in Principles of Optics by Born and Wolf (1964, New York: Macmillan), the fringe localization phenomena of Michelson's interferometer have never been analysed seriously in any book. Because Michelson's interferometer is one of the important and fundamental optical experiments taught at both undergraduate and graduate levels, it would be appropriate to explain the localization of these fringes. In this paper, we analyse the localization of Michelson interferometer fringes using Fourier optics and temporal coherence, and show that they never localize in any plane, even at infinity.

  4. Impact of Future Climate on Radial Growth of Four Major Boreal Tree Species in the Eastern Canadian Boreal Forest

    PubMed Central

    Huang, Jian-Guo; Bergeron, Yves; Berninger, Frank; Zhai, Lihong; Tardif, Jacques C.; Denneler, Bernhard

    2013-01-01

    Immediate phenotypic variation and the lagged effect of evolutionary adaptation to climate change appear to be two key processes in tree responses to climate warming. This study examines these components in two types of growth models for predicting the 2010–2099 diameter growth change of four major boreal species Betula papyrifera, Pinus banksiana, Picea mariana, and Populus tremuloides along a broad latitudinal gradient in eastern Canada under future climate projections. Climate-growth response models for 34 stands over nine latitudes were calibrated and cross-validated. An adaptive response model (A-model), in which the climate-growth relationship varies over time, and a fixed response model (F-model), in which the relationship is constant over time, were constructed to predict future growth. For the former, we examined how future growth of stands in northern latitudes could be forecasted using growth-climate equations derived from stands currently growing in southern latitudes assuming that current climate in southern locations provide an analogue for future conditions in the north. For the latter, we tested if future growth of stands would be maximally predicted using the growth-climate equation obtained from the given local stand assuming a lagged response to climate due to genetic constraints. Both models predicted a large growth increase in northern stands due to more benign temperatures, whereas there was a minimal growth change in southern stands due to potentially warm-temperature induced drought-stress. The A-model demonstrates a changing environment whereas the F-model highlights a constant growth response to future warming. As time elapses we can predict a gradual transition between a response to climate associated with the current conditions (F-model) to a more adapted response to future climate (A-model). Our modeling approach provides a template to predict tree growth response to climate warming at mid-high latitudes of the Northern Hemisphere. PMID:23468879

  5. Impact of future climate on radial growth of four major boreal tree species in the Eastern Canadian boreal forest.

    PubMed

    Huang, Jian-Guo; Bergeron, Yves; Berninger, Frank; Zhai, Lihong; Tardif, Jacques C; Denneler, Bernhard

    2013-01-01

    Immediate phenotypic variation and the lagged effect of evolutionary adaptation to climate change appear to be two key processes in tree responses to climate warming. This study examines these components in two types of growth models for predicting the 2010-2099 diameter growth change of four major boreal species Betula papyrifera, Pinus banksiana, Picea mariana, and Populus tremuloides along a broad latitudinal gradient in eastern Canada under future climate projections. Climate-growth response models for 34 stands over nine latitudes were calibrated and cross-validated. An adaptive response model (A-model), in which the climate-growth relationship varies over time, and a fixed response model (F-model), in which the relationship is constant over time, were constructed to predict future growth. For the former, we examined how future growth of stands in northern latitudes could be forecasted using growth-climate equations derived from stands currently growing in southern latitudes assuming that current climate in southern locations provide an analogue for future conditions in the north. For the latter, we tested if future growth of stands would be maximally predicted using the growth-climate equation obtained from the given local stand assuming a lagged response to climate due to genetic constraints. Both models predicted a large growth increase in northern stands due to more benign temperatures, whereas there was a minimal growth change in southern stands due to potentially warm-temperature induced drought-stress. The A-model demonstrates a changing environment whereas the F-model highlights a constant growth response to future warming. As time elapses we can predict a gradual transition between a response to climate associated with the current conditions (F-model) to a more adapted response to future climate (A-model). Our modeling approach provides a template to predict tree growth response to climate warming at mid-high latitudes of the Northern Hemisphere.

  6. Estimating peer density effects on oral health for community-based older adults.

    PubMed

    Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S

    2017-12-29

    As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influence social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with 4 kernel bandwidths ranging from 0.25 to 1.50 mile. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships (p < 0.01) between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment. To the extent that peer density signifies the potential for social interaction and support, the positive significant effects of peer density on improved oral health point to the importance of place in promoting social interaction as a component of healthy aging. Proximity to peers and their knowledge of local resources may facilitate utilization of community-based oral health care.
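
    The two-stage design (kernel smoothing of census peer locations, then regression of the oral health outcome on the local density) can be sketched as follows. The data below are simulated placeholders and the bandwidth is only indicative of the smaller, more local end of the range used in the study.

        import numpy as np
        from sklearn.neighbors import KernelDensity
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        peer_xy = rng.uniform(0.0, 10.0, size=(500, 2))         # census-derived peer locations (km)
        subj_xy = rng.uniform(0.0, 10.0, size=(200, 2))         # screening participants

        kde = KernelDensity(bandwidth=0.8).fit(peer_xy)         # roughly a half-mile bandwidth, illustrative
        density = np.exp(kde.score_samples(subj_xy))            # peer density at each participant's location

        age = rng.integers(65, 90, size=200)
        z = (density - density.mean()) / density.std()
        y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-z))).astype(int)   # simulated self-rated oral health
        model = LogisticRegression().fit(np.column_stack([density, age]), y)
        print(model.coef_)                                      # sign and size of the peer-density effect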

  7. Free energy decomposition of protein-protein interactions.

    PubMed

    Noskov, S Y; Lim, C

    2001-08-01

    A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas-phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments; viz., the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400--1700 A(2)) were found to drive antibody-antigen and protease-inhibitor binding.
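
    Written as a thermodynamic cycle, the additivity assumption amounts to the expression below; the exact grouping of terms is an interpretation of the three processes named above, with the electrostatic solvation contributions supplied by the Generalized Born model plus a nonpolar part:

        \Delta G_{\mathrm{bind}} \;\approx\;
          \big[\Delta G_{\mathrm{solv}}(\mathrm{AB})
              -\Delta G_{\mathrm{solv}}(\mathrm{A})
              -\Delta G_{\mathrm{solv}}(\mathrm{B})\big]
          \;+\; \Delta G^{\mathrm{gas}}_{\mathrm{isom}}
          \;+\; \Delta G^{\mathrm{gas}}_{\mathrm{assoc}}

    The rigid-binding approximation discussed above corresponds to dropping the isomerization term and evaluating all structures at the bound-state coordinates.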

  8. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The availability of inexpensive surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems that are used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the positions at which they are deployed. Automatic Identification System (AIS) data, which include position, course and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration to be used for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for camera geometry and propose computing the perspective matrices using AIS positional data. Images obtained from calibrated cameras are then matched and pixel association is utilized for the localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily-identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
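
    One way to obtain the perspective (projection) matrices from AIS fixes is the standard Direct Linear Transform: each AIS report supplies a world point in local metric coordinates, and the detected vessel supplies the matching pixel. The sketch below is illustrative (synthetic data, no lens-distortion or noise handling), not the paper's exact pipeline.

        import numpy as np

        def dlt_projection_matrix(world_pts, image_pts):
            """Estimate the 3x4 pinhole projection matrix P from N >= 6 correspondences."""
            rows = []
            for (x, y, z), (u, v) in zip(world_pts, image_pts):
                rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
                rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
            _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
            return vt[-1].reshape(3, 4)                     # null vector, defined up to scale

        def project(p_mat, world_pt):
            uvw = p_mat @ np.append(world_pt, 1.0)
            return uvw[:2] / uvw[2]

        # Toy check with a synthetic camera and eight "AIS" points.
        rng = np.random.default_rng(0)
        p_true = rng.normal(size=(3, 4))
        p_true[2] = [0.0, 0.0, 0.01, 1.0]                   # keep projective depths away from zero
        world = rng.uniform(10.0, 100.0, size=(8, 3))
        image = np.array([project(p_true, w) for w in world])
        p_est = dlt_projection_matrix(world, image)
        print(max(np.linalg.norm(project(p_est, w) - project(p_true, w)) for w in world))

    Vessels without AIS transponders are then localized by matching pixels between the two calibrated cameras and triangulating the corresponding back-projected rays.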

  9. Local Control: Fear or Fantasy. A Report of the New Jersey Education Reform Project.

    ERIC Educational Resources Information Center

    Fuhrman, Susan H.

    Today local control over education seems to face the most serious challenge in its history. The movement to reform school finance raises the specter of the State assuming its formal consitutional powers and removing autonomy from the communities. Hence, it is argued, as the State takes over control of taxation and expenditures it will want to…

  10. Transgressive Local Act: Tackling Domestic Violence with Forum and Popular Theatre in "Sisterhood Bound as Yuan Ze Flowers"

    ERIC Educational Resources Information Center

    Wang, Wan-Jung

    2010-01-01

    This paper examines a community theatre project in Kaohsiung County, Taiwan that aimed to tackle domestic violence through a collaboration between local community female elders and the facilitator. The paper investigates how an outside facilitator could unfix the assumed community identities which tend to exclude outsiders or sub-groups, in this…

  11. "A False Dilemma": Should Decisions about Education Resource Use Be Made at the State or Local Level?

    ERIC Educational Resources Information Center

    Timar, Thomas B.; Roza, Marguerite

    2010-01-01

    Over the past 30 years, states have assumed a greater role in financing education. The presumption of local control has been superseded by systems of state control. This shift in authority raises several critical questions. Chief among them is, "What effect has centralization of education financing had on the capacity of school districts to…

  12. Being Close: The Quality of Social Relationships in a Local Organic Cereal and Bread Network in Lower Austria

    ERIC Educational Resources Information Center

    Milestad, Rebecka; Bartel-Kratochvil, Ruth; Leitner, Heidrun; Axmann, Paul

    2010-01-01

    Experience of the drawbacks of a globalised and industrialised food system has generated interest in localised food systems. Local food networks are regarded as more sustainable food provision systems since they are assumed to have high levels of social embeddedness and relations of regard. This paper explores the social relations between food…

  13. Spatial heterogeneity of the relationships between environmental characteristics and active commuting: towards a locally varying social ecological model.

    PubMed

    Feuillet, Thierry; Charreire, Hélène; Menai, Mehdi; Salze, Paul; Simon, Chantal; Dugas, Julien; Hercberg, Serge; Andreeva, Valentina A; Enaux, Christophe; Weber, Christiane; Oppert, Jean-Michel

    2015-03-25

    According to the social ecological model of health-related behaviors, it is now well accepted that environmental factors influence habitual physical activity. Most previous studies on physical activity determinants have assumed spatial homogeneity across the study area, i.e. that the association between the environment and physical activity is the same whatever the location. The main novelty of our study was to explore geographical variation in the relationships between active commuting (walking and cycling to/from work) and residential environmental characteristics. 4,164 adults from the ongoing Nutrinet-Santé web-cohort, residing in and around Paris, France, were studied using a geographically weighted Poisson regression (GWPR) model. Objective environmental variables, including both the built and the socio-economic characteristics around the place of residence of individuals, were assessed by GIS-based measures. Perceived environmental factors (index including safety, aesthetics, and pollution) were reported by questionnaires. Our results show that the influence of the overall neighborhood environment appeared to be more pronounced in the suburban southern part of the study area (Val-de-Marne) compared to Paris inner city, whereas more complex patterns were found elsewhere. Active commuting was positively associated with the built environment only in the southern and northeastern parts of the study area, whereas positive associations with the socio-economic environment were found only in some specific locations in the southern and northern parts of the study area. Similar local variations were observed for the perceived environmental variables. These results suggest that: (i) when applied to active commuting, the social ecological conceptual framework should be locally nuanced, and (ii) local rather than global targeting of public health policies might be more efficient in promoting active commuting.
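
    A geographically weighted Poisson regression refits a Poisson model at every regression point with kernel weights that decay with distance, so the coefficients are allowed to vary over space. The sketch below implements that idea directly with a Newton solver on the weighted likelihood; the bandwidth and the simulated data are placeholders rather than the study's calibrated values.

        import numpy as np

        def weighted_poisson_fit(x, y, w, n_iter=25):
            """Newton-Raphson for a Poisson GLM with observation weights w."""
            beta = np.zeros(x.shape[1])
            for _ in range(n_iter):
                mu = np.exp(x @ beta)
                grad = x.T @ (w * (y - mu))
                hess = x.T @ (x * (w * mu)[:, None]) + 1e-8 * np.eye(x.shape[1])
                beta = beta + np.linalg.solve(hess, grad)
            return beta

        def gwpr(coords, x, y, bandwidth):
            betas = []
            for c in coords:                                   # one local fit per regression point
                w = np.exp(-np.sum((coords - c) ** 2, axis=1) / (2.0 * bandwidth**2))
                betas.append(weighted_poisson_fit(x, y, w))
            return np.array(betas)

        rng = np.random.default_rng(0)
        coords = rng.uniform(0.0, 10.0, size=(150, 2))                 # residential locations (km)
        env = rng.normal(size=150)                                     # e.g., a built-environment score
        true_beta = 0.3 * (coords[:, 0] > 5.0)                         # the effect only exists in one sub-area
        y = rng.poisson(np.exp(0.2 + true_beta * env))                 # simulated active-commuting counts
        x = np.column_stack([np.ones(150), env])
        local_beta = gwpr(coords, x, y, bandwidth=2.0)
        print(local_beta[:, 1].min(), local_beta[:, 1].max())          # spatial range of the local effect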

  14. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be effectively implemented by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by the water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which together make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., the water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially dynamic, driven by fluctuating consumer water demand. In this paper, the water demand is considered to be stochastic in nature and is described using either a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the mismatch between the simulated and detected sensor values. Simulation experiments were conducted using urban water supply networks of two different sizes, and the experimental results were compared with those of a standard genetic algorithm.
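
    The dynamic water demand enters the problem as a stochastic input. Two simple generators of the kind named above, plus the mismatch objective that the optimization minimizes, can be sketched as follows; the hydraulic/water-quality simulator is left as a placeholder (in practice a network solver such as EPANET), and all parameter values are illustrative.

        import numpy as np

        def gaussian_demand(base, sigma, steps, rng):
            """Independent Gaussian fluctuations around the base nodal demands."""
            return rng.normal(base, sigma, size=(steps, base.size))

        def ar1_demand(base, phi, sigma, steps, rng):
            """Autoregressive demand: fluctuations carry memory phi between time steps."""
            d = np.empty((steps, base.size))
            d[0] = base
            for t in range(1, steps):
                d[t] = base + phi * (d[t - 1] - base) + rng.normal(0.0, sigma, base.size)
            return d

        def mismatch(candidate, measured, simulate):
            """Objective for one candidate (source node, injected concentration)."""
            simulated = simulate(candidate)        # placeholder for a hydraulic/quality simulation
            return np.sum((simulated - measured) ** 2)

    A genetic algorithm (or the proposed variant) then searches over candidate source locations and concentrations to minimize this mismatch under the sampled demand scenarios.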

  15. Local Burn-Up Effects in the NBSR Fuel Element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown N. R.; Hanson A.; Diamond, D.

    2013-01-31

    This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when incorporating local burn-up effects and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element with uniform burn-up. In the uniform burn-up case, the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. Uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.

  16. Theory of exciton transfer and diffusion in conjugated polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barford, William, E-mail: william.barford@chem.ox.ac.uk; Tozer, Oliver Robert; University College, University of Oxford, Oxford OX1 4BH

    We describe a theory of Förster-type exciton transfer between conjugated polymers. The theory is built on three assumptions. First, we assume that the low-lying excited states of conjugated polymers are Frenkel excitons coupled to local normal modes, and described by the Frenkel-Holstein model. Second, we assume that the relevant parameter regime is ℏω < J, i.e., the adiabatic regime, and thus the Born-Oppenheimer factorization of the electronic and nuclear degrees of freedom is generally applicable. Finally, we assume that the Condon approximation is valid, i.e., the exciton-polaron wavefunction is essentially independent of the normal modes. The resulting expression for the exciton transfer rate has a familiar form, being a function of the exciton transfer integral and the effective Franck-Condon factors. The effective Franck-Condon factors are functions of the effective Huang-Rhys parameters, which are inversely proportional to the chromophore size. The Born-Oppenheimer expressions were checked against DMRG calculations, and are found to be within 10% of the exact value for a tiny fraction of the computational cost. This theory of exciton transfer is then applied to model exciton migration in conformationally disordered poly(p-phenylene vinylene). Key to this modeling is the assumption that the donor and acceptor chromophores are defined by local exciton ground states (LEGSs). Since LEGSs are readily determined by the exciton center-of-mass wavefunction, this theory provides a quantitative link between polymer conformation and exciton migration. Our Monte Carlo simulations indicate that the exciton diffusion length depends weakly on the conformation of the polymer, with the diffusion length increasing slightly as the chromophores became straighter and longer. This is largely a geometrical effect: longer and straighter chromophores extend over larger distances. The calculated diffusion lengths of ∼10 nm are in good agreement with experiment. The spectral properties of the migrating excitons are also investigated. The emission intensity ratio of the 0-0 and 0-1 vibronic peaks is related to the effective Huang-Rhys parameter of the emitting state, which in turn is related to the chromophore size. The intensity ratios calculated from the effective Huang-Rhys parameters are in agreement with experimental spectra, and the time-resolved trend for the intensity ratio to decrease with time was also reproduced as the excitation migrates to shorter, lower energy chromophores as a function of time. In addition, the energy of the exciton state shows a logarithmic decrease with time, in agreement with experimental observations.
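
    The size dependence discussed above is easy to make concrete: for displaced harmonic oscillators the Franck-Condon factors are FC(0->n) = exp(-S)·S^n/n!, with an effective Huang-Rhys parameter S that scales inversely with the chromophore length, so the 0-0/0-1 intensity ratio grows roughly as 1/S for longer chromophores. The numbers below are illustrative, not the paper's parameters.

        import numpy as np
        from math import factorial

        def effective_huang_rhys(s_monomer, n_monomers):
            """Effective Huang-Rhys parameter, inversely proportional to chromophore size."""
            return s_monomer / n_monomers

        def franck_condon(s_eff, n):
            return np.exp(-s_eff) * s_eff**n / factorial(n)

        for n_mono in (4, 8, 16):
            s_eff = effective_huang_rhys(1.0, n_mono)
            ratio = franck_condon(s_eff, 0) / franck_condon(s_eff, 1)   # ~ I(0-0)/I(0-1), frequency factors ignored
            print(f"{n_mono:2d} monomers: S_eff = {s_eff:.3f}, 0-0/0-1 ratio ~ {ratio:.1f}")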

  17. Theory of exciton transfer and diffusion in conjugated polymers.

    PubMed

    Barford, William; Tozer, Oliver Robert

    2014-10-28

    We describe a theory of Förster-type exciton transfer between conjugated polymers. The theory is built on three assumptions. First, we assume that the low-lying excited states of conjugated polymers are Frenkel excitons coupled to local normal modes, and described by the Frenkel-Holstein model. Second, we assume that the relevant parameter regime is ℏω < J, i.e., the adiabatic regime, and thus the Born-Oppenheimer factorization of the electronic and nuclear degrees of freedom is generally applicable. Finally, we assume that the Condon approximation is valid, i.e., the exciton-polaron wavefunction is essentially independent of the normal modes. The resulting expression for the exciton transfer rate has a familiar form, being a function of the exciton transfer integral and the effective Franck-Condon factors. The effective Franck-Condon factors are functions of the effective Huang-Rhys parameters, which are inversely proportional to the chromophore size. The Born-Oppenheimer expressions were checked against DMRG calculations, and are found to be within 10% of the exact value for a tiny fraction of the computational cost. This theory of exciton transfer is then applied to model exciton migration in conformationally disordered poly(p-phenylene vinylene). Key to this modeling is the assumption that the donor and acceptor chromophores are defined by local exciton ground states (LEGSs). Since LEGSs are readily determined by the exciton center-of-mass wavefunction, this theory provides a quantitative link between polymer conformation and exciton migration. Our Monte Carlo simulations indicate that the exciton diffusion length depends weakly on the conformation of the polymer, with the diffusion length increasing slightly as the chromophores became straighter and longer. This is largely a geometrical effect: longer and straighter chromophores extend over larger distances. The calculated diffusion lengths of ~10 nm are in good agreement with experiment. The spectral properties of the migrating excitons are also investigated. The emission intensity ratio of the 0-0 and 0-1 vibronic peaks is related to the effective Huang-Rhys parameter of the emitting state, which in turn is related to the chromophore size. The intensity ratios calculated from the effective Huang-Rhys parameters are in agreement with experimental spectra, and the time-resolved trend for the intensity ratio to decrease with time was also reproduced as the excitation migrates to shorter, lower energy chromophores as a function of time. In addition, the energy of the exciton state shows a logarithmic decrease with time, in agreement with experimental observations.

  18. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT out-performs several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
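
    The prototypical non-circular rule with which such a framework can be instantiated is the familiar assume-guarantee rule below, where <A> M <P> means that component M guarantees property P in any environment satisfying assumption A; the learning algorithm's role is to construct a suitable assumption A automatically:

        \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
              \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
             {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}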

  19. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
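
    For a single site the hierarchical structure described above amounts to a GEV likelihood with log-normal priors on the location and scale parameters (medians scaling with drainage area) and a shrunken, approximately common shape parameter. A hedged sketch of the joint log-posterior for one site, with placeholder hyper-parameter names and values, is:

        import numpy as np
        from scipy import stats

        def site_log_posterior(params, maxima, area, hyper):
            """Log-posterior of (mu, sigma, xi) for one gauge; scipy's shape is c = -xi."""
            mu, sigma, xi = params
            if sigma <= 0.0:
                return -np.inf
            loglik = stats.genextreme.logpdf(maxima, c=-xi, loc=mu, scale=sigma).sum()
            lp_mu = stats.lognorm.logpdf(mu, s=hyper["tau_mu"],
                                         scale=hyper["a_mu"] * area ** hyper["b_mu"])
            lp_sigma = stats.lognorm.logpdf(sigma, s=hyper["tau_sigma"],
                                            scale=hyper["a_sigma"] * area ** hyper["b_sigma"])
            lp_xi = stats.norm.logpdf(xi, loc=hyper["xi_mean"], scale=hyper["xi_sd"])
            return loglik + lp_mu + lp_sigma + lp_xi

        hyper = dict(a_mu=5.0, b_mu=0.6, tau_mu=0.5, a_sigma=2.0, b_sigma=0.5,
                     tau_sigma=0.5, xi_mean=0.1, xi_sd=0.1)                  # illustrative hyper-parameters
        maxima = np.array([120.0, 95.0, 150.0, 110.0, 180.0, 99.0, 140.0])   # fake annual maxima (m3/s)
        print(site_log_posterior((110.0, 30.0, 0.1), maxima, area=2500.0, hyper=hyper))

    An MCMC sampler then updates the site parameters and the hyper-parameters jointly, which is the sampling step referred to above; the out-of-sample (regional) prediction replaces the site-specific posteriors with the predictive distributions implied by the fitted scaling law.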

  20. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
