Sample records for mean-field based model

  1. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We found that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex as a response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally-recorded spatio-temporal patterns.
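As a rough illustration of the coupled-population structure described above, the sketch below integrates two mean-field rate equations with a generic sigmoidal transfer function standing in for the paper's semi-analytic AdEx transfer function; all coupling values and time constants are illustrative assumptions, not the published parameters.

```python
import math

def transfer(i):
    """Placeholder sigmoidal transfer function (the paper derives a
    semi-analytic one for AdEx neurons; this stand-in only illustrates
    the coupled-population structure)."""
    return 30.0 / (1.0 + math.exp(-(i - 2.0)))

def simulate(t_end=0.5, dt=1e-3, tau=5e-3, ext=3.0):
    """Euler integration of two coupled mean-field rate equations for
    excitatory (RS) and inhibitory (FS) populations."""
    nu_e, nu_i = 1.0, 1.0                              # population rates (Hz)
    w_ee, w_ei, w_ie, w_ii = 0.1, -0.2, 0.15, -0.1     # assumed couplings
    for _ in range(int(t_end / dt)):
        in_e = w_ee * nu_e + w_ei * nu_i + ext
        in_i = w_ie * nu_e + w_ii * nu_i + ext
        nu_e += dt / tau * (transfer(in_e) - nu_e)
        nu_i += dt / tau * (transfer(in_i) - nu_i)
    return nu_e, nu_i
```

Because each update is a convex combination of the current rate and a bounded transfer value, the rates stay within the transfer function's range.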

  2. Modeling asset price processes based on mean-field framework

    NASA Astrophysics Data System (ADS)

    Ieda, Masashi; Shiino, Masatoshi

    2011-12-01

    We propose a model of the dynamics of financial assets based on the mean-field framework. This framework allows us to construct a model which includes the interaction among financial assets, reflecting the market structure. Our study is at the cutting edge in the sense of taking a microscopic approach to modeling the financial market. To demonstrate the effectiveness of our model concretely, we provide a case study: the pricing problem of a European call option with short-time memory noise.

  3. Structural versus dynamical origins of mean-field behavior in a self-organized critical model of neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Moosavi, S. Amin; Montakhab, Afshin

    2015-11-01

    Critical dynamics of cortical neurons have been intensively studied over the past decade. Neuronal avalanches provide the main experimental as well as theoretical tools to consider criticality in such systems. Experimental studies show that critical neuronal avalanches show mean-field behavior. There are structural as well as recently proposed [Phys. Rev. E 89, 052139 (2014), 10.1103/PhysRevE.89.052139] dynamical mechanisms that can lead to mean-field behavior. In this work we consider a simple model of neuronal dynamics based on threshold self-organized critical models with synaptic noise. We investigate the role of high-average connectivity, random long-range connections, as well as synaptic noise in achieving mean-field behavior. We employ finite-size scaling in order to extract critical exponents with good accuracy. We conclude that relevant structural mechanisms responsible for mean-field behavior cannot be justified in realistic models of the cortex. However, strong dynamical noise, which can have realistic justifications, always leads to mean-field behavior regardless of the underlying structure. Our work provides a different (dynamical) origin than the conventionally accepted (structural) mechanisms for mean-field behavior in neuronal avalanches.

  4. Role of ion hydration for the differential capacitance of an electric double layer.

    PubMed

    Caetano, Daniel L Z; Bossa, Guilherme V; de Oliveira, Vinicius M; Brown, Matthew A; de Carvalho, Sidney J; May, Sylvio

    2016-10-12

    The influence of soft, hydration-mediated ion-ion and ion-surface interactions on the differential capacitance of an electric double layer is investigated using Monte Carlo simulations and compared to various mean-field models. We focus on a planar electrode surface at physiological concentration of monovalent ions in a uniform dielectric background. Hydration-mediated interactions are modeled on the basis of Yukawa potentials that add to the Coulomb and excluded volume interactions between ions. We present a mean-field model that includes hydration-mediated anion-anion, anion-cation, and cation-cation interactions of arbitrary strengths. In addition, finite ion sizes are accounted for through excluded volume interactions, described either on the basis of the Carnahan-Starling equation of state or using a lattice gas model. Both our Monte Carlo simulations and mean-field approaches predict a characteristic double-peak (the so-called camel shape) of the differential capacitance; its decrease reflects the packing of the counterions near the electrode surface. The presence of hydration-mediated ion-surface repulsion causes a thin charge-depleted region close to the surface, which is reminiscent of a Stern layer. We analyze the interplay between excluded volume and hydration-mediated interactions on the differential capacitance and demonstrate that for small surface charge density our mean-field model based on the Carnahan-Starling equation is able to capture the Monte Carlo simulation results. In contrast, for large surface charge density the mean-field approach based on the lattice gas model is preferable.

  5. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE PAGES

    Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...

    2017-12-28

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  7. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    PubMed

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

    Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of the so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implication when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6% to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB, relative to corresponding variance estimates of pure field-based AGB, decreased with increasing plot size in the range from 200 to 3000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation when increasing field-plot size was greater for an ALS-assisted inventory compared to that of a pure field-based inventory.
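The relative root mean square error quoted above is the RMSE of the model predictions expressed as a percentage of the observed mean; a minimal sketch with made-up numbers, not the study's data:

```python
import math

def relative_rmse(observed, predicted):
    """Root mean square error of the predictions, as a percentage of
    the mean of the observed values."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```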

  8. Polymer-induced forces at interfaces

    NASA Astrophysics Data System (ADS)

    Rangarajan, Murali

    This dissertation concerns studies of forces generated by confined and physisorbed flexible polymers using lattice mean-field theories, and those generated by confined and clamped semiflexible polymers modeled as slender elastic rods. Lattice mean-field theories have been used in understanding and predicting the behavior of polymeric interfacial systems. In order to efficiently tailor such systems for various applications of interest, one has to understand the forces generated in the interface due to the polymer molecules. The present work examines the abilities and limitations of lattice mean-field theories in predicting the structure of physisorbed polymer layers and the resultant forces. Within the lattice mean-field theory, a definition of normal force of compression as the negative derivative of the partition-function-based excess free energy with surface separation gives misleading results because the theory does not explicitly account for the normal stresses involved in the system. Correct expressions for normal and tangential forces are obtained from a continuum-mechanics-based formulation. Preliminary comparisons with lattice Monte Carlo simulations show that mean-field theories fail to predict significant attractive forces when the surfaces are undersaturated, as one would expect. The corrections to the excluded volume (non-reversal chains) and the mean-field (anisotropic field) approximations improve the predictions of layer structure, but not the forces. Bending of semiflexible polymer chains (elastic rods) is considered for two boundary conditions---where the chain is hinged on both ends and where the chain is clamped on one end and hinged on the other. For the former case, the compressive forces and chain shapes obtained are consistent with the inflexional elastica published by Love. For the latter, multiple and higher-order solutions are observed for the hinged-end position for a given force. 
Preliminary studies are conducted on actin-based motility of Listeria monocytogenes by treating actin filaments as elastic rods, using the actoclampin model. The results show qualitative agreement with calculations where the filaments are modeled as Hookean springs. The feasibility of the actoclampin model to address long length-scale rotation of Listeria during actin-based motility is addressed.

  9. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC) method. SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool for statistical purposes as well.
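A minimal sketch of the stochastic ingredient: a deterministic nearest-neighbour exchange step (standing in for the full KMF rate expressions) plus Langevin noise applied antisymmetrically on each bond, so that total composition is conserved. The exchange rate and noise amplitude are illustrative, not the published model's parameters.

```python
import math, random

def skmf_step(c, gamma=1.0, dt=0.01, amp=0.05, rng=random):
    """One explicit Euler step of a 1-D stochastic kinetic mean-field
    sketch on a chain of site compositions c. Each bond carries a
    deterministic exchange flux plus a Gaussian noise term; adding the
    flux to one site and subtracting it from the other conserves the
    total composition exactly."""
    n = len(c)
    change = [0.0] * n
    for i in range(n - 1):
        # bond flux: deterministic exchange + antisymmetric Langevin noise
        j = gamma * (c[i + 1] - c[i]) * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        change[i] += j
        change[i + 1] -= j
    return [ci + di for ci, di in zip(c, change)]
```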

  10. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    PubMed

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
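The qualitative finding (more homogeneous weights give higher prevalence) can be reproduced with a simple edge-based fixed-point iteration. The per-edge transmission function T(w) = 1 - exp(-beta*w) and all parameter values below are illustrative assumptions, not the paper's formulation; the nonlinearity in w is what lets two weight distributions with the same mean produce different prevalences.

```python
import math

def steady_prevalence(weights, probs, beta=1.0, mu=0.5, k=4, iters=2000):
    """Fixed-point iteration of a discrete-time edge-based mean-field
    SIS sketch: a susceptible node with k edges escapes infection
    through an edge of weight w with probability 1 - T(w)*rho, where
    T(w) = 1 - exp(-beta*w) is the per-edge transmission."""
    rho = 0.5
    for _ in range(iters):
        q = sum(p * (1.0 - (1.0 - math.exp(-beta * w)) * rho)
                for w, p in zip(weights, probs))
        rho = rho * (1.0 - mu) + (1.0 - rho) * (1.0 - q ** k)
    return rho

# Same mean weight, different spread:
rho_homog = steady_prevalence([1.0], [1.0])
rho_heter = steady_prevalence([0.5, 1.5], [0.5, 0.5])
```

By Jensen's inequality the heterogeneous distribution has the larger expected escape probability, so its steady-state prevalence is lower.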

  11. Quantum Monte Carlo study of the transverse-field quantum Ising model on infinite-dimensional structures

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Um, Jaegon; Yi, Su Do; Kim, Beom Jun

    2011-11-01

    In a number of classical statistical-physical models, there exists a characteristic dimensionality called the upper critical dimension above which one observes the mean-field critical behavior. Instead of constructing high-dimensional lattices, however, one can also consider infinite-dimensional structures, and the question is whether this mean-field character extends to quantum-mechanical cases as well. We therefore investigate the transverse-field quantum Ising model on the globally coupled network and on the Watts-Strogatz small-world network by means of quantum Monte Carlo simulations and the finite-size scaling analysis. We confirm that both of the structures exhibit critical behavior consistent with the mean-field description. In particular, we show that the existing cumulant method has difficulty in estimating the correct dynamic critical exponent and suggest that an order parameter based on the quantum-mechanical expectation value can be a practically useful numerical observable to determine critical behavior when there is no well-defined dimensionality.

  12. A model of geomagnetic secular variation for 1980-1983

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1987-01-01

    We developed an updated model of the secular variation of the main geomagnetic field during 1980 through 1983, based on annual mean values for that interval from 148 worldwide magnetic observatories. The model consists of a series of 80 spherical harmonics, up to and including those of degree and order 8. We used it to form a proposal for the 1985 revision of the International Geomagnetic Reference Field (IGRF). Comparison of the new model, whose mean epoch is approximately 1982.0, with the Provisional Geomagnetic Reference Field for 1975-1980 (PGRF 1975) indicates that the moment of the centered-dipole part of the geomagnetic field is now decreasing faster than it was 5 years ago. The rate (in field units) indicated by PGRF 1975 was about -25 nT a⁻¹, while for the new model it is -28 nT a⁻¹.
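The count of 80 spherical harmonics follows directly from the truncation at degree and order 8, since an internal-field expansion has 2n + 1 Gauss coefficients (g and h terms) at each degree n:

```python
def gauss_coefficient_count(n_max):
    """Number of Gauss coefficients g_n^m, h_n^m for an internal
    geomagnetic field model truncated at degree and order n_max:
    each degree n contributes 2n + 1 independent coefficients."""
    return sum(2 * n + 1 for n in range(1, n_max + 1))
```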

  13. Evaluation of candidate geomagnetic field models for IGRF-11

    NASA Astrophysics Data System (ADS)

    Finlay, C. C.; Maus, S.; Beggan, C. D.; Hamoudi, M.; Lowes, F. J.; Olsen, N.; Thébault, E.

    2010-10-01

    The eleventh generation of the International Geomagnetic Reference Field (IGRF) was agreed in December 2009 by a task force appointed by the International Association of Geomagnetism and Aeronomy (IAGA) Division V Working Group V-MOD. New spherical harmonic main field models for epochs 2005.0 (DGRF-2005) and 2010.0 (IGRF-2010), and predictive linear secular variation for the interval 2010.0-2015.0 (SV-2010-2015) were derived from weighted averages of candidate models submitted by teams led by DTU Space, Denmark (team A); NOAA/NGDC, U.S.A. (team B); BGS, U.K. (team C); IZMIRAN, Russia (team D); EOST, France (team E); IPGP, France (team F); GFZ, Germany (team G) and NASA-GSFC, U.S.A. (team H). Here, we report the evaluations of candidate models carried out by the IGRF-11 task force during October/November 2009 and describe the weightings used to derive the new IGRF-11 model. The evaluations include calculations of root mean square vector field differences between the candidates, comparisons of the power spectra, and degree correlations between the candidates and a mean model. Coefficient by coefficient analysis including determination of weighting factors used in a robust estimation of mean coefficients is also reported. Maps of differences in the vertical field intensity at Earth's surface between the candidates and weighted mean models are presented. Candidates with anomalous aspects are identified and efforts made to pinpoint both troublesome coefficients and geographical regions where large variations between candidates originate. A retrospective analysis of IGRF-10 main field candidates for epoch 2005.0 and predictive secular variation candidates for 2005.0-2010.0 using the new IGRF-11 models as a reference is also reported. The high quality and consistency of main field models derived using vector satellite data is demonstrated; based on internal consistency DGRF-2005 has a formal root mean square vector field error over Earth's surface of 1.0 nT. 
Difficulties nevertheless remain in accurately forecasting field evolution only five years into the future.
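The root mean square vector field differences mentioned above are conventionally computed from the Gauss coefficients via the Lowes-Mauersberger form; a sketch, assuming candidate models are given as dictionaries mapping ('g'|'h', n, m) to coefficients in nT:

```python
import math

def rms_vector_difference(model_a, model_b, n_max):
    """Root mean square vector field difference over the Earth's
    surface between two internal-field models, using the
    Lowes-Mauersberger form:
    R = sqrt( sum_n (n+1) * sum_m (dg_nm^2 + dh_nm^2) )."""
    total = 0.0
    for n in range(1, n_max + 1):
        for m in range(0, n + 1):
            dg = model_a.get(('g', n, m), 0.0) - model_b.get(('g', n, m), 0.0)
            total += (n + 1) * dg * dg
            if m > 0:  # h_n^0 does not exist
                dh = model_a.get(('h', n, m), 0.0) - model_b.get(('h', n, m), 0.0)
                total += (n + 1) * dh * dh
    return math.sqrt(total)
```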

  14. Competing phases in a model of Pr-based cobaltites

    NASA Astrophysics Data System (ADS)

    Sotnikov, A.; Kuneš, J.

    2017-12-01

    Motivated by the physics of Pr-based cobaltites, we study the effect of an external magnetic field in the hole-doped two-band Hubbard model close to instabilities toward excitonic condensation and ferromagnetic ordering. Using dynamical mean-field theory, we observe a field-driven suppression of the excitonic condensate. The onset of a magnetically ordered phase at fixed chemical potential is accompanied by a sizable change of the electron density. This leads us to predict that the Pr³⁺ abundance increases on the high-field side of the transition.

  15. Experiments in monthly mean simulation of the atmosphere with a coarse-mesh general circulation model

    NASA Technical Reports Server (NTRS)

    Lutz, R. J.; Spar, J.

    1978-01-01

    The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.

  16. Anomalous diffusion in the evolution of soccer championship scores: Real data, mean-field analysis, and an agent-based model

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Vainstein, Mendeli H.; Gonçalves, Sebastián; Paula, Felipe S. F.

    2013-08-01

    Statistics of soccer tournament scores based on the double round robin system of several countries are studied. Exploring the dynamics of team scoring during tournament seasons from recent years, we find evidence of superdiffusion. A mean-field analysis results in a drift velocity equal to that of real data but in a different diffusion coefficient. Along with the analysis of real data we present the results of simulations of soccer tournaments obtained by an agent-based model which successfully describes the final scoring distribution [da Silva et al., Comput. Phys. Commun. 184, 661 (2013)]. This model yields random walks of scores over time with the same anomalous diffusion as observed in real data.
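A diffusion exponent of the kind reported here can be estimated by regressing log-variance on log-time across an ensemble of trajectories. The sketch below does this for plain unbiased walkers, which should recover an exponent near 1 (normal diffusion); it only illustrates the measurement, not the superdiffusive soccer data.

```python
import math, random

def diffusion_exponent(n_walkers=2000, n_steps=100, seed=1):
    """Estimate gamma in sigma^2(t) ~ t^gamma for an ensemble of
    unbiased +/-1 random walkers, via a least-squares fit of
    log(variance) against log(time)."""
    rng = random.Random(seed)
    pos = [0] * n_walkers
    xs, ys = [], []
    for t in range(1, n_steps + 1):
        for i in range(n_walkers):
            pos[i] += rng.choice((-1, 1))
        mean = sum(pos) / n_walkers
        var = sum((p - mean) ** 2 for p in pos) / n_walkers
        if t >= 10:                      # skip the early-time transient
            xs.append(math.log(t))
            ys.append(math.log(var))
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope
```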

  17. Mean-field model of the von Kármán sodium dynamo experiment using soft iron impellers.

    PubMed

    Nore, C; Léorat, J; Guermond, J-L; Giesecke, A

    2015-01-01

    It has been observed that dynamo action occurs in the von-Kármán-Sodium (VKS) experiment only when the rotating disks and the blades are made of soft iron. The purpose of this paper is to numerically investigate the role of soft iron in the VKS dynamo scenario. This is done by using a mean-field model based on an axisymmetric mean flow, a localized permeability distribution, and a localized α effect modeling the action of the small velocity scales between the blades. The action of the rotating blades is modeled by an axisymmetric effective permeability field. Key properties of the flow giving to the numerical magnetic field a geometric structure similar to that observed experimentally are identified. Depending on the permeability of the disks and the effective permeability of the blades, the dynamo that is obtained is either oscillatory or stationary. Our numerical results confirm the leading role played by the ferromagnetic impellers. A scenario for the VKS dynamo is proposed.

  18. The solar dynamo and prediction of sunspot cycles

    NASA Astrophysics Data System (ADS)

    Dikpati, Mausumi

    2012-07-01

    Much progress has been made in understanding the solar dynamo since Parker first developed the concepts of dynamo waves and magnetic buoyancy around 1955, and the German school first formulated the solar dynamo using the mean-field formalism. The essential ingredients of these mean-field dynamos are turbulent magnetic diffusivity, a source of lifting of flux, or 'alpha-effect', and differential rotation. With the advent of helioseismic and other observations at the Sun's photosphere and interior, as well as theoretical understanding of solar interior dynamics, solar dynamo models have evolved both in the realm of mean-field and beyond mean-field models. After briefly discussing the status of these models, I will focus on a class of mean-field model, called flux-transport dynamos, which include meridional circulation as an essential additional ingredient. Flux-transport dynamos have been successful in simulating many global solar cycle features, and have reached the stage that they can be used for making solar cycle predictions. Meridional circulation works in these models like a conveyor belt, carrying a memory of the magnetic fields from 5 to 20 years in the past. The lower the magnetic diffusivity, the longer the model's memory. In the terrestrial system, the great ocean conveyor belt in oceanic models and the Hadley, polar, and Ferrel circulation cells in the troposphere carry signatures of past climatological events and influence the determination of future events. Analogously, the memory provided by the Sun's meridional circulation creates the potential for flux-transport dynamos to predict future solar cycle properties. Various groups in the world have built flux-transport dynamo-based predictive tools, which assimilate ("nudge") the Sun's surface magnetic data and integrate forward in time to forecast the amplitude of the currently ascending cycle 24. Because of different initial conditions and different choices of unknown model ingredients, predictions vary, as they do for the cycle 24 forecasts. We all await the peak of cycle 24. I will close by discussing the prospects of improving dynamo-based predictive tools using more sophisticated data-assimilation techniques, such as the Ensemble Kalman Filter method and variational approaches.

  19. Spatially-partitioned many-body vortices

    NASA Astrophysics Data System (ADS)

    Klaiman, S.; Alon, O. E.

    2016-02-01

    A vortex in Bose-Einstein condensates is a localized object which looks much like a tiny tornado storm. It is well described by mean-field theory. In the present work we go beyond the current paradigm and introduce many-body vortices. These are made of spatially-partitioned clouds, carry definite total angular momentum, and are fragmented rather than condensed objects which can only be described beyond mean-field theory. A phase diagram based on a mean-field model assists in predicting the parameters where many-body vortices occur. Implications are briefly discussed.

  20. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields, and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on analytical potentials optimized by targeting the statistical distributions of internal variables through a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to correlations between force-field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are under way.
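Of the two algorithms named above, iterative Boltzmann inversion has a particularly compact update rule, V'(r) = V(r) + kT ln(g(r)/g*(r)), where g is the current and g* the target distribution; a sketch of one update on binned distributions:

```python
import math

def ibi_update(v, g_current, g_target, kT=1.0):
    """One iterative Boltzmann inversion step on tabulated values:
    V'(r) = V(r) + kT * ln(g(r)/g*(r)). Bins where either distribution
    vanishes are left unchanged to avoid log(0)."""
    out = []
    for vi, gc, gt in zip(v, g_current, g_target):
        if gc > 0.0 and gt > 0.0:
            out.append(vi + kT * math.log(gc / gt))
        else:
            out.append(vi)
    return out
```

When the current distribution already matches the target, the logarithm vanishes and the potential is a fixed point of the iteration.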

  1. A stochastically forced time delay solar dynamo model: Self-consistent recovery from a Maunder-like grand minimum necessitates a mean-field alpha effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Soumitra; Nandy, Dibyendu; Passos, Dário

    Fluctuations in the Sun's magnetic activity, including episodes of grand minima such as the Maunder minimum, have important consequences for space and planetary environments. However, the underlying dynamics of such extreme fluctuations remain ill-understood. Here, we use a novel mathematical model based on stochastically forced, non-linear delay differential equations to study solar cycle fluctuations, in which time delays capture the physics of magnetic flux transport between spatially segregated dynamo source regions in the solar interior. Using this model, we explicitly demonstrate that the Babcock-Leighton poloidal field source based on dispersal of tilted bipolar sunspot flux, alone, cannot recover the sunspot cycle from a grand minimum. We find that an additional poloidal field source effective on weak fields—e.g., the mean-field α effect driven by helical turbulence—is necessary for self-consistent recovery of the sunspot cycle from grand minima episodes.
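A toy version of a stochastically forced delay equation can be integrated with a history buffer, as below. The quenched source term S(B) = 2B exp(-B²) and all parameter values are illustrative stand-ins for the paper's coupled delay system, chosen so that the dynamics stay bounded.

```python
import math, random

def delay_dynamo(t_end=200.0, dt=0.1, tau=1.0, delay=5.0,
                 forcing=0.05, seed=2):
    """Euler integration of a toy stochastically forced delay equation,
    dB/dt = -B/tau + S(B(t - delay)) + noise, with the quenched source
    S(B) = 2*B*exp(-B^2). The delayed value is read from a short
    history buffer of length delay/dt."""
    rng = random.Random(seed)
    lag = int(delay / dt)
    hist = [0.5] * (lag + 1)              # constant initial history
    for _ in range(int(t_end / dt)):
        b_lag = hist[-(lag + 1)]          # value from `delay` ago
        source = 2.0 * b_lag * math.exp(-b_lag * b_lag)
        b = (hist[-1] + dt * (-hist[-1] / tau + source)
             + forcing * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        hist.append(b)
        hist = hist[-(lag + 1):]          # keep only what the delay needs
    return hist[-1]
```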

  2. The mean field theory in EM procedures for blind Markov random field image restoration.

    PubMed

    Zhang, J

    1993-01-01

    A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are most visually pleasing.
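The mean-field substitution that makes the EM step tractable can be illustrated on a 1-D Ising-type field, where each site's expectation is updated from its neighbours' current expectations; a sketch with illustrative coupling and data weights, not the paper's coupled MRF.

```python
import math

def mean_field_sweep(m, data, beta=0.8, h=1.0):
    """One synchronous mean-field update for a 1-D Ising-type MRF:
    m_i <- tanh(beta * (m_{i-1} + m_{i+1}) + h * y_i), where y_i is the
    observed (noisy) value at site i and m_i is the current mean-field
    expectation of the hidden spin. Missing neighbours at the chain
    ends contribute zero."""
    n = len(m)
    out = []
    for i in range(n):
        left = m[i - 1] if i > 0 else 0.0
        right = m[i + 1] if i < n - 1 else 0.0
        out.append(math.tanh(beta * (left + right) + h * data[i]))
    return out
```

Iterating the sweep to convergence replaces the expensive expectation over MRF configurations with deterministic site-wise expectations, which is the role the mean-field theory plays inside the EM procedure.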

  3. A numerical model for aggregations formation and magnetic driving of spherical particles based on OpenFOAM®.

    PubMed

    Karvelas, E G; Lampropoulos, N K; Sarris, I E

    2017-04-01

    This work presents a numerical model for the formation of particle aggregations under the influence of a permanent constant magnetic field and their driving process under a gradient magnetic field, suitably created by a Magnetic Resonance Imaging (MRI) device. The model is developed in the OpenFOAM platform and it is successfully compared to the existing experimental and numerical results in terms of aggregate size and their motion in water solutions. Furthermore, several series of simulations are performed for two common types of particles of different diameter in order to verify their aggregation and flow behaviour under various constant and gradient magnetic fields in the usual MRI working range. Moreover, the numerical model is used to measure the mean length of aggregations, the total time needed for them to form, and their mean velocity under different permanent and gradient magnetic fields. The present model is found to predict successfully the size, velocity and distribution of aggregates. In addition, our simulations showed that the mean length of aggregations is proportional to the permanent magnetic field magnitude and particle diameter according to the relation l̄_a = 7.5 B₀ d_i^(3/2). The mean velocity of the aggregations is proportional to the magnetic gradient, according to ū_a = 6.63 G̃ B₀, and seems to reach a steady condition after a certain period of time. The mean time needed for particles to aggregate scales with the permanent magnetic field magnitude as t̄_a ∝ 7 B₀. A numerical model to predict the motion of magnetic particles for medical applications is developed. This model is found suitable to predict the formation of aggregations and their motion under the influence of permanent and gradient magnetic fields, respectively, that are produced by an MRI device. The magnitude of the external constant magnetic field is the most important parameter for the formation and driving of the aggregations.
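    The fitted scaling relations quoted in this abstract are simple enough to evaluate directly. A minimal sketch (not the authors' OpenFOAM model; the B0, d and G values below are assumed for illustration and carry the units of the original fits):

```python
# Direct evaluation of the fitted scaling relations quoted above (a
# sketch, not the authors' OpenFOAM model; B0, d and G are assumed).

def mean_aggregate_length(B0, d):
    """Mean aggregate length, l_a = 7.5 * B0 * d**(3/2)."""
    return 7.5 * B0 * d ** 1.5

def mean_aggregate_velocity(B0, G):
    """Mean aggregate velocity, u_a = 6.63 * G * B0."""
    return 6.63 * G * B0

B0 = 3.0        # permanent field magnitude (e.g. a 3 T scanner)
d = 1e-6        # particle diameter
G = 40e-3       # gradient magnitude (assumed value)
print(mean_aggregate_length(B0, d), mean_aggregate_velocity(B0, G))
```

    Both relations are monotone in B0, consistent with the abstract's conclusion that the permanent field magnitude is the dominant parameter.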

  4. Exact solution of mean-field plus an extended T = 1 nuclear pairing Hamiltonian in the seniority-zero symmetric subspace

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.

    2018-05-01

    An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example of the application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5pg9 shell is estimated in the present model, with a focus on the role of np-pairing correlations.

  5. Residential magnetic fields predicted from wiring configurations: I. Exposure model.

    PubMed

    Bowman, J D; Thomas, D C; Jiang, L; Jiang, F; Peters, J M

    1999-10-01

    A physically based model for residential magnetic fields from electric transmission and distribution wiring was developed to reanalyze the Los Angeles study of childhood leukemia by London et al. For this exposure model, magnetic field measurements were fitted to a function of wire configuration attributes that was derived from a multipole expansion of the Law of Biot and Savart. The model parameters were determined by nonlinear regression techniques, using wiring data, distances, and the geometric mean of the ELF magnetic field magnitude from 24-h bedroom measurements taken at 288 homes during the epidemiologic study. The best fit to the measurement data was obtained with separate models for the two major utilities serving Los Angeles County. This model's predictions produced a correlation of 0.40 with the measured fields, an improvement on the 0.27 correlation obtained with the Wertheimer-Leeper (WL) wire code. For the leukemia risk analysis in a companion paper, the regression model predicts exposures to the 24-h geometric mean of the ELF magnetic fields in Los Angeles homes where only wiring data and distances have been obtained. Since these input parameters for the exposure model usually do not change for many years, the predicted magnetic fields will be stable over long time periods, just like the WL code. If the geometric mean is not the exposure metric associated with cancer, this regression technique could be used to estimate long-term exposures to temporal variability metrics and other characteristics of the ELF magnetic field which may be cancer risk factors.
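    The core of the exposure model, fitting measured fields to a truncated inverse-distance expansion derived from the Biot-Savart law, can be sketched with synthetic data. Everything below (coefficients, distances, noise level, and the use of linear least squares rather than the study's nonlinear regression) is made up for illustration:

```python
import numpy as np

# Illustrative sketch of the multipole-expansion idea: fit field
# magnitude vs. distance to a truncated series of inverse-distance
# terms. Data, coefficients and noise level are synthetic.

rng = np.random.default_rng(0)
r = rng.uniform(5.0, 50.0, size=200)             # distance to wiring (m)
true = 2.0 / r + 30.0 / r**2                     # synthetic field (mG)
b = true + rng.normal(0.0, 0.01, size=r.size)    # noisy "measurements"

A = np.column_stack([1.0 / r, 1.0 / r**2, 1.0 / r**3])   # multipole basis
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

pred = A @ coef
corr = np.corrcoef(pred, b)[0, 1]
print(round(corr, 3))
```

    With real data the fitted correlation is of course far lower (0.40 in the study), since wiring attributes only partly determine the field.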

  6. Variational and perturbative formulations of quantum mechanical/molecular mechanical free energy with mean-field embedding and its analytical gradients.

    PubMed

    Yamamoto, Takeshi

    2008-12-28

    Conventional quantum chemical solvation theories are based on the mean-field embedding approximation; that is, the electronic wavefunction is calculated in the presence of the mean field of the environment. In this paper a direct quantum mechanical/molecular mechanical (QM/MM) analog of such a mean-field theory is formulated based on variational and perturbative frameworks. In the variational framework, an appropriate QM/MM free energy functional is defined and minimized in terms of the trial wavefunction that best approximates the true QM wavefunction in a statistically averaged sense. An analytical free energy gradient is obtained, which takes the form of the gradient of the effective QM energy calculated in the averaged MM potential. In the perturbative framework, the above variational procedure is shown to be equivalent to the first-order expansion of the QM energy (in the exact free energy expression) about the self-consistent reference field. This helps clarify the relation between the variational procedure and the exact QM/MM free energy as well as existing QM/MM theories. Based on this, several ways are discussed for evaluating non-mean-field effects (i.e., statistical fluctuations of the QM wavefunction) that are neglected in the mean-field calculation. As an illustration, the method is applied to an S(N)2 Menshutkin reaction in water, NH3 + CH3Cl → NH3CH3+ + Cl-, for which free energy profiles are obtained at the Hartree-Fock, MP2, B3LYP, and BHHLYP levels by integrating the free energy gradient. Non-mean-field effects are evaluated to be <0.5 kcal/mol using a Gaussian fluctuation model for the environment, which suggests that those effects are rather small for the present reaction in water.

  7. An assessment of the near-surface accuracy of the international geomagnetic reference field 1980 model of the main geomagnetic field

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1985-01-01

    The new International Geomagnetic Reference Field (IGRF) model of the main geomagnetic field for 1980 is based heavily on measurements from the MAGSAT satellite survey. Assessment of the accuracy of the new model, as a description of the main field near the Earth's surface, is important because the accuracy of models derived from satellite data can be adversely affected by the magnetic field of electric currents in the ionosphere and the auroral zones. Until now, statements about its accuracy have been based on the 6 published assessments of the 2 proposed models from which it was derived. However, those assessments were either regional in scope or were based mainly on preliminary or extrapolated data. Here we assess the near-surface accuracy of the new model by comparing it with values for 1980 derived from annual means from 69 magnetic observatories, and by comparing it with WC80, a model derived from near-surface data. The comparison with observatory-derived data shows that the new model describes the field at the 69 observatories about as accurately as would a model derived solely from near-surface data. The comparison with WC80 shows that the 2 models agree closely in their description of D and I near the surface. These comparisons support the proposition that the new IGRF 1980 main-field model is a generally accurate description of the main field near the Earth's surface in 1980. ?? 1985.

  8. On the nature of a supposed water model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckmann, Lotta, E-mail: lotta@fkp.tu-darmstadt.de; Drossel, Barbara

    2014-08-15

    A cell model that has been proposed by Stanley and Franzese in 2002 for modeling water is based on Potts variables that represent the possible orientations of bonds between water molecules. We show that in the liquid phase, where all cells are occupied by a molecule, the Hamiltonian of the cell model can be rewritten as a Hamiltonian of a conventional Potts model, albeit with two types of coupling constants. We argue that such a model, while having a first-order phase transition, cannot display the critical end point that is postulated for the phase transition between a high- and low-density liquid. A closer look at the mean-field calculations that claim to find such an end point in the cell model reveals that the mean-field theory is constructed such that the symmetry constraints on the order parameter are violated. This is equivalent to introducing an external field. The introduction of such a field can be given a physical justification due to the fact that water does not have the type of long-range order occurring in the Potts model.

  9. Continuous Time Finite State Mean Field Games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomes, Diogo A., E-mail: dgomes@math.ist.utl.pt; Mohr, Joana, E-mail: joana.mohr@ufrgs.br; Souza, Rafael Rigao, E-mail: rafars@mat.ufrgs.br

    In this paper we consider symmetric games where a large number of players can be in any one of d states. We derive a limiting mean field model and characterize its main properties. This mean field limit is a system of coupled ordinary differential equations with initial-terminal data. For this mean field problem we prove a trend to equilibrium theorem, that is, convergence, in an appropriate limit, to stationary solutions. Then we study an (N+1)-player problem, which the mean field model attempts to approximate. Our main result is the convergence, as N → ∞, of the mean field model, together with an estimate of the rate of convergence. We end the paper with some further examples of potential mean field games.
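    A toy analogue of finite-state mean-field dynamics (not the paper's game, which has initial-terminal coupling) is a master equation dp/dt = pQ on a few states; integrating it shows the kind of trend to a stationary solution the paper proves for the full game system. The rate matrix below is an arbitrary example:

```python
import numpy as np

# Toy finite-state mean-field dynamics: dp/dt = p Q with a fixed rate
# matrix Q whose rows sum to zero (so total probability is conserved).
# Forward Euler integration; Q's entries are illustrative.

Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.2,  0.6, -0.8]])

p = np.array([1.0, 0.0, 0.0])        # start in state 0
dt = 0.01
for _ in range(5000):                # integrate to t = 50
    p = p + dt * (p @ Q)

print(p.round(3))                    # approximately stationary: p @ Q ~ 0
```

    The distribution relaxes to the stationary solution of Q, mirroring the trend-to-equilibrium behavior described in the abstract.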

  10. Modeling of coherent ultrafast magneto-optical experiments: Light-induced molecular mean-field model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinschberger, Y.; Hervieux, P.-A.

    2015-12-28

    We present calculations which aim to describe coherent ultrafast magneto-optical effects observed in time-resolved pump-probe experiments. Our approach is based on a nonlinear semi-classical Drude-Voigt model and is used to interpret experiments performed on a ferromagnetic nickel thin film. Within this framework, a phenomenological light-induced coherent molecular mean field depending on the polarizations of the pump and probe pulses is proposed, whose microscopic origin is related to a spin-orbit coupling involving the electron spins of the material sample and the electric field of the laser pulses. Theoretical predictions are compared to available experimental data. The model successfully reproduces the observed experimental trends and gives meaningful insight into the understanding of magneto-optical rotation behavior in the ultrafast regime. Theoretical predictions for further experimental studies are also proposed.

  11. Unbiased mean direction of paleomagnetic data and better estimate of paleolatitude

    NASA Astrophysics Data System (ADS)

    Hatakeyama, T.; Shibuya, H.

    2010-12-01

    In paleomagnetism, when we obtain only paleodirection data without paleointensities, we calculate Fisher-mean directions (I, D) and Fisher-mean VGP positions as the description of the mean field. However, Kono (1997) and Hatakeyama and Kono (2001) showed that these averaged directions are not unbiased estimates of the mean directions derived from the time-averaged field (TAF). Hatakeyama and Kono (2002) calculated TAF and paleosecular variation (PSV) models for the past 5 My, taking into account the biases that arise from averaging nonlinear functions, such as the summation of unit vectors in Fisher statistics. Here we show a zonal TAF model based on the Hatakeyama and Kono TAF model. Moreover, we introduce the bias angles in the mean direction due to the PSV, and a method for determining from paleodirections the true paleolatitudes that represent the TAF. This method will aid tectonic studies, especially the accurate estimation of paleolatitude in mid-latitude regions.

  12. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half-width at half the height of the maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in the tuning response due to nonlinear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.

  13. The Thermodynamic Limit in Mean Field Spin Glass Models

    NASA Astrophysics Data System (ADS)

    Guerra, Francesco; Toninelli, Fabio Lucio

    We present a simple strategy in order to show the existence and uniqueness of the infinite volume limit of thermodynamic quantities, for a large class of mean field disordered models, as for example the Sherrington-Kirkpatrick model, and the Derrida p-spin model. The main argument is based on a smooth interpolation between a large system, made of N spin sites, and two similar but independent subsystems, made of N1 and N2 sites, respectively, with N1+N2=N. The quenched average of the free energy turns out to be subadditive with respect to the size of the system. This gives immediately convergence of the free energy per site, in the infinite volume limit. Moreover, a simple argument, based on concentration of measure, gives the almost sure convergence, with respect to the external noise. Similar results hold also for the ground state energy per site.
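    The last step of the argument is classical: once the quenched free energy (an extensive quantity, written here as $F_N$; notation mine, not the paper's) is subadditive in the system size, Fekete's lemma gives convergence of the free energy per site:

```latex
F_{N_1+N_2} \;\le\; F_{N_1} + F_{N_2}
\quad\Longrightarrow\quad
\lim_{N\to\infty}\frac{F_N}{N} \;=\; \inf_{N\ge 1}\frac{F_N}{N}.
```

    The interpolation between the N-site system and the two independent subsystems is exactly what establishes the left-hand inequality.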

  14. Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means

    Treesearch

    W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren

    1997-01-01

    Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...

  15. The International Geomagnetic Reference Field, 2005

    USGS Publications Warehouse

    Rukstales, Kenneth S.; Love, Jeffrey J.

    2007-01-01

    This is a set of five world charts showing the declination, inclination, horizontal intensity, vertical component, and total intensity of the Earth's magnetic field at mean sea level at the beginning of 2005. The charts are based on the International Geomagnetic Reference Field (IGRF) main model for 2005 and secular change model for 2005-2010. The IGRF is referenced to the World Geodetic System 1984 ellipsoid. Additional information about the USGS geomagnetism program is available at: http://geomag.usgs.gov/

  16. Vertical structure of mean cross-shore currents across a barred surf zone

    USGS Publications Warehouse

    Haines, John W.; Sallenger, Asbury H.

    1994-01-01

    Mean cross-shore currents observed across a barred surf zone are compared to model predictions. The model is based on a simplified momentum balance with a turbulent boundary layer at the bed. Turbulent exchange is parameterized by an eddy viscosity formulation, with the eddy viscosity Aυ independent of time and the vertical coordinate. Mean currents result from gradients due to wave breaking and shoaling, and the presence of a mean setup of the free surface. Descriptions of the wave field are provided by the wave transformation model of Thornton and Guza [1983]. The wave transformation model adequately reproduces the observed wave heights across the surf zone. The mean current model successfully reproduces the observed cross-shore flows. Both observations and predictions show predominantly offshore flow with onshore flow restricted to a relatively thin surface layer. Successful application of the mean flow model requires an eddy viscosity which varies horizontally across the surf zone. Attempts are made to parameterize this variation with some success. The data does not discriminate between alternative parameterizations proposed. The overall variability in eddy viscosity suggested by the model fitting should be resolvable by field measurements of the turbulent stresses. Consistent shortcomings of the parameterizations, and the overall modeling effort, suggest avenues for further development and data collection.

  17. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems, where it provided results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
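    The mean-field idea can be illustrated on a problem much smaller than motion estimation: a binary Ising-type MRF smoothness prior with self-consistent ("soft decision") updates, here denoising a 1-D piecewise-constant signal. The signal, the coupling beta and the data weight h are all illustrative, not from the paper:

```python
import numpy as np

# Mean-field update for a binary +/-1 Ising-type MRF prior: instead of
# the hard assignments of ICM, each site keeps a continuous mean m_i,
# updated self-consistently from its neighbours' means and the data.

rng = np.random.default_rng(0)
truth = np.repeat([1.0, -1.0, 1.0], 30)           # clean +/-1 signal
obs = truth + rng.normal(0.0, 0.8, truth.size)    # noisy observation

beta, h = 2.0, 1.5                                # coupling, data weight
m = np.tanh(h * obs)                              # initial mean field
for _ in range(50):
    nbr = np.roll(m, 1) + np.roll(m, -1)          # neighbour mean fields
    m = np.tanh(beta * nbr + h * obs)             # mean-field equations

est = np.sign(m)
acc = (est == truth).mean()
print(round(acc, 2))
```

    The soft updates converge in a few dozen sweeps, which is the speed advantage over SA that the abstract describes.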

  18. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The ongoing controversy over regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, and that pseudorandom-number-generated models bias the observed nearest-neighbor statistics towards regularity. Interpretation of these nearest-neighbor statistics is discussed for many cases of superposed clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.

  19. Quantitative kinetic theory of active matter

    NASA Astrophysics Data System (ADS)

    Ihle, Thomas; Chou, Yen-Liang

    2014-03-01

    Models of self-driven agents similar to the Vicsek model [Phys. Rev. Lett. 75 (1995) 1226] are studied by means of kinetic theory. In these models, particles try to align their travel directions with the average direction of their neighbours. At strong alignment a globally ordered state of collective motion forms. An Enskog-like kinetic theory is derived from the exact Chapman-Kolmogorov equation in phase space using Boltzmann's mean-field approximation of molecular chaos. The kinetic equation is solved numerically by a nonlocal Lattice-Boltzmann-like algorithm. Steep soliton-like waves are observed that lead to an abrupt jump of the global order parameter if the noise level is changed. The shape of the wave is shown to follow a novel scaling law and to quantitatively agree within 3 % with agent-based simulations at large particle speeds. This provides a mean-field mechanism to change the second-order character of the flocking transition to first order. Diagrammatic techniques are used to investigate small particle speeds, where the mean-field assumption of Molecular Chaos is invalid and where correlation effects need to be included.
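    A minimal agent-based update of the Vicsek type, the kind of simulation the kinetic theory is compared against, can be sketched in a few lines (parameters and implementation details here are illustrative, not the paper's):

```python
import numpy as np

# Minimal Vicsek-style simulation: each particle adopts the mean
# heading of its neighbours within radius R, plus angular noise.

def vicsek_step(pos, theta, v=0.3, R=1.0, eta=0.05, L=10.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                     # periodic boundaries
    neigh = (d ** 2).sum(axis=-1) <= R ** 2      # neighbour mask (incl. self)
    # mean neighbour direction via summed unit vectors
    mean_dir = np.arctan2(neigh @ np.sin(theta), neigh @ np.cos(theta))
    theta = mean_dir + eta * rng.uniform(-np.pi, np.pi, size=theta.size)
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    return pos, theta

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(100, 2))
theta = rng.uniform(-np.pi, np.pi, size=100)
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, rng=rng)

# global order parameter: ~1 for collective motion, ~0 for disorder
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(round(phi, 2))
```

    At low noise the order parameter approaches 1; increasing eta destroys the ordered state, which is the transition whose character (first vs. second order) the paper analyzes.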

  20. Asymptotically inspired moment-closure approximation for adaptive networks

    NASA Astrophysics Data System (ADS)

    Shkarayev, Maxim; Shaw, Leah

    2012-02-01

    Adaptive social networks, in which nodes and network structure co-evolve, are often described using a mean-field system of equations for the density of node and link types. These equations constitute an open system due to dependence on higher order topological structures. We propose a moment-closure approximation based on the analytical description of the system in an asymptotic regime. We apply the proposed approach to two examples of adaptive networks: recruitment to a cause model and epidemic spread model. We show a good agreement between the improved mean-field prediction and simulations of the full network system.
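    The flavor of moment closure can be shown on the simplest possible case (not the authors' adaptive-network model): an SIS epidemic whose node-level equation depends on the S-I link density, closed here with the naive approximation [SI] ≈ k·[S]·[I]:

```python
# Naive moment closure for an SIS epidemic on a network of mean degree
# k (illustrative; the paper derives a more careful, asymptotically
# motivated closure for adaptive networks).

def sis_mean_field(i0, beta, gamma, k, dt=0.01, steps=10000):
    i = i0
    for _ in range(steps):
        si = k * (1.0 - i) * i            # closure: [SI] ~ k * [S] * [I]
        i += dt * (beta * si - gamma * i)
    return i

# endemic steady state of the closed system is 1 - gamma/(beta*k)
i_inf = sis_mean_field(0.01, beta=0.3, gamma=0.5, k=4)
print(round(i_inf, 3))
```

    The closure turns an open hierarchy (nodes need links, links need triples, ...) into a self-contained ODE; the paper's contribution is choosing the closure from the system's asymptotic behavior rather than by this naive factorization.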

  1. Asymptotically inspired moment-closure approximation for adaptive networks

    NASA Astrophysics Data System (ADS)

    Shkarayev, Maxim

    2013-03-01

    Dynamics of adaptive social networks, in which nodes and network structure co-evolve, are often described using a mean-field system of equations for the density of node and link types. These equations constitute an open system due to dependence on higher order topological structures. We propose a systematic approach to moment closure approximation based on the analytical description of the system in an asymptotic regime. We apply the proposed approach to two examples of adaptive networks: recruitment to a cause model and adaptive epidemic model. We show a good agreement between the mean-field prediction and simulations of the full network system.

  2. Asymptotically inspired moment-closure approximation for adaptive networks

    NASA Astrophysics Data System (ADS)

    Shkarayev, Maxim S.; Shaw, Leah B.

    2013-11-01

    Adaptive social networks, in which nodes and network structure coevolve, are often described using a mean-field system of equations for the density of node and link types. These equations constitute an open system due to dependence on higher-order topological structures. We propose a new approach to moment closure based on the analytical description of the system in an asymptotic regime. We apply the proposed approach to two examples of adaptive networks: recruitment to a cause model and adaptive epidemic model. We show a good agreement between the improved mean-field prediction and simulations of the full network system.

  3. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    PubMed

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting the lung fields in a chest radiograph is essential for automatically analyzing the image. We present an unsupervised method based on a multiresolution fractal feature vector, which characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour, and the final contour is obtained with deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of the lung fields, the cardiothoracic ratio (CTR), a simple index for evaluating cardiac hypertrophy, can be measured. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.
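    The CTR mentioned above is a simple quotient of widths that lung-field segmentation makes measurable. A sketch (not from the paper; the 0.5 cut-off is the commonly used screening threshold, not a value taken from this study, and the widths are invented):

```python
# Cardiothoracic ratio from segmentation-derived widths (illustrative;
# the 0.5 screening threshold is conventional, not from this paper).

def cardiothoracic_ratio(cardiac_width, thoracic_width):
    if thoracic_width <= 0:
        raise ValueError("thoracic width must be positive")
    return cardiac_width / thoracic_width

def suggests_cardiomegaly(ctr, threshold=0.5):
    return ctr > threshold

ctr = cardiothoracic_ratio(14.2, 30.5)   # widths in cm (invented values)
print(round(ctr, 3), suggests_cardiomegaly(ctr))
```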

  4. Field Line Random Walk in Isotropic Magnetic Turbulence up to Infinite Kubo Number

    NASA Astrophysics Data System (ADS)

    Sonsrettee, W.; Wongpan, P.; Ruffolo, D. J.; Matthaeus, W. H.; Chuychai, P.; Rowlands, G.

    2013-12-01

    In astrophysical plasmas, the magnetic field line random walk (FLRW) plays a key role in the transport of energetic particles. In the present work, we consider isotropic magnetic turbulence, which is a reasonable model for interstellar space. Theoretical conceptions of the FLRW have been strongly influenced by studies of the limit of weak fluctuations (or a strong mean field) (e.g., Isichenko 1991a, b). In this case, the behavior of the FLRW can be characterized by the Kubo number R = (b/B0)(l∥/l⊥), where l∥ and l⊥ are the turbulence coherence scales parallel and perpendicular to the mean field, respectively, and b is the root-mean-squared fluctuation field. In the 2D limit (R ≫ 1), there has been an apparent conflict between concepts of Bohm diffusion, which is based on Corrsin's independence hypothesis, and percolative diffusion. Here we have used three non-perturbative analytic techniques based on Corrsin's independence hypothesis for B0 = 0 (R = ∞): diffusive decorrelation (DD), random ballistic decorrelation (RBD), and a general ordinary differential equation (ODE), and compared them with direct computer simulations. All the analytical models and computer simulations agree that isotropic turbulence for R = ∞ has a field line diffusion coefficient consistent with Bohm diffusion. Partially supported by the Thailand Research Fund, NASA, and NSF.
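    The Kubo number from the abstract is a one-line computation; a small helper (the numerical values are illustrative):

```python
import math

# Kubo number R = (b/B0) * (l_par/l_perp); the isotropic case studied
# in the paper corresponds to B0 = 0, i.e. R = infinity.

def kubo_number(b, B0, l_par, l_perp):
    if B0 == 0.0:
        return math.inf
    return (b / B0) * (l_par / l_perp)

print(kubo_number(1.0, 0.5, 2.0, 1.0))
print(kubo_number(1.0, 0.0, 1.0, 1.0))
```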

  5. [Some comments on ecological field].

    PubMed

    Wang, D

    2000-06-01

    Based on data from plant ecological field studies, this paper reviews the concept of the ecological field, field eigenfunctions, graphs of ecological fields, and the application of ecological field theory to explaining plant interactions. It is suggested that the basic character of the ecological field is material, and that at the current level of research it is not certain whether the ecological field is a specific field distinct from general physical fields. The author comments on the formulas and parameter estimation of the basic field function and ecological potential models of the ecological field; both models have their own characteristics and advantages under specific conditions. The author emphasizes that the ecological field carries even greater significance as an ecological methodology, and that applying ecological field theory to describe the types and processes of plant interactions has three characteristics: it is quantitative, synthetic, and intuitive. Field graphing may provide a new approach to ecological studies; in particular, applying ecological field theory may give an appropriate quantitative explanation for the dynamic processes of plant populations (coexistence and interference competition).

  6. A compound reconstructed prediction model for nonstationary climate processes

    NASA Astrophysics Data System (ADS)

    Wang, Geli; Yang, Peicai

    2005-07-01

    Based on the idea of climate hierarchy and the theory of state space reconstruction, a local approximation prediction model with a compound structure is built for predicting nonstationary climate processes. By means of this model and data sets consisting of the north Indian Ocean sea-surface temperature, the Asian zonal circulation index, and the monthly mean precipitation anomaly from 37 observation stations in the Inner Mongolia area of China (IMC), a regional prediction experiment for the winter precipitation of IMC is carried out. Using the same-sign ratio R between the predicted and actual fields to measure prediction accuracy, an average R of 63% is obtained over 10 prediction samples.
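    State-space reconstruction with local approximation, the general technique underlying the compound model, can be sketched on a synthetic series: delay-embed the series, find the nearest neighbours of the current state, and average their successors. The series, embedding parameters and neighbour count below are all assumptions for illustration:

```python
import numpy as np

# Local-approximation prediction in a reconstructed (delay-embedded)
# state space. The synthetic sine series stands in for an observed
# climate series; dim, tau and k are illustrative choices.

def embed(x, dim, tau):
    """Delay-embed a 1-D series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, tau=1, k=5):
    """Zeroth-order local approximation: average the successors of the
    k nearest neighbours of the current embedded state."""
    E = embed(x, dim, tau)
    query, history = E[-1], E[:-1]
    dist = np.linalg.norm(history - query, axis=1)
    idx = np.argsort(dist)[:k]
    return x[idx + (dim - 1) * tau + 1].mean()   # neighbours' successors

t = np.arange(500)
x = np.sin(0.2 * t)                              # synthetic "climate" series
print(round(local_predict(x), 3))
```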

  7. Analytical and numerical solutions of the potential and electric field generated by different electrode arrays in a tumor tissue under electrotherapy.

    PubMed

    Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M

    2011-09-24

    Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D model), generated by electrode arrays with the shapes of different conic sections (ellipse, parabola and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by circular electrode arrays and by arrays shaped as different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode arrays with different conic sections.
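    A simplified numerical counterpart (finite differences rather than the paper's finite elements, and point electrodes rather than conic-section arrays): relax Laplace's equation on a grid with two fixed electrodes inside a grounded box, then take E = -∇V:

```python
import numpy as np

# Finite-difference Jacobi relaxation of Laplace's equation with two
# fixed point electrodes (+1 V, -1 V) inside a grounded box. A crude
# stand-in for the paper's finite-element solution.

n = 41
V = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
V[10, 10], V[30, 30] = 1.0, -1.0       # point electrodes
fixed[10, 10] = fixed[30, 30] = True
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # grounded

for _ in range(5000):                  # Jacobi iterations
    Vn = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0)
                 + np.roll(V, 1, 1) + np.roll(V, -1, 1))
    V = np.where(fixed, V, Vn)         # keep electrode/boundary values

Ey, Ex = np.gradient(-V)               # electric field E = -grad V
print(round(float(np.abs(V).max()), 2))
```

    Replacing the two fixed points with curves of fixed potential (an ellipse, parabola or hyperbola of grid nodes) reproduces the geometries the paper compares.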

  8. Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis

    NASA Astrophysics Data System (ADS)

    Springer, Everett P.; Cundy, Terrance W.

    1987-02-01

    Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
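    The Green-Ampt equation used in the study has a standard implicit form for cumulative infiltration; a hedged sketch (the parameter values are illustrative, not the fields' measured or texture-predicted ones):

```python
import math

# Green-Ampt cumulative infiltration F(t) satisfies the implicit
# relation F - S*ln(1 + F/S) = Ks*t, with S = psi * dtheta (wetting-
# front suction head times moisture deficit), solved here by
# fixed-point iteration. Parameter values are illustrative.

def green_ampt_F(t, Ks, psi, dtheta, iters=100):
    S = psi * dtheta
    F = Ks * t if Ks * t > 0 else 1e-9
    for _ in range(iters):
        F = Ks * t + S * math.log(1.0 + F / S)
    return F

def infiltration_rate(F, Ks, psi, dtheta):
    """Infiltration capacity f = Ks * (1 + S/F), decreasing in F."""
    return Ks * (1.0 + psi * dtheta / F)

F1 = green_ampt_F(1.0, Ks=1.0, psi=11.0, dtheta=0.3)
print(round(F1, 2), round(infiltration_rate(F1, 1.0, 11.0, 0.3), 2))
```

    The study's comparison amounts to feeding this kind of calculation with Ks, psi and dtheta taken either from field measurements or from texture-based predictors and contrasting the simulated runoff.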

  9. Investigation of head group behaviour of lamellar liquid crystals

    NASA Astrophysics Data System (ADS)

    Delikatny, E. J.; Burnell, E. E.

    A mean-field equilibrium statistical mechanical model, based on the Samulski inertial frame model, was developed to simulate experimental dipolar and quadrupolar NMR couplings of isotopically substituted potassium palmitates. An isolated four-spin system was synthesized (2,2,3,3-H4-palmitic acid-d27) and, in conjunction with data presented in a previous paper on perdeuterated and carbon-13 labelled soaps, the head group behaviour of the molecule was investigated. Two interactions were considered in the modelling procedure: a mean-field steric interaction characterized by a constraining cylinder, and a head group interaction characterized by a mass on the end of a rod of variable length. The rod lies along the first C-C bond direction and accounts for the interaction between the polar head group and water via its effect on the moment of inertia of the molecule. In potassium palmitate, mean-field steric repulsive forces remain constant over the entire temperature range studied. In contrast, electrostatic interactions between the polar head group and water, approximately constant at higher temperatures, increase dramatically as the phase transition is approached. This evidence supports a previously proposed model of lipid-water interaction.

  10. Mean Energy Density of Photogenerated Magnetic Fields Throughout the EoR

    NASA Astrophysics Data System (ADS)

    Durrive, Jean-Baptiste; Tashiro, Hiroyuki; Langer, Mathieu; Sugiyama, Naoshi

    2018-05-01

    Magnetic fields seem to be present at all scales and epochs in our Universe, but their origin at large scales remains an important open question of cosmology. In this work we focus on the generation of magnetic fields in the intergalactic medium by photoionization due to the first galaxies, throughout the Epoch of Reionization. Building on previous studies which considered only isolated sources, we develop an analytical model to estimate the mean magnetic energy density accumulated in the Universe by this process. In our model, without considering any amplification process, the Universe is globally magnetized by this mechanism to the order of at least several 10^-18 G during the Epoch of Reionization (i.e. a few 10^-20 G comoving).
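The proper-to-comoving conversion quoted at the end is just the (1+z)^2 dilution of a flux-frozen magnetic field; at z ≈ 9 the factor is 100, consistent with several 10^-18 G proper corresponding to a few 10^-20 G comoving. A one-line sketch (the redshift value is illustrative):

```python
def comoving_B(B_proper, z):
    # Flux-frozen magnetic fields dilute as (1 + z)^2 with cosmic expansion,
    # so the comoving amplitude is the proper one divided by (1 + z)^2.
    return B_proper / (1.0 + z) ** 2

print(comoving_B(3e-18, 9.0))  # ≈ 3e-20 G comoving for 3e-18 G proper at z = 9
```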

  11. The standard mean-field treatment of inter-particle attraction in classical DFT is better than one might expect

    NASA Astrophysics Data System (ADS)

    Archer, Andrew J.; Chacko, Blesson; Evans, Robert

    2017-07-01

    In classical density functional theory (DFT), the part of the Helmholtz free energy functional arising from attractive inter-particle interactions is often treated in a mean-field or van der Waals approximation. On the face of it, this is a somewhat crude treatment as the resulting functional generates the simple random phase approximation (RPA) for the bulk fluid pair direct correlation function. We explain why using standard mean-field DFT to describe inhomogeneous fluid structure and thermodynamics is more accurate than one might expect based on this observation. By considering the pair correlation function g(x) and structure factor S(k) of a one-dimensional model fluid, for which exact results are available, we show that the mean-field DFT, employed within the test-particle procedure, yields results much superior to those from the RPA closure of the bulk Ornstein-Zernike equation. We argue that one should not judge the quality of a DFT based solely on the approximation it generates for the bulk pair direct correlation function.

  12. Nonlinear wave chaos: statistics of second harmonic fields.

    PubMed

    Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M

    2017-10-01

    Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.

  13. Pressure prediction in non-uniaxial settings based on field data and geomechanical modeling: a well example

    NASA Astrophysics Data System (ADS)

    Lockhart, L. P.; Flemings, P. B.; Nikolinakou, M. A.; Heidari, M.

    2016-12-01

    We apply a new pressure prediction approach that couples sonic velocity data, geomechanical modeling, and a critical state soil model to estimate pore pressure from wellbore data adjacent to a salt body where the stress field is complex. Specifically, we study pressure and stress in front of the Mad Dog salt body, in the Gulf of Mexico. Because of the loading from the salt, stresses are not uniaxial; the horizontal stress is elevated, leading to higher mean and shear stresses. For the Mad Dog field, we develop a relationship between velocity and equivalent effective stress, in order to account for the effect of both mean and shear stress on pore pressure. We obtain this equivalent effective stress using a geomechanical model of the Mad Dog field. We show that the new approach improves pressure prediction in areas near salt where mean and shear stress differ from those at the control well. Our methodology and results show that pore pressure is driven by a combination of mean stress and shear stress, and highlight the importance of shear-induced pore pressures. Furthermore, the impact of our study extends beyond salt bodies; the methodology and gained insights are applicable to geological environments around the world with a complex geologic history, where the stress state is not uniaxial (fault zones, anticlines, synclines, continental margins, etc.).

  14. Dynamic phase transitions and dynamic phase diagrams of the Blume-Emery-Griffiths model in an oscillating field: the effective-field theory based on the Glauber-type stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-06-01

    Using the effective-field theory based on the Glauber-type stochastic dynamics (DEFT), we investigate dynamic phase transitions and dynamic phase diagrams of the Blume-Emery-Griffiths model under an oscillating magnetic field. We present the dynamic phase diagrams in the (T/J, h0/J), (D/J, T/J) and (K/J, T/J) planes, where T, h0, D, K and z denote the temperature, magnetic field amplitude, crystal-field interaction, biquadratic interaction and the coordination number, respectively. The dynamic phase diagrams exhibit several ordered phases, coexistence phase regions and special critical points, as well as re-entrant behavior depending on the interaction parameters. We also compare and discuss the results with those obtained for the same system within the mean-field theory based on the Glauber-type stochastic dynamics, and find that some of the dynamic first-order phase lines and special dynamic critical points disappear in the DEFT calculation.

  15. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei

    2017-08-01

    A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions, which best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, grid-point version 2 (FGOALS-g2), compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and is comparable to or even better than the initialization strategies used for other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.

  16. Fluctuation-controlled front propagation

    NASA Astrophysics Data System (ADS)

    Ridgway, Douglas Thacher

    1997-09-01

    A number of fundamental pattern-forming systems are controlled by fluctuations at the front. These problems involve the interaction of an infinite dimensional probability distribution with a strongly nonlinear, spatially extended pattern-forming system. We have examined fluctuation-controlled growth in the context of the specific problems of diffusion-limited growth and biological evolution. Mean field theory of diffusion-limited growth exhibits a finite time singularity. Near the leading edge of a diffusion-limited front, this leads to acceleration and blowup. This may be resolved, in an ad hoc manner, by introducing a cutoff below which growth is weakened or eliminated (8). This model, referred to as the BLT model, captures a number of qualitative features of global pattern formation in diffusion-limited aggregation: contours of the mean field match contours of averaged particle density in simulation, and the modified mean field theory can form dendritic features not possible in the naive mean field theory. The morphology transition between dendritic and non-dendritic global patterns requires that BLT fronts have a Mullins-Sekerka instability of the wavefront shape, in order to form concave patterns. We compute the stability of BLT fronts numerically, and compare the results to fronts without a cutoff. A significant morphological instability of the BLT fronts exists, with a dominant wavenumber on the scale of the front width. For standard mean field fronts, no instability is found. The naive and ad hoc mean field theories are continuum-deterministic models intended to capture the behavior of a discrete stochastic system. A transformation which maps discrete systems into a continuum model with a singular multiplicative noise is known, however numerical simulations of the continuum stochastic system often give mean field behavior instead of the critical behavior of the discrete system. 
We have found a new interpretation of the singular noise, based on maintaining the symmetry of the absorbing state, but it is unsuccessful at capturing the behavior of diffusion-limited growth. In an effort to find a simpler model system, we turned to modelling fitness increases in evolution. The work was motivated by an experiment on vesicular stomatitis virus, a short (~9600 bp) single-stranded RNA virus. A highly bottlenecked viral population increases in fitness rapidly until a certain point, after which the fitness increases at a slower rate. This is well modeled by a constant population reproducing and mutating on a smooth fitness landscape. Mean field theory of this system displays the same infinite propagation velocity blowup as mean field diffusion-limited aggregation. However, we have been able to make progress on a number of fronts. One is solving systems of moment equations, where a hierarchy of moments is truncated arbitrarily at some level. Good results for front propagation velocity are found with just two moments, corresponding to inclusion of the basic finite population clustering effect ignored by mean field theory. In addition, for small mutation rates, most of the population will be entirely on a single site or two adjacent sites, and the density of these cases can be described and solved. (Abstract shortened by UMI.)

  17. Mean field dynamics of some open quantum systems

    NASA Astrophysics Data System (ADS)

    Merkli, Marco; Rafiyi, Alireza

    2018-04-01

    We consider a large number N of quantum particles coupled via a mean field interaction to another quantum system (reservoir). Our main result is an expansion for the averages of observables, both of the particles and of the reservoir, in inverse powers of √N. The analysis is based directly on the Dyson series expansion of the propagator. We analyse the dynamics, in the limit N → ∞, of observables of a fixed number n of particles, of extensive particle observables and their fluctuations, as well as of reservoir observables. We illustrate our results on the infinite mode Dicke model and on various energy-conserving models.

  18. Mean field dynamics of some open quantum systems.

    PubMed

    Merkli, Marco; Rafiyi, Alireza

    2018-04-01

    We consider a large number N of quantum particles coupled via a mean field interaction to another quantum system (reservoir). Our main result is an expansion for the averages of observables, both of the particles and of the reservoir, in inverse powers of √N. The analysis is based directly on the Dyson series expansion of the propagator. We analyse the dynamics, in the limit N → ∞, of observables of a fixed number n of particles, of extensive particle observables and their fluctuations, as well as of reservoir observables. We illustrate our results on the infinite mode Dicke model and on various energy-conserving models.

  19. Shape coexistence and β decay of 70Br within a beyond-mean-field approach

    NASA Astrophysics Data System (ADS)

    Petrovici, A.

    2018-02-01

    β-decay properties of the odd-odd N = Z 70Br nucleus are self-consistently explored within the beyond-mean-field complex Excited Vampir variational model, using an effective interaction obtained from a nuclear matter G-matrix based on the charge-dependent Bonn CD potential and an adequate model space. Results on the superallowed Fermi β decay of the ground state and the Gamow-Teller decay of the 9+ isomer in 70Br, correlated with shape coexistence and mixing effects on the structure and electromagnetic properties of the populated states in the daughter nucleus 70Se, are presented and compared with available data.

  20. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS-modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of the Reynolds stress discrepancies with respect to mean flow features are trained on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. The improved Reynolds stresses are then propagated to the velocity field via the RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A

  1. On the characteristics of a residual external signal seen in coefficients of main geomagnetic field models

    NASA Astrophysics Data System (ADS)

    Stefan, Cristiana; Demetrescu, Crisan; Dobrica, Venera

    2014-05-01

    Several recently developed main geomagnetic field models, based on both observatory and satellite data (e.g. IGRF, CHAOS, GRIMM, COV-OBS), as well as the historical model gufm1, have been designed to describe only the internal part of the field, except for COV-OBS, which also accounts for the external dipole. In this paper we analyze data and coefficients from two main field models, namely gufm1 (Jackson et al., 2000) and COV-OBS (Gillet et al., 2013), by means of low pass filters with a cutoff period of 11 years, to bring out a residual signal with seemingly external sources, superimposed on the internal part of the field. The characteristics of the residual signal in the dipole and non-dipole coefficients are discussed.

  2. Mean-Lagrangian formalism and covariance of fluid turbulence.

    PubMed

    Ariki, Taketo

    2017-05-01

    A mean-field-based Lagrangian framework is developed for fluid turbulence theory, which enables physically objective discussions, especially of the history effect. The mean flow serves as a purely geometrical object of Lie group theory, providing useful operations to measure the objective rate and history integration of a general tensor field. The proposed framework is applied, on the one hand, to a one-point closure model, yielding an objective expression of the turbulence viscoelastic effect. Application to two-point closure, on the other hand, is also discussed, where a natural extension of the known Lagrangian correlation is discovered on the basis of an extended covariance group.

  3. Dynamical systems approach to the study of a sociophysics agent-based model

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2011-03-01

    The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow: η̇σ = ∑σ'=1…M ησ ησ' (ησ ρσ'→σ − ησ' ρσ→σ'), where ησ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρσ→σ' is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabási-Albert networks [4]) and they displayed the same attractors as the mean-field version. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well-known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can also be used in other models, such as the voter model.
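A forward-Euler integration of a mean-field flow of the form quoted in the abstract (written here for a symmetric two-opinion case; the rates, step size, and initial condition are illustrative) shows the expected attractor behaviour: the initial majority opinion takes over while the total density is conserved:

```python
def sznajd_flow(eta, rho):
    """Right-hand side of the mean-field flow quoted in the abstract:
    d(eta_s)/dt = sum_t eta_s*eta_t*(eta_s*rho[t][s] - eta_t*rho[s][t])."""
    M = len(eta)
    return [sum(eta[s] * eta[t] * (eta[s] * rho[t][s] - eta[t] * rho[s][t])
                for t in range(M))
            for s in range(M)]

def integrate(eta, rho, dt=0.01, steps=2000):
    # Plain forward-Euler time stepping of the flow
    for _ in range(steps):
        d = sznajd_flow(eta, rho)
        eta = [x + dt * dx for x, dx in zip(eta, d)]
    return eta

rho = [[0.0, 1.0], [1.0, 0.0]]    # symmetric convincing probabilities
eta = integrate([0.6, 0.4], rho)  # slight initial majority for opinion 0
print(eta)                        # opinion 0 approaches consensus (eta[0] -> 1)
```

Note that the flow is antisymmetric under exchange of the gain and loss terms, so the total proportion ∑σ ησ is conserved along trajectories.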

  4. Dynamical systems approach to the study of a sociophysics agent-based model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timpanaro, Andre M.; Prado, Carmen P. C.

    2011-03-24

    The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow: η̇σ = ∑σ'=1…M ησ ησ' (ησ ρσ'→σ − ησ' ρσ→σ'), where ησ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρσ→σ' is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabási-Albert networks [4]) and they displayed the same attractors as the mean-field version. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well-known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can also be used in other models, such as the voter model.

  5. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    NASA Technical Reports Server (NTRS)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size, using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in the MME uncertainty range by 27% increased MME prediction skill by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and to allow smaller, more practical MMEs to be used effectively.

  6. Mean-field analysis of an inductive reasoning game: Application to influenza vaccination

    NASA Astrophysics Data System (ADS)

    Breban, Romulus; Vardavas, Raffaele; Blower, Sally

    2007-09-01

    Recently we have introduced an inductive reasoning game of voluntary yearly vaccination to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. Here, we analyze our model to describe the dynamics of the collective yearly vaccination uptake. We discuss the mean-field equations of our model and first order effects of fluctuations. We explain why our model predicts that severe epidemics are periodically expected even without the introduction of pandemic strains. We find that fluctuations in the collective yearly vaccination uptake induce severe epidemics with an expected periodicity that depends on the number of independent decision makers in the population. The mean-field dynamics also reveal that there are conditions for which the dynamics become robust to the fluctuations. However, the transition between fluctuation-sensitive and fluctuation-robust dynamics occurs for biologically implausible parameters. We also analyze our model when incentive-based vaccination programs are offered. When a family-based incentive is offered, the expected periodicity of severe epidemics is increased. This results from the fact that the number of independent decision makers is reduced, increasing the effect of the fluctuations. However, incentives based on the number of years of prepayment of vaccination may yield fluctuation-robust dynamics where severe epidemics are prevented. In this case, depending on prepayment, the transition between fluctuation-sensitive and fluctuation-robust dynamics may occur for biologically plausible parameters. Our analysis provides a practical method for identifying how many years of free vaccination should be provided in order to successfully ameliorate influenza epidemics.

  7. Mean-field analysis of an inductive reasoning game: application to influenza vaccination.

    PubMed

    Breban, Romulus; Vardavas, Raffaele; Blower, Sally

    2007-09-01

    Recently we have introduced an inductive reasoning game of voluntary yearly vaccination to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. Here, we analyze our model to describe the dynamics of the collective yearly vaccination uptake. We discuss the mean-field equations of our model and first order effects of fluctuations. We explain why our model predicts that severe epidemics are periodically expected even without the introduction of pandemic strains. We find that fluctuations in the collective yearly vaccination uptake induce severe epidemics with an expected periodicity that depends on the number of independent decision makers in the population. The mean-field dynamics also reveal that there are conditions for which the dynamics become robust to the fluctuations. However, the transition between fluctuation-sensitive and fluctuation-robust dynamics occurs for biologically implausible parameters. We also analyze our model when incentive-based vaccination programs are offered. When a family-based incentive is offered, the expected periodicity of severe epidemics is increased. This results from the fact that the number of independent decision makers is reduced, increasing the effect of the fluctuations. However, incentives based on the number of years of prepayment of vaccination may yield fluctuation-robust dynamics where severe epidemics are prevented. In this case, depending on prepayment, the transition between fluctuation-sensitive and fluctuation-robust dynamics may occur for biologically plausible parameters. Our analysis provides a practical method for identifying how many years of free vaccination should be provided in order to successfully ameliorate influenza epidemics.

  8. Stochastic foundations in nonlinear density-regulation growth

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Assaf, Michael; Horsthemke, Werner; Campos, Daniel

    2017-08-01

    In this work we construct individual-based models that give rise to the generalized logistic model at the mean-field deterministic level and that allow us to interpret the parameters of these models in terms of individual interactions. We also study the effect of internal fluctuations on the long-time dynamics for the different models that have been widely used in the literature, such as the theta-logistic and Savageau models. In particular, we determine the conditions for population extinction and calculate the mean time to extinction. If the population does not become extinct, we obtain analytical expressions for the population abundance distribution. Our theoretical results are based on WKB theory and the probability generating function formalism and are verified by numerical simulations.
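At the mean-field level, the theta-logistic model mentioned above is the deterministic ODE dN/dt = r N [1 − (N/K)^θ], which relaxes to the carrying capacity K; the stochastic individual-based versions studied in the abstract fluctuate around (and can escape from) this fixed point. A minimal Euler-integration sketch of the deterministic limit (parameter values are illustrative):

```python
def theta_logistic_step(N, r, K, theta, dt):
    # Deterministic (mean-field) theta-logistic growth:
    # dN/dt = r * N * (1 - (N/K)**theta)
    return N + dt * r * N * (1.0 - (N / K) ** theta)

N = 10.0
for _ in range(20000):  # integrate to t = 200 with dt = 0.01
    N = theta_logistic_step(N, r=0.5, K=1000.0, theta=2.0, dt=0.01)
print(round(N))  # 1000: the population settles at the carrying capacity K
```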

  9. Analytical and numerical solutions of the potential and electric field generated by different electrode arrays in a tumor tissue under electrotherapy

    PubMed Central

    2011-01-01

    Background Electrotherapy is a relatively well-established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Methods Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained using the finite element method in two dimensions. Results Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with shapes of circle and different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. Conclusion The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different shapes of conic sections by means of the use of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode array with different conic sections. PMID:21943385

  10. Analytical model for out-of-field dose in photon craniospinal irradiation

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Jalbout, Wassim; Howell, Rebecca M.; Khater, Nabil; Geara, Fady; Homann, Kenneth; Newhauser, Wayne D.

    2013-11-01

    The prediction of late effects after radiotherapy in organs outside a treatment field requires accurate estimations of out-of-field dose. However, out-of-field dose is not calculated accurately by commercial treatment planning systems (TPSs). The purpose of this study was to develop and test an analytical model for out-of-field dose during craniospinal irradiation (CSI) from photon beams produced by a linear accelerator. In two separate evaluations of the model, we measured absorbed dose for a 6 MV CSI using thermoluminescent dosimeters placed throughout an anthropomorphic phantom and fit the measured data to an analytical model of absorbed dose versus distance outside of the composite field edge. These measurements were performed in two separate clinics, the University of Texas MD Anderson Cancer Center (MD Anderson) and the American University of Beirut Medical Center (AUBMC), using the same phantom but different linear accelerators and TPSs commissioned for patient treatments. The measurement at AUBMC also included in-field locations. Measured dose values were compared to those predicted by the TPSs, and parameters were fit to the model in each setting. In each clinic, 95% of the measured data were contained within a factor of 0.2 and one root mean square deviation of the model-based values. The root mean square deviations of the mathematical model were 0.91 cGy Gy^-1 and 1.67 cGy Gy^-1 in the MD Anderson and AUBMC clinics, respectively. The TPS predictions agreed poorly with measurements in regions of sharp dose gradient, e.g., near the field edge. At distances greater than 1 cm from the field edge, the TPS underestimated the dose by an average of 14% ± 24% and 44% ± 19% in the MD Anderson and AUBMC clinics, respectively. The in-field dose values measured at AUBMC matched the dose values calculated by the TPS to within 2%. Dose algorithms in TPSs systematically underestimated the actual out-of-field dose. 
Therefore, it is important to use an improved model based on measurements when estimating out-of-field dose. The model proposed in this study performed well for this purpose in two clinics and may be applicable in other clinics with similar treatment field configurations.

  11. Update to the conventional model for rotational deformation

    NASA Astrophysics Data System (ADS)

    Ries, J. C.; Desai, S.

    2017-12-01

    Rotational deformation (also called the "pole tide") is the deformation resulting from the centrifugal effect of polar motion on the solid earth and ocean, which manifests itself as variations in ocean heights, in the gravity field and in surface displacements. The model for rotational deformation assumes a primarily elastic response of the Earth to the centrifugal potential at the annual and Chandler periods and applies body tide Love numbers to the polar motion after removing the mean pole. The original model was conceived when the mean pole was moving (more or less) linearly, largely in response to glacial isostatic adjustment. In light of the significant variations in the mean pole due to present-day ice mass losses, an 'appropriately' filtered mean pole was adopted for the conventional model, so that the longer period variations in the mean pole were not included in the rotational deformation model. However, the elastic Love numbers should be applicable to longer period variations as well, and only the secular (i.e. linear) mean pole should be removed. A model for the linear mean pole is recommended based on a linear fit to the IERS C01 time series spanning 1900 to 2015: in milliarcsec, Xp = 55.0 + 1.677*dt and Yp = 320.5 + 3.460*dt, where dt = (t - t0), t0 = 2000.0, assuming a year of 365.25 days. The consequences of an updated model for rotational deformation for site motion and the gravity field are illustrated.
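The recommended linear mean pole can be evaluated directly from the fit quoted above; a small sketch (the evaluation epoch is arbitrary):

```python
def linear_mean_pole(t):
    """Linear mean pole (milliarcsec) from the fit quoted in the abstract:
    Xp = 55.0 + 1.677*dt, Yp = 320.5 + 3.460*dt, with dt = t - 2000.0 in years."""
    dt = t - 2000.0
    return 55.0 + 1.677 * dt, 320.5 + 3.460 * dt

xp, yp = linear_mean_pole(2015.0)
print(round(xp, 3), round(yp, 3))  # 80.155 372.4
```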

  12. Multiagent model and mean field theory of complex auction dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA), a recently emerged class of online auction games. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
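The winner-determination rule implied by the name (the lowest bid placed by exactly one bidder wins) can be sketched directly; this is the auction mechanic itself, not the paper's agent model:

```python
from collections import Counter

def luba_winner(bids):
    """Lowest unique bid auction: the winner is the lowest bid placed by
    exactly one bidder; returns None when no bid is unique."""
    counts = Counter(bids)
    unique = sorted(b for b, c in counts.items() if c == 1)
    return unique[0] if unique else None
```

For example, with bids [1, 1, 2, 3, 3, 5] the bid 2 wins: 1 and 3 are duplicated, and 2 is the lowest of the unique bids.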

  13. Stochastic lattice model of synaptic membrane protein domains.

    PubMed

    Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A

    2017-05-01

    Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.

  14. Irrigation water demand: A meta-analysis of price elasticities

    NASA Astrophysics Data System (ADS)

    Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.

    2006-01-01

    Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
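Read at the mean estimate, a price elasticity of 0.48 (in magnitude) implies that, to first order, a 10% price increase reduces water use by about 4.8%. A minimal sketch of that arithmetic (the linear point-elasticity approximation is ours, not the paper's):

```python
def approx_use_after_price_change(use0, pct_price_change, elasticity=0.48):
    """First-order (point-elasticity) approximation: a fractional price change
    scales water use by (1 - elasticity * pct_price_change). Demand falls as
    price rises; elasticity is taken in magnitude, as reported above."""
    return use0 * (1.0 - elasticity * pct_price_change)
```

The approximation is only valid for small price changes; the meta-analysis notes long-run responses are likely more elastic than this mean figure.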

  15. Fluid Flow and Solidification Under Combined Action of Magnetic Fields and Microgravity

    NASA Technical Reports Server (NTRS)

    Li, B. Q.; Shu, Y.; Li, K.; deGroh, H. C.

    2002-01-01

    Mathematical models, both 2-D and 3-D, are developed to represent g-jitter induced fluid flows and their effects on solidification under the combined action of magnetic fields and microgravity. The numerical model development is based on the finite element solution of governing equations describing the transient g-jitter driven fluid flows, heat transfer and solutal transport during crystal growth with and without an applied magnetic field in space vehicles. To validate the model predictions, a ground-based g-jitter simulator is developed using oscillating wall temperatures, and the time-oscillating fluid flows are measured with a laser PIV system. The measurements compare well with numerical results obtained from the models. Results show that the combined action of magnetic damping and microgravity can be an effective means to control the melt flow and solutal transport in space single crystal growth systems.

  16. Nuclear Deformation at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Alhassid, Y.; Gilbreth, C. N.; Bertsch, G. F.

    2014-12-01

    Deformation, a key concept in our understanding of heavy nuclei, is based on a mean-field description that breaks the rotational invariance of the nuclear many-body Hamiltonian. We present a method to analyze nuclear deformations at finite temperature in a framework that preserves rotational invariance. The auxiliary-field Monte Carlo method is used to generate a statistical ensemble and calculate the probability distribution associated with the quadrupole operator. Applying the technique to nuclei in the rare-earth region, we identify model-independent signatures of deformation and find that deformation effects persist to temperatures higher than the spherical-to-deformed shape phase-transition temperature of mean-field theory.

  17. New models of Saturn's magnetic field using Pioneer 11 Vector Helium Magnetometer data

    NASA Technical Reports Server (NTRS)

    Davis, L., Jr.; Smith, E. J.

    1986-01-01

    In a reanalysis of the Vector Helium Magnetometer data taken by Pioneer 11 during its Saturn encounter in 1979, using improvements in the data set and in the procedures, studies are made of a variety of models. The best is the P(11)84 model, an axisymmetric spherical harmonic model of Saturn's magnetic field within 8 Saturn radii of the planet. The appropriately weighted root mean square average of the difference between the observed and the modeled field is 1.13 percent. For the Voyager-based Z3 model of Connerney, Acuna, and Ness, this average difference from the Pioneer 11 data is 1.81 percent. The external source currents in the magnetopause, tail, bow shock, and perhaps ring currents vary with time and can only be crudely modeled. An algebraic formula is derived for calculating the L shells on which energetic charged particles drift in axisymmetric fields.
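The paper's algebraic L-shell formula for general axisymmetric fields is not reproduced in the abstract; as a grounded illustration, the leading (dipole) special case is the familiar L = r / cos²(magnetic latitude):

```python
import math

def dipole_L(r, mag_lat_rad):
    """Dipole L-shell, the leading axisymmetric term: L = r / cos^2(lambda),
    with r in planetary radii and lambda the magnetic latitude in radians.
    The paper's formula generalizes this to higher zonal harmonics."""
    return r / math.cos(mag_lat_rad) ** 2
```

On the magnetic equator (lambda = 0) the L value equals the radial distance; at higher latitudes the same field line maps to a larger L.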

  18. Topological phases in the Haldane model with spin–spin on-site interactions

    NASA Astrophysics Data System (ADS)

    Rubio-García, A.; García-Ripoll, J. J.

    2018-04-01

    Ultracold atom experiments allow the study of topological insulators, such as the non-interacting Haldane model. In this work we study a generalization of the Haldane model with spin–spin on-site interactions that can be implemented on such experiments. We focus on measuring the winding number, a topological invariant, of the ground state, which we compute using a mean-field calculation that effectively captures long-range correlations and a matrix product state computation in a lattice with 64 sites. Our main result is that we show how the topological phases present in the non-interacting model survive until the interactions are comparable to the kinetic energy. We also demonstrate the accuracy of our mean-field approach in efficiently capturing long-range correlations. Based on state-of-the-art ultracold atom experiments, we propose an implementation of our model that can give information about the topological phases.
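The specific construction of the invariant for the interacting Haldane model is not spelled out in the abstract; as a generic illustration, the winding number of a closed curve h(k) around the origin can be computed by accumulating unwrapped phase increments (function and discretization are ours):

```python
import cmath
import math

def winding_number(h, n=400):
    """Winding of the closed curve h(k), k in [0, 2*pi], around the origin:
    sum small phase increments (unwrapped to (-pi, pi]) and divide by 2*pi."""
    total = 0.0
    prev = cmath.phase(h(0.0))
    for i in range(1, n + 1):
        ph = cmath.phase(h(2 * math.pi * i / n))
        d = ph - prev
        # unwrap jumps across the branch cut
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ph
    return round(total / (2 * math.pi))
```

The discretization n must be fine enough that the curve never advances by more than pi between samples, otherwise the unwrapping is ambiguous.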

  19. Analytical mesoscale modeling of aeolian sand transport

    NASA Astrophysics Data System (ADS)

    Lämmel, Marc; Kroy, Klaus

    2017-11-01

    The mesoscale structure of aeolian sand transport determines a variety of natural phenomena studied in planetary and Earth science. We analyze it theoretically beyond the mean-field level, based on the grain-scale transport kinetics and splash statistics. A coarse-grained analytical model is proposed and verified by numerical simulations resolving individual grain trajectories. The predicted height-resolved sand flux and other important characteristics of the aeolian transport layer agree remarkably well with a comprehensive compilation of field and wind-tunnel data, suggesting that the model robustly captures the essential mesoscale physics. By comparing the predicted saturation length with field data for the minimum sand-dune size, we elucidate the importance of intermittent turbulent wind fluctuations for field measurements and reconcile conflicting previous models for this most enigmatic emergent aeolian scale.

  20. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (lnK) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter.
The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and calculated an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.
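The coupling of attachment to local hydraulic properties rests on the standard colloid filtration theory relation (not spelled out in the abstract); a minimal sketch of the first-order attachment rate it provides:

```python
def cft_attachment_rate(porosity, grain_diam, alpha, eta0, velocity):
    """Standard colloid filtration theory first-order attachment rate:
        k_att = 3 * (1 - n) / (2 * d_c) * alpha * eta0 * v,
    with porosity n, collector (grain) diameter d_c, collision (sticking)
    efficiency alpha, single-collector efficiency eta0, and seepage velocity v."""
    return 3.0 * (1.0 - porosity) / (2.0 * grain_diam) * alpha * eta0 * velocity
```

The Tufenkji-Elimelech and Rajagopalan-Tien correlations compared in the study are alternative estimators of eta0 in this expression; the velocity dependence is what ties attachment to the stochastic conductivity field.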

  1. Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.

    2011-01-01

    Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, the analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and including the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
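One of the point-wise metrics named above, the root-mean-square difference between a modeled and an observed time series, can be stated in a few lines (the function name and plain-list interface are ours):

```python
import math

def rmsd(pred, obs):
    """Root-mean-square difference between modeled and observed series of
    equal length: sqrt(mean of squared point-wise differences)."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
```

Because RMSD penalizes amplitude errors at every sample, a model can rank well here yet poorly on, say, an event-detection or utility metric, which is exactly the ranking sensitivity the challenge reports.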

  2. Structure of neutron star crusts from new Skyrme effective interactions constrained by chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Lim, Yeunhwan; Holt, Jeremy W.

    2017-06-01

    We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly-magic nuclei. Nuclear pasta phases are studied using both the liquid drop model as well as the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We employ also the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.

  3. Team Learning to Narrow the Gap between Healthcare Knowledge and Practice

    ERIC Educational Resources Information Center

    Anand, Tejwansh S.

    2014-01-01

    This study explored team-based learning in teams of healthcare professionals working on making meaning of evidence-based clinical guidelines in their field to apply them within their practice setting. The research-based team learning models posited by Kasl, Marsick, and Dechant (1997) and Edmondson, Dillon, and Roloff (2007) were used as the…

  4. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    PubMed

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
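A minimal sketch of one ingredient of the scheme, the local-patch intensity-mean ratio that supplies the samples for the thin-plate-spline bias estimate (patch extraction and the spline fit are omitted, and the function name is ours):

```python
def local_bias_ratio(subject_patch, template_patch):
    """Bias-field sample at one patch location: ratio of the local mean
    intensity of the subject image to that of the deformed template."""
    ms = sum(subject_patch) / len(subject_patch)
    mt = sum(template_patch) / len(template_patch)
    return ms / mt
```

In the full algorithm these per-patch ratios are interpolated by a smooth thin-plate spline to form the bias field, the subject intensities are divided by it, and the template-to-image B-spline deformation is then re-estimated.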

  5. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    PubMed Central

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  6. Two Populations Mean-Field Monomer-Dimer Model

    NASA Astrophysics Data System (ADS)

    Alberici, Diego; Mingione, Emanuele

    2018-04-01

    A two-population mean-field monomer-dimer model including both hard-core and attractive interactions between dimers is considered. The pressure density in the thermodynamic limit is proved to satisfy a variational principle. A detailed analysis is made in the limit where one population is much smaller than the other, and a ferromagnetic mean-field phase transition is found.

  7. The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary

    2004-01-01

    Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold-Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT) based on the Quasi-Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow pit measurements, although different averages were used for the lower frequencies vs. the highest one, due to the different penetration depths and to the stratigraphy of the snowpack. Mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to satisfactorily reproduce the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The values of the best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of Small, Medium and Large grain sizes measured at the LSOS site.

  8. The Mediterranean surface wave climate inferred from future scenario simulations

    NASA Astrophysics Data System (ADS)

    Lionello, P.; Cogo, S.; Galati, M. B.; Sanna, A.

    2008-09-01

    This study is based on 30-year long simulations of the wind-wave field in the Mediterranean Sea carried out with the WAM model. Wave fields have been computed for the 2071-2100 period of the A2 and B2 emission scenarios and for the 1961-1990 period of the present climate (REF). The wave model has been forced by the wind field computed by a regional climate model with 50 km resolution. The mean SWH (Significant Wave Height) field over a large fraction of the Mediterranean Sea is lower for the A2 scenario than for the present climate during winter, spring and autumn. During summer the A2 mean SWH field is also lower everywhere, except for two areas, between Greece and Northern Africa and between Spain and Algeria, where it is significantly higher. All these changes are similar, though smaller and less significant, in the B2 scenario, except during winter in the north-western Mediterranean Sea, when the B2 mean SWH field is higher than in the REF simulation. Extreme SWH values are also smaller in future scenarios than in the present climate, and this change is larger for the A2 than for the B2 scenario. The only exception is the presence of higher SWH extremes in the central Mediterranean during summer for the A2 scenario. In general, changes of SWH, wind speed and atmospheric circulation are consistent, and results show milder marine storms in future scenarios than in the present climate.

  9. Mean state densities, temperatures and winds during the MAC/SINE and MAC/EPSILON campaigns

    NASA Technical Reports Server (NTRS)

    Luebken, F.-J.; Von Zahn, U.; Manson, A.; Meek, C.; Hoppe, U.-P.; Schmidlin, F. J.

    1990-01-01

    Two field campaigns were conducted, primarily in northern Norway, in the summer and late autumn of 1987; these yielded a total of 41 in situ temperature profiles and 67 in situ wind profiles. Simultaneously, ground-based measurements were conducted of OH temperatures and sodium lidar temperatures for 85 and 104 hours, respectively. The summer campaign's mean temperature profile exhibited major deviations from the CIRA (1986) reference atmosphere; the differences between this model and the observations are less pronounced in the autumn. Both the summer and autumn mean wind profiles were in general agreement with the CIRA model.

  10. The Brownian mean field model

    NASA Astrophysics Data System (ADS)

    Chavanis, Pierre-Henri

    2014-05-01

    We discuss the dynamics and thermodynamics of the Brownian mean field (BMF) model, which is a system of N Brownian particles moving on a circle and interacting via a cosine potential. It can be viewed as the canonical version of the Hamiltonian mean field (HMF) model. The BMF model displays a second order phase transition from a homogeneous phase to an inhomogeneous phase below a critical temperature Tc = 1/2. We first complete the description of this model in the mean field approximation valid for N → +∞. In the strong friction limit, the evolution of the density towards the mean field Boltzmann distribution is governed by the mean field Smoluchowski equation. For T < Tc, this equation describes a process of self-organization from a non-magnetized (homogeneous) phase to a magnetized (inhomogeneous) phase. We obtain an analytical expression for the temporal evolution of the magnetization close to Tc. Then, we take fluctuations (finite N effects) into account. The evolution of the density is governed by the stochastic Smoluchowski equation. From this equation, we derive a stochastic equation for the magnetization and study its properties both in the homogeneous and inhomogeneous phases. We show that the fluctuations diverge at the critical point so that the mean field approximation ceases to be valid. Actually, the limits N → +∞ and T → Tc do not commute. The validity of the mean field approximation requires N(T - Tc) → +∞, so that N must be larger and larger as T approaches Tc. We show that the direction of the magnetization changes rapidly close to Tc while its amplitude takes a long time to relax. We also indicate that, for systems with long-range interactions, the lifetime of metastable states scales as e^N except close to a critical point.
The BMF model shares many analogies with other systems of Brownian particles with long-range interactions such as self-gravitating Brownian particles, the Keller-Segel model describing the chemotaxis of bacterial populations, the Kuramoto model describing the collective synchronization of coupled oscillators, the Desai-Zwanzig model, and the models describing the collective motion of social organisms such as bird flocks or fish schools.
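A minimal Euler-Maruyama sketch of the model's overdamped dynamics, assuming unit coupling and friction (which the abstract does not fix): each rotator feels the mean-field force from the cosine interaction plus thermal noise of strength sqrt(2T):

```python
import math
import random

def bmf_step(theta, T, dt):
    """One Euler-Maruyama step of overdamped BMF dynamics (unit coupling and
    friction assumed). With mx, my the components of the magnetization, the
    mean-field force on rotator i is my*cos(theta_i) - mx*sin(theta_i)."""
    N = len(theta)
    mx = sum(math.cos(t) for t in theta) / N
    my = sum(math.sin(t) for t in theta) / N
    return [t + dt * (my * math.cos(t) - mx * math.sin(t))
            + math.sqrt(2.0 * T * dt) * random.gauss(0.0, 1.0)
            for t in theta]
```

A fully aligned (magnetized) configuration is a fixed point of the deterministic part; iterating this step at T below and above 1/2 and tracking |mx + i*my| illustrates the transition described above.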

  11. Unravelling the Gordian knot! Key processes impacting overwintering larval survival and growth: A North Sea herring case study

    NASA Astrophysics Data System (ADS)

    Hufnagl, Marc; Peck, Myron A.; Nash, Richard D. M.; Dickey-Collas, Mark

    2015-11-01

    Unraveling the key processes affecting marine fish recruitment will ultimately require a combination of field, laboratory and modelling studies. We combined analyses of long-term (30-year) field data on larval fish abundance, distribution and length, and biophysical model simulations of different levels of complexity, to identify processes impacting the survival and growth of autumn- and winter-spawned Atlantic herring (Clupea harengus) larvae. Field survey data revealed interannual changes in the intensity of utilization of the five major spawning grounds (Orkney/Shetland, Buchan, Banks north, Banks south, and Downs) as well as spatio-temporal variability in the length and abundance of overwintered larvae. The mean length of larvae captured in post-winter surveys was negatively correlated to the proportion of larvae from the southern-most (Downs) winter-spawning component. Furthermore, the mean length of larvae originating from all spawning components has decreased since 1990, suggesting ecosystem-wide changes impacting larval growth potential, most likely due to changes in prey fields. A simple biophysical model assuming temperature-dependent growth and constant mortality underestimated larval growth rates, suggesting that larval mortality rates steeply declined with increasing size and/or age during winter, as no match with field data could be obtained. In contrast, better agreement was found between observed and modelled post-winter abundance for larvae originating from four spawning components when a more complex, physiologically based foraging and growth model was employed using a suite of potential prey-field and size-based mortality scenarios. Nonetheless, agreement between field and model-derived estimates was poor for larvae originating from the winter-spawned Downs component.
In North Sea herring, the dominant processes impacting larval growth and survival appear to have shifted in time and space highlighting how environmental forcing, ecosystem state and other factors can form a Gordian knot of marine fish recruitment processes. We highlight gaps in process knowledge and recommend specific field, laboratory and modelling studies which, in our opinion, are most likely to unravel the dominant processes and advance predictive capacity of the environmental regulation of recruitment in autumn and winter-spawned fishes in temperate areas such as herring in the North Sea.

  12. Information driving force and its application in agent-based modeling

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2018-04-01

    Exploring the scientific impact of online big-data has attracted much attention of researchers from different fields in recent years. Complex financial systems are typical open systems profoundly influenced by the external information. Based on the large-scale data in the public media and stock markets, we first define an information driving force, and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric in the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Especially, all the key parameters are determined from the empirical analysis rather than from statistical fitting of the simulation results. With our model, both the stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.

  13. The noisy voter model on complex networks.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-20

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing one to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
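One common formulation of the noisy voter update (the abstract does not fix the exact rates, so the rule below is a representative sketch): a randomly chosen node adopts a random state with probability a, the noise, and otherwise copies a uniformly chosen neighbor:

```python
import random

def noisy_voter_step(state, neighbors, a):
    """One asynchronous update of a noisy voter model (representative rule):
    a random node adopts a random state with probability a (the noise),
    and otherwise imitates a uniformly chosen neighbor."""
    i = random.randrange(len(state))
    if random.random() < a:
        state[i] = random.choice((0, 1))
    else:
        state[i] = state[random.choice(neighbors[i])]
    return state
```

At a = 0 this reduces to the ordinary voter model, whose absorbing consensus states the noise term destroys; the network enters through the neighbor lists, which is where the degree heterogeneity studied above acts.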

  14. Kuramoto model of coupled oscillators with positive and negative coupling parameters: an example of conformist and contrarian oscillators.

    PubMed

    Hong, Hyunsuk; Strogatz, Steven H

    2011-02-04

    We consider a generalization of the Kuramoto model in which the oscillators are coupled to the mean field with random signs. Oscillators with positive coupling are "conformists"; they are attracted to the mean field and tend to synchronize with it. Oscillators with negative coupling are "contrarians"; they are repelled by the mean field and prefer a phase diametrically opposed to it. The model is simple and exactly solvable, yet some of its behavior is surprising. Along with the stationary states one might have expected (a desynchronized state, and a partially-synchronized state, with conformists and contrarians locked in antiphase), it also displays a traveling wave, in which the mean field oscillates at a frequency different from the population's mean natural frequency.
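In mean-field form the model reads dθi/dt = ωi + Ki R sin(ψ − θi), where R e^{iψ} is the complex order parameter and the couplings Ki carry random signs; a short Euler-step sketch (discretization ours):

```python
import cmath
import math

def kuramoto_step(theta, omega, K, dt):
    """Euler step for oscillators coupled to the mean field with signed
    couplings K[i] (> 0 conformist, < 0 contrarian):
        dtheta_i/dt = omega_i + K_i * R * sin(psi - theta_i),
    where R * exp(1j * psi) is the complex order parameter."""
    z = sum(cmath.exp(1j * t) for t in theta) / len(theta)
    R, psi = abs(z), cmath.phase(z)
    return [t + dt * (w + k * R * math.sin(psi - t))
            for t, w, k in zip(theta, omega, K)]
```

Conformists (Ki > 0) drift toward the mean-field phase ψ, contrarians (Ki < 0) away from it; iterating this step exhibits the antiphase-locked and traveling-wave states described above.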

  15. Polarizable Force Field for DNA Based on the Classical Drude Oscillator: I. Refinement Using Quantum Mechanical Base Stacking and Conformational Energetics.

    PubMed

    Lemkul, Justin A; MacKerell, Alexander D

    2017-05-09

    Empirical force fields seek to relate the configuration of a set of atoms to its energy, thus yielding the forces governing its dynamics, using classical physics rather than more expensive quantum mechanical calculations that are computationally intractable for large systems. Most force fields used to simulate biomolecular systems use fixed atomic partial charges, neglecting the influence of electronic polarization, instead making use of a mean-field approximation that may not be transferable across environments. Recent hardware and software developments make polarizable simulations feasible, and to this end, polarizable force fields represent the next generation of molecular dynamics simulation technology. In this work, we describe the refinement of a polarizable force field for DNA based on the classical Drude oscillator model by targeting quantum mechanical interaction energies and conformational energy profiles of model compounds necessary to build a complete DNA force field. The parametrization strategy employed in the present work seeks to correct weak base stacking in A- and B-DNA and the unwinding of Z-DNA observed in the previous version of the force field, called Drude-2013. Refinement of base nonbonded terms and reparametrization of dihedral terms in the glycosidic linkage, deoxyribofuranose rings, and important backbone torsions resulted in improved agreement with quantum mechanical potential energy surfaces. Notably, we expand on previous efforts by explicitly including Z-DNA conformational energetics in the refinement.
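In the classical Drude oscillator model named in the title, electronic polarization is represented by an auxiliary charged particle bound to its parent atom by a harmonic spring, giving an isotropic polarizability α = q_D²/k (function name ours):

```python
def drude_polarizability(q_drude, k_spring):
    """Classical Drude oscillator: an auxiliary charge q_drude on a harmonic
    spring of force constant k_spring yields polarizability alpha = q_D^2 / k."""
    return q_drude ** 2 / k_spring
```

In practice either q_D or k is held fixed and the other is fit so that each atom reproduces its target polarizability, after which the induced-dipole response emerges from the Drude particle displacing in the local electric field.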

  16. Magnetic helicity of the global field in solar cycles 23 and 24

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipin, V. V.; Pevtsov, A. A.

    2014-07-01

    For the first time we reconstruct the magnetic helicity density of the global axisymmetric field of the Sun using the method proposed by Brandenburg et al. and Pipin et al. To determine the components of the vector potential, we apply a gauge which is typically employed in mean-field dynamo models. This allows for a direct comparison of the reconstructed helicity with the predictions from the mean-field dynamo models. We apply this method to two different data sets: the synoptic maps of the line-of-sight magnetic field from the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO) and vector magnetic field measurements from the Vector Spectromagnetograph (VSM) on the Synoptic Optical Long-term Investigations of the Sun (SOLIS) system. Based on the analysis of the MDI/SOHO data, we find that in solar cycle 23 the global magnetic field had positive (negative) magnetic helicity in the northern (southern) hemisphere. This hemispheric sign asymmetry is opposite to the helicity of the solar active regions, but it is in agreement with the predictions of mean-field dynamo models. The data also suggest that the hemispheric helicity rule may have reversed its sign during the early and late phases of cycle 23. Furthermore, the data indicate an imbalance in magnetic helicity between the northern and southern hemispheres. This imbalance seems to correlate with the total level of activity in each hemisphere in cycle 23. The magnetic helicity for the rising phase of cycle 24 is derived from SOLIS/VSM data, and qualitatively its latitudinal pattern is similar to the pattern derived from SOHO/MDI data for cycle 23.

  17. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, Brian D.; Brothers, Laura L.; Barnhardt, Walter A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine (USA) pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio for pockmarks field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as similar concave features, such as impact craters, dolines, or salt pools.

  18. A modified acceleration-based monthly gravity field solution from GRACE data

    NASA Astrophysics Data System (ADS)

    Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze; Ju, Xiaolei

    2015-08-01

    This paper describes an alternative acceleration approach for determining GRACE monthly gravity field models. The main differences compared to the traditional acceleration approach can be summarized as: (1) The position errors of GRACE orbits in the functional model are taken into account; (2) The range ambiguity is eliminated via the difference of the range measurements and (3) The mean acceleration equation is formed based on Cowell integration. Using this developed approach, a new time-series of GRACE monthly solution spanning the period January 2003 to December 2010, called Tongji_Acc RL01, has been derived. The annual signals from the Tongji_Acc RL01 time-series agree well with those from the GLDAS model. The performance of Tongji_Acc RL01 shows that this new model is comparable with the RL05 models released by CSR and JPL as well as with the RL05a model released by GFZ.

  19. Linear Quadratic Mean Field Type Control and Mean Field Games with Common Noise, with Application to Production of an Exhaustible Resource

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graber, P. Jameson, E-mail: jameson-graber@baylor.edu

    We study a general linear quadratic mean field type control problem and connect it to mean field games of a similar type. The solution is given both in terms of a forward/backward system of stochastic differential equations and by a pair of Riccati equations. In certain cases, the solution to the mean field type control is also the equilibrium strategy for a class of mean field games. We use this fact to study an economic model of production of exhaustible resources.
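As a generic illustration of the structure described here (not necessarily the paper's exact formulation, and with hypothetical coefficient names A, B, Q, R), a linear quadratic mean field type control problem and its Riccati-pair solution typically take the form:

```latex
% Controlled mean-field (McKean--Vlasov) dynamics and quadratic cost:
dX_t = \bigl(A X_t + \bar{A}\,\mathbb{E}[X_t] + B u_t\bigr)\,dt + \sigma\,dW_t,
\qquad
J(u) = \mathbb{E}\int_0^T \bigl( X_t^\top Q X_t
      + \mathbb{E}[X_t]^\top \bar{Q}\,\mathbb{E}[X_t]
      + u_t^\top R u_t \bigr)\,dt .
% The optimal control is affine in the state and its mean:
u_t^* = -R^{-1} B^\top \bigl( P_t\,(X_t - \mathbb{E}[X_t]) + \Pi_t\,\mathbb{E}[X_t] \bigr),
% where P (fluctuation part) and Pi (mean part) solve two Riccati equations:
-\dot{P}_t = A^\top P_t + P_t A - P_t B R^{-1} B^\top P_t + Q,
\qquad P_T = 0,
-\dot{\Pi}_t = (A+\bar{A})^\top \Pi_t + \Pi_t (A+\bar{A})
             - \Pi_t B R^{-1} B^\top \Pi_t + Q + \bar{Q},
\qquad \Pi_T = 0 .
```

This is the standard "pair of Riccati equations" pattern the abstract refers to: one equation governs deviations from the mean, the other the mean itself.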

  20. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  1. Ability of commercially available dairy ration programs to predict duodenal flows of protein and essential amino acids in dairy cows.

    PubMed

    Pacheco, D; Patton, R A; Parys, C; Lapierre, H

    2012-02-01

    The objective of this analysis was to compare the rumen submodel predictions of 4 commonly used dairy ration programs to observed values of duodenal flows of crude protein (CP), protein fractions, and essential AA (EAA). The literature was searched and 40 studies, including 154 diets, were used to compare observed values with those predicted by AminoCow (AC), Agricultural Modeling and Training Systems (AMTS), Cornell-Penn-Miner (CPM), and National Research Council 2001 (NRC) models. The models were evaluated based on their ability to predict the mean, their root mean square prediction error (RMSPE), error bias, and adequacy of regression equations for each protein fraction. The models predicted the mean duodenal CP flow within 5%, with more than 90% of the variation due to random disturbance. The models also predicted within 5% the mean microbial CP flow except CPM, which overestimated it by 27%. Only NRC, however, predicted mean rumen-undegraded protein (RUP) flows within 5%, whereas AC and AMTS underpredicted it by 8 to 9% and CPM by 24%. Regarding duodenal flows of individual AA, across all diets, CPM predicted substantially greater (>10%) mean flows of Arg, His, Ile, Met, and Lys; AMTS predicted greater flow for Arg and Met, whereas AC and NRC estimations were, on average, within 10% of observed values. Overpredictions by the CPM model were mainly related to mean bias, whereas the NRC model had the highest proportion of bias in random disturbance for flows of EAA. Models tended to predict mean flows of EAA more accurately on corn silage and alfalfa diets than on grass-based diets, more accurately on corn grain-based diets than on non-corn-based diets, and finally more accurately in the mid range of diet types. The 4 models were accurate at predicting mean dry matter intake. The AC, AMTS, and NRC models were all sufficiently accurate to be used for balancing EAA in dairy rations under field conditions. Copyright © 2012 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.
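The abstract above partitions each model's prediction error into mean bias, regression (slope) bias, and random disturbance. A common way to do this is a Theil-style decomposition of the mean square prediction error; the sketch below assumes that variant, which may differ in detail from the one the authors used.

```python
import numpy as np

def rmspe_decomposition(observed, predicted):
    """Decompose mean square prediction error into mean bias,
    slope (regression) bias, and random disturbance:
    MSPE = (Pbar - Obar)^2 + (s_P - r*s_O)^2 + (1 - r^2)*s_O^2."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mspe = np.mean((p - o) ** 2)
    so, sp = o.std(), p.std()            # population standard deviations
    r = np.corrcoef(o, p)[0, 1]
    mean_bias = (p.mean() - o.mean()) ** 2
    slope_bias = (sp - r * so) ** 2
    disturbance = (1.0 - r ** 2) * so ** 2
    return mspe, mean_bias, slope_bias, disturbance

# Hypothetical example: a model that overpredicts flows by 10% plus noise.
rng = np.random.default_rng(1)
obs = rng.normal(100.0, 10.0, 200)
pred = 1.1 * obs + rng.normal(0.0, 5.0, 200)
mspe, mb, sb, dist = rmspe_decomposition(obs, pred)
```

A model whose error is mostly "random disturbance" (like NRC for EAA flows above) is preferable to one with the same RMSPE dominated by mean bias (like CPM), since bias is systematic and correctable.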

  2. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear, system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite size networks, actually show that the statistical measures as obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  3. Carrier trajectory tracking equations for Simple-band Monte Carlo simulation of avalanche multiplication processes

    NASA Astrophysics Data System (ADS)

    Ong, J. S. L.; Charin, C.; Leong, J. H.

    2017-12-01

    Avalanche photodiodes (APDs) with steep electric field gradients generally have low excess noise that arises from carrier multiplication within the internal gain of the devices, and the Monte Carlo (MC) method is among popular device simulation tools for such devices. However, there are few articles relating to carrier trajectory modeling in MC models for such devices. In this work, a set of electric-field-gradient-dependent carrier trajectory tracking equations are developed and used to update the positions of carriers along the path during Simple-band Monte Carlo (SMC) simulations of APDs with non-uniform electric fields. The mean gain and excess noise results obtained from the SMC model employing these equations show good agreement with the results reported for a series of silicon diodes, including a p+n diode with steep electric field gradients. These results confirm the validity and demonstrate the feasibility of the trajectory tracking equations applied in SMC models for simulating mean gain and excess noise in APDs with non-uniform electric fields. Also, the simulation results of mean gain, excess noise, and carrier ionization positions obtained from the SMC model of this work agree well with those of the conventional SMC model employing the concept of a uniform electric field within a carrier free-flight. These results demonstrate that the electric field variation within a carrier free-flight has an insignificant effect on the predicted mean gain and excess noise results. Therefore, both the SMC model of this work and the conventional SMC model can be used to predict the mean gain and excess noise in APDs with highly non-uniform electric fields.
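The comparison in this abstract, between tracking the field variation within a carrier free flight and holding the field fixed, can be illustrated with a toy kinematic sketch. Everything below (linear field profile, reduced units, parameter values, function names) is an illustrative assumption, not the paper's actual trajectory equations.

```python
def flight_position(x0, v0, t_flight, e0, grad, q_over_m=1.0, n_sub=1000):
    """Integrate a carrier free flight in a linearly varying field
    E(x) = e0 + grad * x, using small velocity-Verlet sub-steps
    (reduced units: acceleration = q_over_m * E)."""
    dt = t_flight / n_sub
    x, v = x0, v0
    a = q_over_m * (e0 + grad * x)
    for _ in range(n_sub):
        x += v * dt + 0.5 * a * dt * dt
        a_new = q_over_m * (e0 + grad * x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x

def flight_position_uniform(x0, v0, t_flight, e0, q_over_m=1.0):
    """Conventional update: field held fixed at its start-of-flight value."""
    a = q_over_m * e0
    return x0 + v0 * t_flight + 0.5 * a * t_flight ** 2

# For short free flights the two updates nearly coincide, consistent with the
# abstract's finding that intra-flight field variation barely affects results.
x_graded = flight_position(0.0, 1.0, 1e-2, 1.0, grad=10.0)
x_uniform = flight_position_uniform(0.0, 1.0, 1e-2, 1.0)
```

The leading-order discrepancy between the two updates scales with grad * t_flight**3, which is why it is negligible for the short flights typical of MC transport simulation.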

  4. Macroscopic quantum tunneling escape of Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Zhao, Xinxin; Alcala, Diego A.; McLain, Marie A.; Maeda, Kenji; Potnis, Shreyas; Ramos, Ramon; Steinberg, Aephraim M.; Carr, Lincoln D.

    2017-12-01

    Recent experiments on macroscopic quantum tunneling reveal a nonexponential decay of the number of atoms trapped in a quasibound state behind a potential barrier. Through both experiment and theory, we demonstrate this nonexponential decay results from interactions between atoms. Quantum tunneling of tens of thousands of 87Rb atoms in a Bose-Einstein condensate is modeled by a modified Jeffreys-Wentzel-Kramers-Brillouin model, taking into account the effective time-dependent barrier induced by the mean field. Three-dimensional Gross-Pitaevskii simulations corroborate a mean-field result when compared with experiments. However, with one-dimensional modeling using time-evolving block decimation, we present an effective renormalized mean-field theory that suggests many-body dynamics for which a bare mean-field theory may not apply.
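Schematically, the mechanism described above can be written as a generic WKB escape rate with a mean-field-modified barrier. This is a hedged sketch of the idea, not the authors' exact modified Jeffreys-Wentzel-Kramers-Brillouin model; the symbols V, g, and nu are illustrative.

```latex
% WKB escape rate through an effective barrier that depends on atom number N:
\Gamma(N) \;\approx\; \nu\,
\exp\!\left[-\frac{2}{\hbar}\int_{x_1}^{x_2}
\sqrt{2m\,\bigl(V_{\mathrm{eff}}(x;N)-\mu(N)\bigr)}\;dx\right],
\qquad
V_{\mathrm{eff}}(x;N) = V(x) + g\,N\,|\psi(x)|^{2}.
% Because Gamma depends on N(t), the number decay is nonexponential:
\frac{dN}{dt} = -\Gamma\bigl(N(t)\bigr)\,N(t).
```

As N decreases, the interaction term lowers the chemical potential relative to the barrier, Gamma falls, and the decay slows: this N-dependence is what distinguishes the observed nonexponential decay from single-particle tunneling.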

  5. How Meaning Is Born.

    ERIC Educational Resources Information Center

    Hunt, Madgie Mae

    In an effort to create a multilevel, interactive, and hypothesis-based model of the reading comprehension process that bridges interdisciplinary gaps in the theory of learning, this report focuses on descriptions of cognitive processes developed in the fields of cognitive psychology, artificial intelligence, sociolinguistics, linguistics, and…

  6. Field warming experiments shed light on the wheat yield response to temperature in China

    PubMed Central

    Zhao, Chuang; Piao, Shilong; Huang, Yao; Wang, Xuhui; Ciais, Philippe; Huang, Mengtian; Zeng, Zhenzhong; Peng, Shushi

    2016-01-01

    Wheat growth is sensitive to temperature, but the effect of future warming on yield is uncertain. Here, focusing on China, we compiled 46 observations of the sensitivity of wheat yield to temperature change (SY,T, yield change per °C) from field warming experiments and 102 SY,T estimates from local process-based and statistical models. The average SY,T from field warming experiments, local process-based models and statistical models is −0.7±7.8(±s.d.)% per °C, −5.7±6.5% per °C and 0.4±4.4% per °C, respectively. Moreover, SY,T is different across regions and warming experiments indicate positive SY,T values in regions where growing-season mean temperature is low, and water supply is not limiting, and negative values elsewhere. Gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project appear to capture the spatial pattern of SY,T deduced from warming observations. These results from local manipulative experiments could be used to improve crop models in the future. PMID:27853151

  7. Bulalo field, Philippines: Reservoir modeling for prediction of limits to sustainable generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strobel, Calvin J.

    1993-01-28

    The Bulalo geothermal field, located in Laguna province, Philippines, supplies 12% of the electricity on the island of Luzon. The first 110 MWe power plant was on line May 1979; current 330 MWe (gross) installed capacity was reached in 1984. Since then, the field has operated at an average plant factor of 76%. The National Power Corporation plans to add 40 MWe base load and 40 MWe standby in 1995. A numerical simulation model for the Bulalo field has been created that matches historic pressure changes, enthalpy and steam flash trends and cumulative steam production. Gravity modeling provided independent verification of mass balances and time rate of change of liquid desaturation in the rock matrix. Gravity modeling, in conjunction with reservoir simulation provides a means of predicting matrix dry out and the time to limiting conditions for sustainable levelized steam deliverability and power generation.

  8. Focusing behavior of the fractal vector optical fields designed by fractal lattice growth model.

    PubMed

    Gao, Xu-Zhen; Pan, Yue; Zhao, Meng-Dan; Zhang, Guan-Lin; Zhang, Yu; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-01-22

    We introduce a general fractal lattice growth model, significantly expanding the application scope of the fractal in the realm of optics. This model can be applied to construct various kinds of fractal "lattices" and then to achieve the design of a great diversity of fractal vector optical fields (F-VOFs) combined with various "bases". We also experimentally generate the F-VOFs and explore their universal focusing behaviors. Multiple focal spots can be flexibly engineered, and the optical tweezers experiment validates the simulated tight-focusing fields, which means that this model allows the diverse focal patterns to flexibly trap and manipulate micrometer-sized particles. Furthermore, the recovery performance of the F-VOFs is also studied when the input fields and spatial frequency spectrum are obstructed, and the results confirm the robustness of the F-VOFs in both focusing and imaging processes, which is very useful in information transmission.

  9. [Simulation of water and carbon fluxes in harvard forest area based on data assimilation method].

    PubMed

    Zhang, Ting-Long; Sun, Rui; Zhang, Rong-Hua; Zhang, Lei

    2013-10-01

    Model simulation and in situ observation are the two most important means in studying the water and carbon cycles of terrestrial ecosystems, but have their own advantages and shortcomings. To combine these two means would help to reflect the dynamic changes of ecosystem water and carbon fluxes more accurately. Data assimilation provides an effective way to integrate the model simulation and in situ observation. Based on the observation data from the Harvard Forest Environmental Monitoring Site (EMS), and by using the ensemble Kalman filter algorithm, this paper assimilated the field-measured LAI and remote sensing LAI into the Biome-BGC model to simulate the water and carbon fluxes in Harvard forest area. As compared with the original model simulated without data assimilation, the improved Biome-BGC model with the assimilation of the field-measured LAI in 1998, 1999, and 2006 increased the coefficient of determination R2 between model simulation and flux observation for the net ecosystem exchange (NEE) and evapotranspiration by 8.4% and 10.6%, decreased the sum of absolute error (SAE) and root mean square error (RMSE) of NEE by 17.7% and 21.2%, and decreased the SAE and RMSE of the evapotranspiration by 26.8% and 28.3%, respectively. After assimilating the MODIS LAI products of 2000-2004 into the improved Biome-BGC model, the R2 between simulated and observed results of NEE and evapotranspiration was increased by 7.8% and 4.7%, the SAE and RMSE of NEE were decreased by 21.9% and 26.3%, and the SAE and RMSE of evapotranspiration were decreased by 24.5% and 25.5%, respectively. It was suggested that the simulation accuracy of ecosystem water and carbon fluxes could be effectively improved if the field-measured LAI or remote sensing LAI was integrated into the model.
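The core of the assimilation step described above is the ensemble Kalman filter analysis update. The sketch below is a generic stochastic EnKF step for a single scalar state (e.g. LAI); the paper's implementation couples such an update to the Biome-BGC state, and all values here are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, h=lambda x: x, rng=None):
    """One stochastic EnKF analysis step for a scalar state.

    ensemble : 1-D array of forecast ensemble members
    obs      : observed value (perturbed per member)
    obs_var  : observation error variance
    h        : observation operator mapping state to observation space
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    hx = np.array([h(x) for x in ensemble])
    p_hh = np.var(hx, ddof=1) + obs_var          # innovation variance
    p_xh = np.cov(ensemble, hx, ddof=1)[0, 1]    # state-observation covariance
    k = p_xh / p_hh                              # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + k * (perturbed - hx)       # analysis ensemble

# Hypothetical forecast LAI ensemble pulled toward an observation of 4.0:
rng = np.random.default_rng(42)
prior = rng.normal(3.0, 1.0, 100)
posterior = enkf_update(prior, obs=4.0, obs_var=0.25, rng=rng)
```

The update both shifts the ensemble mean toward the observation and shrinks the ensemble spread, which is why assimilating observed LAI tightens the simulated NEE and evapotranspiration in the study above.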

  10. Are Opinions Based on Science: Modelling Social Response to Scientific Facts

    PubMed Central

    Iñiguez, Gerardo; Tagüeña-Martínez, Julia; Kaski, Kimmo K.; Barrio, Rafael A.

    2012-01-01

    As scientists we like to think that modern societies and their members base their views, opinions and behaviour on scientific facts. This is not necessarily the case, even though we are all (over-) exposed to information flow through various channels of media, i.e. newspapers, television, radio, internet, and web. It is thought that this is mainly due to the conflicting information on the mass media and to the individual attitude (formed by cultural, educational and environmental factors), that is, one external factor and another personal factor. In this paper we will investigate the dynamical development of opinion in a small population of agents by means of a computational model of opinion formation in a co-evolving network of socially linked agents. The personal and external factors are taken into account by assigning an individual attitude parameter to each agent, and by subjecting all to an external but homogeneous field to simulate the effect of the media. We then adjust the field strength in the model by using actual data on scientific perception surveys carried out in two different populations, which allow us to compare two different societies. We interpret the model findings with the aid of simple mean field calculations. Our results suggest that scientifically sound concepts are more difficult to acquire than concepts not validated by science, since opposing individuals organize themselves in close communities that prevent opinion consensus. PMID:22905117
  12. A numerical solution of the problem of crown forest fire initiation and spread

    NASA Astrophysics Data System (ADS)

    Marzaeva, S. I.; Galtseva, O. V.

    2018-05-01

    The mathematical model of forest fire was based on an analysis of known experimental data and uses concepts and methods from reactive media mechanics. The study takes into account the mutual interaction of the forest fires and three-dimensional atmosphere flows. The research is done by means of mathematical modeling of physical processes. It is based on numerical solution of Reynolds equations for chemical components and equations of energy conservation for gaseous and condensed phases. It is assumed that the forest during a forest fire can be modeled as a two-temperature multiphase non-deformable porous reactive medium. A discrete analog for the system of equations was obtained by means of the control volume method. The developed model of forest fire initiation and spreading makes it possible to obtain a detailed picture of the variation in the velocity, temperature and chemical species concentration fields with time. The mathematical model and the results of the calculation make it possible to evaluate critical conditions of forest fire initiation and spread, which allows applying the given model to the development of means for preventing fires.

  13. Spatial correlations in driven-dissipative photonic lattices

    NASA Astrophysics Data System (ADS)

    Biondi, Matteo; Lienhard, Saskia; Blatter, Gianni; Türeci, Hakan E.; Schmidt, Sebastian

    2017-12-01

    We study the nonequilibrium steady-state of interacting photons in cavity arrays as described by the driven-dissipative Bose–Hubbard and spin-1/2 XY model. For this purpose, we develop a self-consistent expansion in the inverse coordination number of the array (∼ 1/z) to solve the Lindblad master equation of these systems beyond the mean-field approximation. Our formalism is compared and benchmarked with exact numerical methods for small systems based on an exact diagonalization of the Liouvillian and a recently developed corner-space renormalization technique. We then apply this method to obtain insights beyond mean-field in two particular settings: (i) we show that the gas–liquid transition in the driven-dissipative Bose–Hubbard model is characterized by large density fluctuations and bunched photon statistics. (ii) We study the antibunching–bunching transition of the nearest-neighbor correlator in the driven-dissipative spin-1/2 XY model and provide a simple explanation of this phenomenon.

  14. Clustering promotes switching dynamics in networks of noisy neurons

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  15. Continuum Mean-Field Theories for Molecular Fluids, and Their Validity at the Nanoscale

    NASA Astrophysics Data System (ADS)

    Hanna, C. B.; Peyronel, F.; MacDougall, C.; Marangoni, A.; Pink, D. A.; AFMNet-NCE Collaboration

    2011-03-01

    We present a calculation of the physical properties of solid triglyceride particles dispersed in an oil phase, using atomic-scale molecular dynamics. Significant equilibrium density oscillations in the oil appear when the interparticle distance, d , becomes sufficiently small, with a global minimum in the free energy found at d ~ 1.4 nm. We compare the simulation values of the Hamaker coefficient with those of models which assume that the oil is a homogeneous continuum: (i) Lifshitz theory, (ii) the Fractal Model, and (iii) a Lennard-Jones 6-12 potential model. The last-named yields a minimum in the free energy at d ~ 0.26 nm. We conclude that, at the nanoscale, continuum Lifshitz theory and other continuum mean-field theories based on the assumption of homogeneous fluid density can lead to erroneous conclusions. CBH supported by NSF DMR-0906618. DAP supported by NSERC. This work supported by AFMNet-NCE.
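The Lennard-Jones 6-12 model mentioned in (iii) is the standard pair potential below, shown here in reduced units (eps = sigma = 1) rather than the fitted parameters behind the paper's d ~ 0.26 nm result, which come from integrating such pair interactions over the particle geometry.

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 6-12 pair potential:
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# The analytic minimum sits at r_min = 2**(1/6) * sigma, with depth -eps;
# locating it numerically on a fine grid recovers the same value.
r = np.linspace(0.8, 3.0, 100_000)
r_min = r[np.argmin(lj(r))]
```

The free-energy minimum of a continuum model like this is set entirely by the assumed homogeneous pair interaction, which is exactly why it can disagree with atomistic simulations once nanoscale density oscillations in the solvent matter.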

  16. An assessment of laser velocimetry in hypersonic flow

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Although extensive progress has been made in computational fluid mechanics, reliable flight vehicle designs and modifications still cannot be made without recourse to extensive wind tunnel testing. Future progress in the computation of hypersonic flow fields is restricted by the need for a reliable mean flow and turbulence modeling data base which could be used to aid in the development of improved empirical models for use in numerical codes. Currently, there are few compressible flow measurements which could be used for this purpose. In this report, the results of experiments designed to assess the potential for laser velocimeter measurements of mean flow and turbulent fluctuations in hypersonic flow fields are presented. Details of a new laser velocimeter system which was designed and built for this test program are described.

  17. MAGNETOHYDRODYNAMIC SIMULATION-DRIVEN KINEMATIC MEAN FIELD MODEL OF THE SOLAR CYCLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simard, Corinne; Charbonneau, Paul; Bouchat, Amelie, E-mail: corinne@astro.umontreal.ca, E-mail: paulchar@astro.umontreal.ca, E-mail: amelie.bouchat@mail.mcgill.ca

    We construct a series of kinematic axisymmetric mean-field dynamo models operating in the αΩ, α²Ω and α² regimes, all using the full α-tensor extracted from a global magnetohydrodynamical simulation of solar convection producing large-scale magnetic fields undergoing solar-like cyclic polarity reversals. We also include an internal differential rotation profile produced in a purely hydrodynamical parent simulation of solar convection, and a simple meridional flow profile described by a single cell per meridional quadrant. An α²Ω mean-field model, presumably closest to the mode of dynamo action characterizing the MHD simulation, produces a spatiotemporal evolution of magnetic fields that share some striking similarities with the zonally-averaged toroidal component extracted from the simulation. Comparison with α² and αΩ mean-field models operating in the same parameter regimes indicates that much of the complexity observed in the spatiotemporal evolution of the large-scale magnetic field in the simulation can be traced to the turbulent electromotive force. Oscillating α² solutions are readily produced, and show some similarities with the observed solar cycle, including a deep-seated toroidal component concentrated at low latitudes and migrating equatorward in the course of the solar cycle. Various numerical experiments performed using the mean-field models reveal that turbulent pumping plays an important role in setting the global characteristics of the magnetic cycles.

  18. Decadal variability in core surface flows deduced from geomagnetic observatory monthly means

    NASA Astrophysics Data System (ADS)

    Whaler, K. A.; Olsen, N.; Finlay, C. C.

    2016-10-01

    Monthly means of the magnetic field measurements at ground observatories are a key data source for studying temporal changes of the core magnetic field. However, when they are calculated in the usual way, contributions of external (magnetospheric and ionospheric) origin may remain, which make them less favourable for studying the field generated by dynamo action in the core. We remove external field predictions, including a new way of characterizing the magnetospheric ring current, from the data and then calculate revised monthly means using robust methods. The geomagnetic secular variation (SV) is calculated as the first annual differences of these monthly means, which also removes the static crustal field. SV time-series based on revised monthly means are much less scattered than those calculated from ordinary monthly means, and their variances and correlations between components are smaller. On the annual to decadal timescale, the SV is generated primarily by advection in the fluid outer core. We demonstrate the utility of the revised monthly means by calculating models of the core surface advective flow between 1997 and 2013 directly from the SV data. One set of models assumes flow that is constant over three months; such models exhibit large and rapid temporal variations. For models of this type, less complex flows achieve the same fit to the SV derived from revised monthly means than those from ordinary monthly means. However, those obtained from ordinary monthly means are able to follow excursions in SV that are likely to be external field contamination rather than core signals. Having established that we can find models that fit the data adequately, we then assess how much temporal variability is required. Previous studies have suggested that the flow is consistent with torsional oscillations (TO), solid body-like oscillations of fluid on concentric cylinders with axes aligned along the Earth's rotation axis. 
TO have been proposed to explain decadal timescale changes in the length-of-day. We invert for flow models where the only temporal changes are consistent with TO, but such models have an unacceptably large data misfit. However, if we relax the TO constraint to allow a little more temporal variability, we can fit the data as well as with flows assumed constant over three months, demonstrating that rapid SV changes can be reproduced by rather small flow changes. Although the flow itself changes slowly, its time derivative can be locally (temporally and spatially) large, in particular when and where core surface secular acceleration peaks. Spherical harmonic expansion coefficients of the flows are not well resolved, and many of them are strongly correlated. Averaging functions, a measure of our ability to determine the flow at a given location from the data distribution available, are poor approximations to the ideal, even when centred on points of the core surface below areas of high observatory density. Both resolution and averaging functions are noticeably worse for the toroidal flow component, which dominates the flow, than the poloidal flow component, except around the magnetic equator where averaging functions for both components are poor.
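
    The first-annual-difference definition of SV used above can be sketched as follows (a minimal Python sketch; the function name and the synthetic drifting-field series are ours for illustration, and real processing would start from robust, externally corrected monthly means):

```python
import numpy as np

def secular_variation(monthly_means):
    """First annual differences of monthly means (nT/yr).

    SV at month t is B(t+12) - B(t): differencing monthly means 12
    months apart also removes the static crustal field, which is
    constant and cancels exactly in the difference.
    """
    b = np.asarray(monthly_means, dtype=float)
    return b[12:] - b[:-12]

# Synthetic example: core field drifting at 20 nT/yr on top of a
# constant (crustal) bias of 48000 nT over 36 months.
t = np.arange(36)
field = 48000.0 + 20.0 * (t / 12.0)
sv = secular_variation(field)  # constant 20 nT/yr; bias has cancelled
```

    The constant bias drops out term by term, which is why annual differencing removes the crustal field without needing to model it.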

  19. High Latitude Precipitating Energy Flux and Joule Heating During Geomagnetic Storms Determined from AMPERE Field-aligned Currents

    NASA Astrophysics Data System (ADS)

    Robinson, R. M.; Zanetti, L. J.; Anderson, B. J.; Korth, H.; Samara, M.; Michell, R.; Grubbs, G. A., II; Hampton, D. L.; Dropulic, A.

    2016-12-01

    A high latitude conductivity model based on field-aligned currents measured by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) provides the means for complete specification of electric fields and currents at high latitudes. Based on coordinated measurements made by AMPERE and the Poker Flat Incoherent Scatter Radar, the model determines the most likely value of the ionospheric conductance from the direction, magnitude, and magnetic local time of the field-aligned current. A conductance model driven by field-aligned currents ensures spatial and temporal consistency between the calculated electrodynamic parameters. To validate the results, the Pedersen and Hall conductances were used to calculate the energy flux associated with the energetic particle precipitation. When integrated over the entire hemisphere, the total energy flux compares well with the Hemispheric Power Index derived from the OVATION-PRIME model. The conductances were also combined with the field-aligned currents to calculate the self-consistent electric field, which was then used to compute horizontal currents and Joule heating. The magnetic perturbations derived from the currents replicate most of the variations observed in ground-based magnetograms. The model was used to study high latitude particle precipitation, currents, and Joule heating for 24 magnetic storms. In most cases, the total energy input from precipitating particles and Joule heating exhibits a sharply-peaked maximum at the times of local minima in Dst, suggesting a close coupling between the ring current and the high latitude currents driven by the Region 2 field-aligned currents. The rapid increase and decrease of the high latitude energy deposition suggests an explosive transfer of energy from the magnetosphere to the ionosphere just prior to storm recovery.
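
    The Joule heating computation described above combines the Pedersen conductance with the self-consistent electric field; in its simplest height-integrated form it is Q_J = Sigma_P |E|^2. A minimal sketch (our own simplification, neglecting neutral-wind effects, with illustrative values):

```python
import numpy as np

def joule_heating(sigma_p, e_field):
    """Height-integrated Joule heating rate (W/m^2) from the Pedersen
    conductance sigma_p (S) and the horizontal electric field vector
    e_field (V/m): Q_J = Sigma_P * |E|^2."""
    e = np.asarray(e_field, dtype=float)
    return sigma_p * np.sum(e**2, axis=-1)

# Example: 10 S conductance and a 50 mV/m eastward field.
q = joule_heating(10.0, [0.05, 0.0])  # 0.025 W/m^2 = 25 mW/m^2
```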

  20. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intensive than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
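
    For context, the mean-field treatment the authors benchmark against amounts to rate equations for site coverages with spatial correlations ignored. A toy adsorption/desorption sketch (not the NO oxidation model of the paper; rate constants are illustrative; a cluster mean-field scheme would additionally evolve nearest-neighbour pair probabilities):

```python
def mean_field_coverage(k_ads, k_des, dt=1e-3, steps=20000):
    """Mean-field coverage dynamics for a single-species lattice gas:
    d(theta)/dt = k_ads*(1 - theta) - k_des*theta.
    Assumes spatially uncorrelated adsorbates, so only the average
    coverage theta is tracked (forward-Euler integration)."""
    theta = 0.0
    for _ in range(steps):
        theta += dt * (k_ads * (1.0 - theta) - k_des * theta)
    return theta

# Analytic steady state is k_ads / (k_ads + k_des) = 2/3 here.
theta = mean_field_coverage(2.0, 1.0)
```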

  1. On discrete symmetries for a whole Abelian model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauca, J.; Doria, R.; Aprendanet, Petropolis, 25600

    Applying the whole concept to gauge theory, a nonlinear abelian model is derived. The next step is to understand the model's properties; this work is devoted to its discrete symmetries. For this, we work in two field reference systems: the whole gauge symmetry can be analyzed through different sets, namely the constructor basis {D_mu, X^i_mu} and the physical basis {G_muI}. Taking the diagonalized spin-1 sector as the field reference system, the P, C, T and PCT symmetries are analyzed. They show that under this systemic model there are conservation laws for the parts and for the whole, which develops the meaning of whole-parity, field-parity, and so on. However, it is the whole symmetry that rules. This means that usually forbidden particles, such as pseudovector photons, can be introduced through such a whole abelian system. As a result, one notices that the whole fields {G_muI} manifest a diversity of quanta: particles with different spins, masses, and discrete quantum numbers under the same gauge symmetry. Thus, without violating PCT symmetry, different possibilities for discrete symmetries can be accommodated.

  2. Geomagnetic field model for the last 5 My: time-averaged field and secular variation

    NASA Astrophysics Data System (ADS)

    Hatakeyama, Tadahiro; Kono, Masaru

    2002-11-01

    The structure of the geomagnetic field has been studied by using the paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected while considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singularity or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than that in the present magnetic field. The equatorial dipole component is quite small, after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than that in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work.
The PSV model shows large variance of the (2,1) component, which is in good agreement with the previous PSV models obtained by forward approaches. It is also indicated that the variance of the axial dipole term is very small. This is in conflict with the studies based on paleointensity data, but we show that this conclusion is not inconsistent with the paleointensity data because a substantial part of the apparent scatter in paleointensities may be attributable to effects other than the fluctuations in g10 itself.
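
    The direction-cosine parametrization mentioned above, used in place of declination and inclination to avoid their high-latitude singularities, is simply the unit field vector in (north, east, down) components (a minimal sketch; the function name is ours):

```python
import math

def direction_cosines(decl_deg, incl_deg):
    """Unit-vector components (north, east, down) from declination D
    and inclination I: (cos I cos D, cos I sin D, sin I).
    Unlike (D, I) themselves, these vary smoothly even when the field
    is nearly vertical, where D becomes ill-defined."""
    d = math.radians(decl_deg)
    i = math.radians(incl_deg)
    return (math.cos(i) * math.cos(d),
            math.cos(i) * math.sin(d),
            math.sin(i))

# A vertical (downward) field: the direction cosines are (0, 0, 1),
# whereas declination alone would be undefined.
n, e, z = direction_cosines(0.0, 90.0)
```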

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franović, Igor, E-mail: franovic@ipb.ac.rs; Todorović, Kristina; Burić, Nikola

    We use the mean-field approach to analyze the collective dynamics in macroscopic networks of stochastic Fitzhugh-Nagumo units with delayed couplings. The conditions for validity of the two main approximations behind the model, called the Gaussian approximation and the Quasi-independence approximation, are examined. It is shown that the dynamics of the mean-field model may indicate in a self-consistent fashion the parameter domains where the Quasi-independence approximation fails. Apart from a network of globally coupled units, we also consider the paradigmatic setup of two interacting assemblies to demonstrate how our framework may be extended to hierarchical and modular networks. In both cases, the mean-field model can be used to qualitatively analyze the stability of the system, as well as the scenarios for the onset and the suppression of the collective mode. In quantitative terms, the mean-field model is capable of predicting the average oscillation frequency corresponding to the global variables of the exact system.

  4. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information.

    PubMed

    Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
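
    The area level EBLUPs compared above are composite estimators of the Fay-Herriot type: the direct survey estimate for a unit is shrunk toward a model-based synthetic prediction, with weights set by the model and sampling variances. A minimal sketch (function name and all numbers are illustrative; a real EBLUP also estimates the variance components from the data):

```python
def fh_composite(direct, synthetic, var_model, var_sampling):
    """Area-level (Fay-Herriot-type) composite estimate.

    gamma = var_model / (var_model + var_sampling): the noisier the
    direct estimate (large sampling variance), the more weight moves
    to the model-based synthetic value."""
    gamma = var_model / (var_model + var_sampling)
    return gamma * direct + (1.0 - gamma) * synthetic

# Illustrative unit: direct estimate 250 m^3/ha with sampling variance
# 300, synthetic prediction 230 m^3/ha, model variance 100.
est = fh_composite(direct=250.0, synthetic=230.0,
                   var_model=100.0, var_sampling=300.0)
```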

  5. Analysis of area level and unit level models for small area estimation in forest inventories assisted with LiDAR auxiliary information

    PubMed Central

    Monleon, Vicente J.; Temesgen, Hailemariam; Ford, Kevin R.

    2017-01-01

    Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey’s height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates. PMID:29216290

  6. Yield estimation of corn based on multitemporal LANDSAT-TM data as input for an agrometeorological model

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1998-07-01

    In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.

  7. Emphysema diagnosis using X-ray dark-field imaging at a laser-driven compact synchrotron light source

    PubMed Central

    Schleede, Simone; Meinel, Felix G.; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Potdevin, Guillaume; Malecki, Andreas; Adam-Neumair, Silvia; Thieme, Sven F.; Bamberg, Fabian; Nikolaou, Konstantin; Bohla, Alexander; Yildirim, Ali Ö.; Loewen, Roderick; Gifford, Martin; Ruth, Ronald; Eickelberg, Oliver; Reiser, Maximilian; Pfeiffer, Franz

    2012-01-01

    In early stages of various pulmonary diseases, such as emphysema and fibrosis, the change in X-ray attenuation is not detectable with absorption-based radiography. To monitor the morphological changes that the alveoli network undergoes in the progression of these diseases, we propose using the dark-field signal, which is related to small-angle scattering in the sample. Combined with the absorption-based image, the dark-field signal enables better discrimination between healthy and emphysematous lung tissue in a mouse model. All measurements have been performed at 36 keV using a monochromatic laser-driven miniature synchrotron X-ray source (Compact Light Source). In this paper we present grating-based dark-field images of emphysematous vs. healthy lung tissue, where the strong dependence of the dark-field signal on mean alveolar size leads to improved diagnosis of emphysema in lung radiographs. PMID:23074250

  8. Periodic variations in stratospheric-mesospheric temperature from 20-65 km at 80 N to 30 S

    NASA Technical Reports Server (NTRS)

    Nastrom, G. D.; Belmont, A. D.

    1975-01-01

    Results on large-scale periodic variations of the stratospheric-mesospheric temperature field based on Meteorological Rocket Network (MRN) measurements are reported for a long-term (12-year) mean, the quasi-biennial oscillation (QBO), and the first three harmonics of the annual wave (annual wave, semi-annual wave, and terannual wave or 4-month variation). Station-to-station comparisons are tabulated and charted for amplitude and phase of periodic variations in the temperature field. Masking and biasing factors, such as diurnal tides, solar radiation variations, mean monthly variations, instrument lag, and aerodynamic heating, are singled out for attention. Models of the stratosphere will have to account for these oscillations of different periods in the thermal field and related properties of the wind fields, with multilayered horizontal stratification with height taken into account.

  9. Convergence of the Bouguer-Beer law for radiation extinction in particulate media

    NASA Astrophysics Data System (ADS)

    Frankel, A.; Iaccarino, G.; Mani, A.

    2016-10-01

    Radiation transport in particulate media is a common physical phenomenon in natural and industrial processes. Developing predictive models of these processes requires a detailed model of the interaction between the radiation and the particles. Resolving the interaction between the radiation and the individual particles in a very large system is impractical, whereas continuum-based representations of the particle field lend themselves to efficient numerical techniques based on the solution of the radiative transfer equation. We investigate radiation transport through discrete and continuum-based representations of a particle field. Exact solutions for radiation extinction are developed using a Monte Carlo model in different particle distributions. The particle distributions are then projected onto a concentration field with varying grid sizes, and the Bouguer-Beer law is applied by marching across the grid. We show that the continuum-based solution approaches the Monte Carlo solution under grid refinement, but quickly diverges as the grid size approaches the particle diameter. This divergence is attributed to the homogenization error of an individual particle across a whole grid cell. We remark that the concentration energy spectrum of a point-particle field does not approach zero, and thus the concentration variance must also diverge under infinite grid refinement, meaning that no grid-converged solution of the radiation transport is possible.
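
    The grid-marching application of the Bouguer-Beer law described above can be sketched in one dimension (our own minimal version; the paper's reference solution is a full Monte Carlo treatment of the discrete particles):

```python
import math

def transmitted_fraction(beta_cells, dx):
    """March the Bouguer-Beer law across a 1-D grid of extinction
    coefficients beta (1/m): each cell of width dx (m) attenuates the
    beam by exp(-beta * dx), so the total transmission is
    exp(-integral of beta ds) along the path."""
    t = 1.0
    for beta in beta_cells:
        t *= math.exp(-beta * dx)
    return t

# Uniform medium: beta = 2 /m over a 1 m path split into 10 cells,
# so the transmitted fraction is exp(-2).
t = transmitted_fraction([2.0] * 10, 0.1)
```

    The divergence discussed in the abstract appears when the cell size approaches the particle diameter: homogenizing a single particle over a whole cell then misrepresents the local extinction.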

  10. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  11. Statistical thermodynamics of protein folding: Comparison of a mean-field theory with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hao, Ming-Hong; Scheraga, Harold A.

    1995-01-01

    A comparative study of protein folding with an analytical theory and computer simulations, respectively, is reported. The theory is based on an improved mean-field formalism which, in addition to the usual mean-field approximations, takes into account the distributions of energies in the subsets of conformational states. Sequence-specific properties of proteins are parametrized in the theory by two sets of variables, one for the energetics of mean-field interactions and one for the distribution of energies. Simulations are carried out on model polypeptides with different sequences, with different chain lengths, and with different interaction potentials, ranging from strong biases towards certain local chain states (bond angles and torsional angles) to complete absence of local conformational preferences. Theoretical analysis of the simulation results for the model polypeptides reveals three different types of behavior in the folding transition from the statistical coiled state to the compact globular state; these include a cooperative two-state transition, a continuous folding, and a glasslike transition. It is found that, with the fitted theoretical parameters which are specific for each polypeptide under a different potential, the mean-field theory can describe the thermodynamic properties and folding behavior of the different polypeptides accurately. By comparing the theoretical descriptions with simulation results, we verify the basic assumptions of the theory and, thereby, obtain new insights about the folding transitions of proteins. 
It is found that the cooperativity of the first-order folding transition of the model polypeptides is determined mainly by long-range interactions, in particular the dipolar orientation; the local interactions (e.g., bond-angle and torsion-angle potentials) have only marginal effect on the cooperative characteristic of the folding, but have a large impact on the difference in energy between the folded lowest-energy structure and the unfolded conformations of a protein.

  12. Short-ranged interaction effects on Z2 topological phase transitions: The perturbative mean-field method

    NASA Astrophysics Data System (ADS)

    Lai, Hsin-Hua; Hung, Hsiang-Hsuan

    2015-02-01

    The time-reversal symmetric topological insulator (TI) is a novel state of matter in which a bulk-insulating state carries dissipationless spin transport along the surfaces, characterized by the Z2 topological invariant. In the noninteracting limit, this exotic state has been intensively studied and explored with realistic systems, such as HgTe/(Hg, Cd)Te quantum wells. On the other hand, electronic correlation plays a significant role in many solid-state systems, which further influences topological properties and triggers topological phase transitions. Yet an interacting TI is still an elusive subject and most related analyses rely on the mean-field approximation and numerical simulations. Among the approaches, the mean-field approximation fails to predict the topological phase transition, in particular at intermediate interaction strength without spontaneously breaking symmetry. In this paper, we develop an analytical approach based on a combined perturbative and self-consistent mean-field treatment of interactions that is capable of capturing topological phase transitions beyond either method when used independently. As an illustration of the method, we study the effects of short-ranged interactions on the Z2 TI phase, also known as the quantum spin Hall (QSH) phase, in three generalized versions of the Kane-Mele (KM) model at half-filling on the honeycomb lattice. The results are in excellent agreement with quantum Monte Carlo (QMC) calculations on the same model and cannot be reproduced by either a perturbative treatment or a self-consistent mean-field treatment of the interactions. Our analytical approach helps to clarify how the symmetries of the one-body terms of the Hamiltonian determine whether interactions tend to stabilize or destabilize a topological phase.
Moreover, our method should be applicable to a wide class of models where topological transitions due to interactions are in principle possible, but are not correctly predicted by either perturbative or self-consistent treatments.

  13. Ozone levels in the Empty Quarter of Saudi Arabia--application of adaptive neuro-fuzzy model.

    PubMed

    Rahman, Syed Masiur; Khondaker, A N; Khan, Rouf Ahmad

    2013-05-01

    In arid regions, primary pollutants may contribute to the increase of ozone levels and cause negative effects on biotic health. This study investigates the use of an adaptive neuro-fuzzy inference system (ANFIS) for ozone prediction. The initial fuzzy inference system is developed by using fuzzy C-means (FCM) and subtractive clustering (SC) algorithms, which determine the important rules, increase the generalization capability of the fuzzy inference system, reduce computational needs, and ensure speedy model development. The study area is located in the Empty Quarter of Saudi Arabia, which is considered a source of huge potential for oil and gas field development. The developed clustering algorithm-based ANFIS model used meteorological data and derived meteorological data, along with NO and NO₂ concentrations and their transformations, as inputs. The root mean square error and Willmott's index of agreement of the FCM- and SC-based ANFIS models are 3.5 ppbv and 0.99, and 8.9 ppbv and 0.95, respectively. Based on the analysis of the performance measures and regression error characteristic curves, it is concluded that the FCM-based ANFIS model outperforms the SC-based ANFIS model.
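
    The two performance measures quoted above, root mean square error and Willmott's index of agreement, can be computed as follows (a standard formulation of both measures; function and variable names are ours):

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def willmott_d(pred, obs):
    """Willmott's index of agreement, bounded in [0, 1]:
    d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2),
    where Obar is the observed mean. d = 1 indicates perfect agreement."""
    obar = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2 for p, o in zip(pred, obs))
    return 1.0 - num / den

obs = [10.0, 20.0, 30.0]
pred = [10.0, 20.0, 30.0]  # perfect predictions: rmse 0, d = 1
```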

  14. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin

    2017-04-01

    Initial shock is a well-known problem occurring in the early years of a decadal prediction when assimilating full-field observations into a coupled model, which directly affects the prediction skill. To alleviate this problem, we propose a novel full-field initialization method based on dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar). Different from the available solution strategies, including anomaly assimilation and bias correction, it substantially reduces the initial shock by generating more consistent initial conditions for the coupled model, which, along with the model trajectory in one-month windows, best fit the monthly mean analysis data of oceanic temperature and salinity. We evaluate the performance of initialized hindcast experiments according to three proposed indices measuring the intensity of the initial shock. The results indicate that this strategy can obviously reduce the initial shock in decadal predictions by FGOALS-g2 (the Flexible Global Ocean-Atmosphere-Land System model, Grid-point Version 2) compared with the commonly-used nudging full-field initialization for the same model, as well as the different full-field initialization strategies for other CMIP5 (the fifth phase of the Coupled Model Intercomparison Project) models whose decadal prediction results are available. It is also comparable to or even better than the anomaly initialization methods. Better hindcasts of the global mean surface air temperature anomaly are obtained due to the reduction of initial shock by the new initialization scheme.

  15. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
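
    The b-dot controller underlying the technique above commands a magnetic dipole opposing the measured rate of change of the body-frame field, m = -k dB/dt. A minimal sketch (gain and magnetometer readings are illustrative; the paper's contribution concerns the average field seen through the torquers, not the control law itself):

```python
def bdot_dipole(k_gain, b_prev, b_curr, dt):
    """Classic B-dot detumbling law: commanded dipole m = -k * dB/dt,
    with dB/dt approximated by a finite difference of successive
    body-frame magnetometer readings (tesla) taken dt seconds apart.
    The commanded dipole opposes the measured change in B, damping
    the spacecraft's rotation."""
    return [-k_gain * (bc - bp) / dt for bp, bc in zip(b_prev, b_curr)]

# Two readings 0.1 s apart: the x-component of B grew, so the x-dipole
# is negative; the z-component shrank, so the z-dipole is positive.
m = bdot_dipole(1.0e4, [1.0e-5, 0.0, 2.0e-5], [1.2e-5, 0.0, 1.9e-5], 0.1)
```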

  16. A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems

    NASA Astrophysics Data System (ADS)

    Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain

    2016-08-01

    In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the Gaussian beams that depend either on a paraxial or far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case for which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.

  17. Field-dependent critical state of high-Tc superconducting strip simultaneously exposed to transport current and perpendicular magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Cun; He, An; Yong, Huadong

    We present an exact analytical approach for the arbitrary field-dependent critical state of a high-Tc superconducting strip with transport current. The sheet current and flux-density profiles are derived by solving the integral equations, and they agree with experiments quite well. For small transport current, approximate explicit expressions for the sheet current, flux density, and penetration depth in the Kim model are derived based on the mean value theorem for integration. We also extend the results to the field-dependent critical state of a superconducting strip in the simultaneous presence of an applied field and a transport current. The sheet current distributions calculated with the Kim model agree with experiments better than those from the Bean model. Moreover, the lines in the Ia-Ba plane for the Kim model are not monotonic, which is quite different from the Bean model. The results reveal that the maximum transport current in a thin superconducting strip decreases with increasing applied field, an effect that vanishes for the Bean model. The results of this paper are useful for calculating ac susceptibility and ac loss.

  18. Estimating population size for Capercaillie (Tetrao urogallus L.) with spatial capture-recapture models based on genotypes from one field sample

    USGS Publications Warehouse

    Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy

    2015-01-01

    We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
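    The spatial ingredient of an SCR model like the one above is a detection function that decays with the distance between a sampling location and an animal's activity centre; the half-normal form is the common default. A toy sketch with made-up parameter values (not the estimates from this study):

```python
import math

def halfnormal_detection(d, p0=0.8, sigma=300.0):
    """Half-normal SCR detection function: p(d) = p0 * exp(-d^2 / (2 sigma^2)).
    sigma (metres) sets the spatial scale of an individual's space use."""
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

# Detection probability falls off with distance from the activity centre
p_near = halfnormal_detection(0.0)    # baseline detection probability p0
p_far = halfnormal_detection(900.0)   # three sigma away from the centre
```

Fitting such a model to the spatial pattern of genotype detections is what lets SCR estimate density (and hence total population size) rather than only the number of detected individuals.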

  19. A hybrid phase-space and histogram source model for GPU-based Monte Carlo radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Zavgorodni, Sergei

    2014-12-01

    In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, an RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics.

  20. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  1. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
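    The MAP estimation step described in the records above amounts to minimizing a cost that combines a data-fidelity term from the forward model with a regularizing prior. A generic sketch with a hypothetical linear forward model and a simple quadratic prior, solved by gradient descent (the actual MBIR algorithm uses a TEM-specific forward model and prior):

```python
import numpy as np

def map_estimate(A, y, lam=0.1, step=0.005, iters=2000):
    """Minimize the MAP-style cost ||A x - y||^2 + lam * ||x||^2
    by plain gradient descent; a stand-in for an MBIR iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x  # data term + prior term
        x -= step * grad
    return x

# Hypothetical small linear inverse problem with a known ground truth
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))  # forward model (projection operator stand-in)
x_true = np.arange(1.0, 6.0)
y = A @ x_true                # noiseless measurements
x_hat = map_estimate(A, y)
```

The prior is what suppresses the artifacts that filtered back projection produces from incomplete measurements; in MBIR the quadratic penalty here would be replaced by a model tailored to the reconstruction physics.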

  3. Capturing Knowledge In Order To Optimize The Cutting Process For Polyethylene Pipes Using Knowledge Models

    NASA Astrophysics Data System (ADS)

    Rotaru, Ionela Magdalena

    2015-09-01

    Knowledge management is a powerful instrument. Knowledge-based modelling can be applied in areas ranging from business, industry, and government to education. Companies make efforts to restructure their databases according to knowledge management principles, recognizing in them a guarantee of models that consist only of relevant and sustainable knowledge able to bring value to the company. This paper presents a theoretical model of what optimizing the cutting of polyethylene pipes means, bringing together two important engineering fields, metal cutting and the gas industry, which meet in optimizing the butt fusion welding process for polyethylene pipes, specifically its polyethylene-cutting part. The whole approach is shaped by the principles of knowledge management. The study was made in collaboration with companies operating in the field.

  4. Balance between facilitation and resource competition determines biomass-density relationships in plant populations.

    PubMed

    Chu, Cheng-Jin; Maestre, Fernando T; Xiao, Sa; Weiner, Jacob; Wang, You-Shi; Duan, Zheng-Hu; Wang, Gang

    2008-11-01

    Theories based on competition for resources predict a monotonic negative relationship between population density and individual biomass in plant populations. They do not consider the role of facilitative interactions, which are known to be important in high stress environments. Using an individual-based 'zone-of-influence' model, we investigated the hypothesis that the balance between facilitative and competitive interactions determines biomass-density relationships. We tested model predictions with a field experiment on the clonal grass Elymus nutans in an alpine meadow. In the model, the relationship between mean individual biomass and density shifted from monotonic to humped as abiotic stress increased. The model results were supported by the field experiment, in which the greatest individual and population biomass were found at intermediate densities in a high-stress alpine habitat. Our results show that facilitation can affect biomass-density relationships.

  5. Self-consistent chaos in a mean-field Hamiltonian model of fluids and plasmas

    NASA Astrophysics Data System (ADS)

    del-Castillo-Negrete, D.; Firpo, Marie-Christine

    2002-11-01

    We present a mean-field Hamiltonian model that describes the collective dynamics of marginally stable fluids and plasmas. In plasmas, the model describes the self-consistent evolution of electron holes and clumps in phase space. In fluids, the model describes the dynamics of vortices with negative and positive circulation in shear flows. The mean-field nature of the system makes it a tractable model for studying the dynamics of coupled Hamiltonian systems with many degrees of freedom. Here we focus on the role of self-consistent chaos in the formation and destruction of coherent structures in phase space. Numerical simulations at finite N and in the N → ∞ kinetic limit (where N is the number of particles) show the existence of coherent, rotating dipole states. We approximate the dipole as two macroparticles and show that the N = 2 limit has a family of rotating integrable solutions described by a one-degree-of-freedom nontwist Hamiltonian. The coherence of the dipole is explained in terms of a parametric resonance between the rotation frequency of the macroparticles and the oscillation frequency of the self-consistent mean field. For a class of initial conditions, the mean field exhibits a self-consistent, elliptic-hyperbolic bifurcation that leads to the destruction of the dipole and violent mixing of the phase space.

  6. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
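    Of the two-dimensional integrate and fire models covered by this class, the Izhikevich model is the simplest to write down. A forward-Euler sketch of a single neuron with the standard regular-spiking parameters (a single-cell illustration, not the mean-field reduction itself):

```python
def izhikevich_spike_count(i_ext=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                           dt=0.5, steps=2000):
    """Forward-Euler simulation of one Izhikevich neuron; returns the spike count.
    Dynamics: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u);
    on v >= 30 mV: v <- c, u <- u + d (spike-and-reset)."""
    v, u, spikes = c, b * c, 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:  # spike detected: reset membrane, bump adaptation
            v, u, spikes = c, u + d, spikes + 1
    return spikes

n_spikes = izhikevich_spike_count()  # tonic spiking under constant drive
```

The mean-field derivations in the record above start from populations of such two-dimensional units and close the population density equations under different moment assumptions.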

  7. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  8. Detailed numerical investigation of the Bohm limit in cosmic ray diffusion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein, M.; Shalchi, A., E-mail: m_hussein@physics.umanitoba.ca, E-mail: andreasm4@yahoo.com

    2014-04-10

    A standard model in cosmic ray diffusion theory is the so-called Bohm limit, in which the particle mean free path is assumed to be equal to the Larmor radius. This type of diffusion is often employed to model the propagation and acceleration of energetic particles. However, recent analytical and numerical work has shown that standard Bohm diffusion is not realistic. In the present paper, we perform test-particle simulations to explore particle diffusion in the strong turbulence limit, in which the wave field is much stronger than the mean magnetic field. We show that there is indeed a lower limit of the particle mean free path along the mean field. In this limit, the mean free path is directly proportional to the unperturbed Larmor radius, as in the traditional Bohm limit, but it is reduced by the factor δB/B0, where B0 is the mean field and δB the turbulent field. Although we focus on parallel diffusion, we also explore diffusion across the mean field in the strong turbulence limit.
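    Reading "reduced by the factor δB/B0" as division by that factor, the result above can be stated in a few lines. The particle and field values below are illustrative assumptions, not the paper's simulation parameters:

```python
def larmor_radius(v_perp, b0, charge=1.602e-19, mass=1.673e-27):
    """Unperturbed Larmor radius r_L = m v_perp / (q B0), proton by default."""
    return mass * v_perp / (charge * b0)

def strong_turbulence_mean_free_path(v_perp, b0, db):
    """Traditional Bohm limit: lambda = r_L.
    Strong-turbulence lower limit (assumed reading): lambda = r_L / (db / b0),
    which is smaller than r_L whenever db > b0."""
    return larmor_radius(v_perp, b0) * (b0 / db)

# Strong turbulence: turbulent field ten times the mean field
lam = strong_turbulence_mean_free_path(v_perp=1.0e6, b0=5.0e-9, db=5.0e-8)
```

With δB = 10 B0 the parallel mean free path comes out an order of magnitude below the classical Bohm value, matching the qualitative statement in the abstract.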

  9. Optimization of Geothermal Well Placement under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian

    2017-04-01

    Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainties in geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance, and it entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated, and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty.
The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling work flow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.
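    The polynomial-proxy step can be illustrated with a least-squares quadratic response surface fitted to the mean response across geological scenarios, then maximized analytically. A toy sketch with synthetic responses (not the Watt field data):

```python
import numpy as np

def fit_quadratic_proxy(x, y):
    """Least-squares quadratic proxy y ~ c2*x^2 + c1*x + c0 for one design
    parameter; np.polyfit returns coefficients highest degree first."""
    return np.polyfit(x, y, deg=2)

# Toy heat-extraction responses for candidate well positions,
# one row per geological scenario (illustrative numbers)
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # normalized well position
scenarios = np.array([
    [2.0, 3.1, 3.6, 3.2, 2.1],
    [1.8, 2.9, 3.4, 3.0, 1.9],
    [2.2, 3.3, 3.8, 3.4, 2.3],
])
mean_response = scenarios.mean(axis=0)     # average over geological uncertainty
coeffs = fit_quadratic_proxy(x, mean_response)
x_opt = -coeffs[1] / (2.0 * coeffs[0])     # vertex of the fitted parabola
```

In the actual workflow the proxy is multivariate (design parameters plus geological factors from the experimental design), but the principle of optimizing the proxy's mean response is the same.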

  10. Research to practice in addiction treatment: key terms and a field-driven model of technology transfer.

    PubMed

    2011-09-01

    The transfer of new technologies (e.g., evidence-based practices) into substance abuse treatment organizations often occurs long after they have been developed and shown to be effective. Transfer is slowed, in part, due to a lack of clear understanding about all that is needed to achieve full implementation of these technologies. Such misunderstanding is exacerbated by inconsistent terminology and overlapping models of an innovation, including its development and validation, dissemination to the public, and implementation or use in the field. For this reason, a workgroup of the Addiction Technology Transfer Center (ATTC) Network developed a field-driven conceptual model of the innovation process that more precisely defines relevant terms and concepts and integrates them into a comprehensive taxonomy. The proposed definitions and conceptual framework will allow for improved understanding and consensus regarding the distinct meaning and conceptual relationships between dimensions of the technology transfer process and accelerate the use of evidence-based practices. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Eliminating large-scale magnetospheric current perturbations from long-term geomagnetic observatory data

    NASA Astrophysics Data System (ADS)

    Pick, L.; Korte, M. C.

    2016-12-01

    Magnetospheric currents generate the largest external contribution to the geomagnetic field observed on Earth. Of particular importance is the solar-driven effect of the ring current, whose fluctuations overlap with internal field secular variation (SV). Recent core field models thus co-estimate this effect, but their validity is limited to the last 15 years for which satellite data are available. We aim at eliminating magnetospheric modulation from the whole geomagnetic observatory record from 1840 onwards in order to obtain clean long-term SV that will enhance core flow and geodynamo studies. The ring current effect takes the form of a southward directed external dipole field aligned with the geomagnetic main field axis. Commonly the Dst index (Sugiura, 1964) is used to parametrize temporal variations of this dipole term. Because of baseline instabilities, the alternative RC index was derived from hourly means of 21 stations spanning 1997-2013 (Olsen et al., 2014). We follow their methodology based on annual means from a reduced station set spanning 1960-2010. The absolute level of the variation so determined is "hidden" in the static lithospheric offsets taken as quiet-time means. We tackle this issue by subtracting crustal biases independently calculated for each observatory from an inversion of combined Swarm satellite and observatory data. Our index reproduces the original annual RC index variability with a reasonable offset of -10 nT in the reference time window 2000-2010. Prior to that, it depicts a long-term trend consistent with the external dipole term from COV-OBS (Gillet et al., 2013), the only long-term field model available for comparison. Sharper variations that are better correlated with the Ap index than the COV-OBS solution lend support to the usefulness of our initial modeling approach. Following a detailed sensitivity study of station choice, future work will focus on increasing the resolution from annual to hourly means.

  12. Field, laboratory and numerical approaches to studying flow through mangrove pneumatophores

    NASA Astrophysics Data System (ADS)

    Chua, V. P.

    2014-12-01

    The circulation of water in riverine mangrove swamps is expected to be influenced by mangrove roots, which in turn affect the transport of nutrients, pollutants, and sediments in these systems. Field studies were carried out in mangrove areas along the coastline of Singapore where the pneumatophore-bearing species Avicennia marina and Sonneratia alba are found. Geometrical properties such as height, diameter, and spatial density of the mangrove roots were assessed using photogrammetric methods. Samples of these roots were harvested from mangrove swamps, and their material properties, such as bending strength and Young's modulus, were determined in the laboratory. It was found that the pneumatophores under hydrodynamic loadings in a mangrove environment can be regarded as fairly rigid. Artificial root models of pneumatophores were fabricated by downscaling from field observations of mangroves. Flume experiments were performed, and measurements of mean flow velocities, Reynolds stress, and turbulent kinetic energy were made. The boundary layer formed over the vegetation patch is fully developed after x = 6 m, with a linear mean velocity profile. High shear stresses and turbulent kinetic energy were observed at the interface between the top portion of the roots and the upper flow. The experimental data were employed to calibrate and validate three-dimensional simulations of flow through pneumatophores. The simulations were performed with the Delft3D-FLOW model, where the vegetation effect is introduced by adding a depth-distributed resistance force and modifying the k-ɛ turbulence model. The model-predicted profiles for mean velocity, turbulent kinetic energy, and concentration were compared with experimental data. The model calibration is performed by adjusting the horizontal and vertical eddy viscosities and diffusivities.
A skill assessment of the model is performed using statistical measures that include the Pearson correlation coefficient (r), the mean absolute error (MAE), and the root-mean-squared error (RMSE).

  13. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast depend crucially on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
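    The statement that the optimal ensemble weights depend on each member's mean square error is commonly realized as inverse-MSE weighting, which minimizes the ensemble error for independent, unbiased member forecasts. A minimal sketch with illustrative MSE values (not the report's estimates):

```python
import numpy as np

def inverse_mse_weights(mse):
    """Weights w_i proportional to 1/MSE_i, normalized to sum to one.
    This choice minimizes the combined MSE for independent, unbiased members."""
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

# Three member forecasts of the same precipitation anomaly (toy values)
forecasts = np.array([1.2, 0.8, 1.5])
mse = np.array([0.5, 1.0, 2.0])  # illustrative per-member error estimates
w = inverse_mse_weights(mse)
ensemble = float(w @ forecasts)  # error-weighted ensemble forecast
```

Members with smaller estimated error dominate the combination, which is why a reliable per-forecast MSE estimate (here supplied by the spectral method) is central to the scheme.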

  14. Competing interactions in semiconductor quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van den Berg, R.; Brandino, G. P.; El Araby, O.

    In this study, we introduce an integrability-based method enabling the study of semiconductor quantum dot models incorporating both the full hyperfine interaction as well as a mean-field treatment of dipole-dipole interactions in the nuclear spin bath. By performing free induction decay and spin echo simulations we characterize the combined effect of both types of interactions on the decoherence of the electron spin, for external fields ranging from low to high values. We show that for spin echo simulations the hyperfine interaction is the dominant source of decoherence at short times for low fields, and competes with the dipole-dipole interactions at longer times. On the contrary, at high fields the main source of decay is due to the dipole-dipole interactions. In the latter regime an asymmetry in the echo is observed. Furthermore, the non-decaying fraction previously observed for zero field free induction decay simulations in quantum dots with only hyperfine interactions is destroyed for longer times by the mean-field treatment of the dipolar interactions.

  15. Competing interactions in semiconductor quantum dots

    DOE PAGES

    van den Berg, R.; Brandino, G. P.; El Araby, O.; ...

    2014-10-14

    In this study, we introduce an integrability-based method enabling the study of semiconductor quantum dot models incorporating both the full hyperfine interaction as well as a mean-field treatment of dipole-dipole interactions in the nuclear spin bath. By performing free induction decay and spin echo simulations we characterize the combined effect of both types of interactions on the decoherence of the electron spin, for external fields ranging from low to high values. We show that for spin echo simulations the hyperfine interaction is the dominant source of decoherence at short times for low fields, and competes with the dipole-dipole interactions at longer times. On the contrary, at high fields the main source of decay is due to the dipole-dipole interactions. In the latter regime an asymmetry in the echo is observed. Furthermore, the non-decaying fraction previously observed for zero field free induction decay simulations in quantum dots with only hyperfine interactions is destroyed for longer times by the mean-field treatment of the dipolar interactions.

  16. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    NASA Astrophysics Data System (ADS)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

    The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and, in particular, GOCE in ocean modelling applications. The project goal is improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data], and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT, including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets, and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is a problem not only for the gravity field model itself but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets, and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with focus on the geodetic part.

  17. Comparative study of the requantization of the time-dependent mean field for the dynamics of nuclear pairing

    NASA Astrophysics Data System (ADS)

    Ni, Fang; Nakatsukasa, Takashi

    2018-04-01

    To describe quantal collective phenomena, it is useful to requantize the time-dependent mean-field dynamics. We study the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory for the two-level pairing Hamiltonian and compare the results of different quantization methods. The method that constructs microscopic wave functions from TDHFB trajectories fulfilling the Einstein-Brillouin-Keller quantization condition turns out to be the most accurate; it is based on the stationary-phase approximation to the path integral. We also examine the performance of the collective model, which assumes that the pairing gap parameter is the collective coordinate. The applicability of the collective model is limited for nuclear pairing with a small number of single-particle levels, because the pairing gap parameter represents only half of the pairing collective space.

  18. Quantum critical point revisited by dynamical mean-field theory

    NASA Astrophysics Data System (ADS)

    Xu, Wenhu; Kotliar, Gabriel; Tsvelik, Alexei M.

    2017-03-01

    Dynamical mean-field theory is used to study the quantum critical point (QCP) in the doped Hubbard model on a square lattice. The QCP is characterized by a universal scaling form of the self-energy and a spin density wave instability at an incommensurate wave vector. The scaling form unifies the low-energy kink and the high-energy waterfall feature in the spectral function, while the spin dynamics includes both the critical incommensurate and high-energy antiferromagnetic paramagnons. We use the frequency-dependent four-point correlation function of spin operators to calculate the momentum-dependent correction to the electron self-energy. By comparing with the calculations based on the spin-fermion model, our results indicate the frequency dependence of the quasiparticle-paramagnon vertices is an important factor to capture the momentum dependence in quasiparticle scattering.

  19. Clear-Sky Longwave Irradiance at the Earth's Surface--Evaluation of Climate Models.

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.

    2001-04-01

    An evaluation of the clear-sky longwave irradiance at the earth's surface (LI) simulated in climate models and in satellite-based global datasets is presented. Algorithm-based estimates of LI, derived from global observations of column water vapor and surface (or screen air) temperature, serve as proxy "observations." All datasets capture the broad zonal variation and seasonal behavior in LI, mainly because the behavior in column water vapor and temperature is reproduced well. Over oceans, the dependence of annual and monthly mean irradiance upon sea surface temperature (SST) closely resembles the observed behavior of column water with SST. In particular, the observed hemispheric difference in the summer minus winter column water dependence on SST is found in all models, though with varying seasonal amplitudes. The analogous behavior in the summer minus winter LI is seen in all datasets. Over land, all models have a more highly scattered dependence of LI upon surface temperature compared with the situation over the oceans. This is related to a much weaker dependence of model column water on the screen-air temperature at both monthly and annual timescales, as observed. The ability of climate models to simulate realistic LI fields depends as much on the quality of model water vapor and temperature fields as on the quality of the longwave radiation codes. In a comparison of models with observations, root-mean-square gridpoint differences in mean monthly column water and temperature are 4-6 mm (5-8 mm) and 0.5-2 K (3-4 K), respectively, over large regions of ocean (land), consistent with the intermodel differences in LI of 5-13 W m^-2 (15-28 W m^-2).

  20. Goce and Its Role in Combined Global High Resolution Gravity Field Determination

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2013-12-01

    Combined high-resolution gravity field models serve as a mandatory basis to describe static and dynamic processes in the system Earth. Ocean dynamics can be modeled with reference to a highly accurate geoid as reference surface, and solid-earth processes are driven by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfill the various requirements concerning resolution and accuracy, every kind of gravity field information, i.e., satellite as well as terrestrial and altimetric gravity field observations, has to be included in one combination process. A key role is reserved here for GOCE observations, which contribute their optimal signal content in the long- to medium-wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. For all data sets, normal equations are set up separately, weighted relative to each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we modified our procedure to include GOCE data optimally in the combination. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, which includes the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated by independent GPS leveling data in selected regions as well as by computation of the mean dynamic topography over the oceans.
Further, we analyze the statistical error estimates derived from full covariance propagation and compare them with the absolute validation against independent data sets.
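
The normal-equation combination described above can be sketched in a few lines: each data set contributes N_i = A_i^T A_i and b_i = A_i^T y_i, and the combined solution solves their weighted sum. Everything below is synthetic; in particular, the relative weights here are simply inverse noise variances, whereas in the actual combination they are estimated as part of the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "data sets" observing the same p parameters with different noise.
m, p = 40, 5
x_true = rng.normal(0.0, 1.0, p)

def normal_equations(noise_sd):
    """Set up N = A^T A and b = A^T y for one simulated observation group."""
    A = rng.normal(0.0, 1.0, (m, p))
    y = A @ x_true + rng.normal(0.0, noise_sd, m)
    return A.T @ A, A.T @ y

(N1, b1), (N2, b2) = normal_equations(0.1), normal_equations(0.5)

# Combine with relative weights (here: inverse noise variances) and solve.
w1, w2 = 1.0 / 0.1**2, 1.0 / 0.5**2
x_hat = np.linalg.solve(w1 * N1 + w2 * N2, w1 * b1 + w2 * b2)
err = float(np.max(np.abs(x_hat - x_true)))
```

Setting up the groups as normal equations rather than raw observations is what makes the relative weighting a cheap post-hoc step: the design matrices never need to be held jointly in memory.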

  1. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impacts on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historical data from 1880 to 2001, our model yields higher correlations than other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably reduces the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but may also have the potential to improve future projections.
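
Monte Carlo cross-validation of the kind used above for parameter derivation can be sketched as repeated random train/test splits; the linear toy model and all variable names below are illustrative and are not the authors' reconstruction model.

```python
import numpy as np

def monte_carlo_cv(x, y, fit, predict, n_splits=100, test_frac=0.3, seed=0):
    """Mean test RMSE over repeated random train/test splits."""
    rng = np.random.default_rng(seed)
    n_test = int(len(x) * test_frac)
    rmses = []
    for _ in range(n_splits):
        idx = rng.permutation(len(x))
        test, train = idx[:n_test], idx[n_test:]
        params = fit(x[train], y[train])
        resid = y[test] - predict(x[test], params)
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(rmses))

# Toy linear relation standing in for the temperature/sea-level link.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 200)

fit = lambda xs, ys: np.polyfit(xs, ys, 1)   # slope and intercept
predict = lambda xs, p: np.polyval(p, xs)
rmse = monte_carlo_cv(x, y, fit, predict)
```

Averaging the score over many random splits, rather than one fixed split, is what damps the sensitivity to initial conditions that the abstract highlights.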

  2. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in the case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing the direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
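
A mixture of directional pdfs on the circle, of the kind the glyphs summarize, can be sketched with two von Mises components; the weights, mean directions and concentrations below are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.stats import vonmises

# Two-component mixture: weights w, mean directions mu, concentrations kappa.
w = np.array([0.6, 0.4])
mu = np.array([0.0, np.pi])     # two principal modes (opposite directions)
kappa = np.array([4.0, 2.0])    # larger kappa -> narrower lobe

def mixture_pdf(theta):
    return sum(wi * vonmises.pdf(theta, ki, loc=mi)
               for wi, mi, ki in zip(w, mu, kappa))

theta = np.linspace(-np.pi, np.pi, 361)
pdf = mixture_pdf(theta)

# Trapezoidal check: the mixture integrates to one over the circle.
area = float(np.sum((pdf[:-1] + pdf[1:]) / 2.0) * (theta[1] - theta[0]))
```

Each (w, mu, kappa) triple maps directly onto one glyph lobe: strength, direction and spread, which is why a handful of mixture parameters suffices to summarize a whole ensemble of directions.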

  3. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. 
This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function, but most of these criteria are more or less ad hoc, based on what looks good to the eye or relating only to the data at hand.

  4. The Path of New Information Technology Affecting Educational Equality in the New Digital Divide--Based on Information System Success Model

    ERIC Educational Resources Information Center

    Zheng, Qian; Liang, Chang-Yong

    2017-01-01

    New information technology (new IT) plays an increasingly important role in the field of education, which greatly enriches the teaching means and promotes the sharing of education resources. However, because of the New Digital Divide existing, the impact of new IT on educational equality has yet to be discussed. Based on Information System Success…

  5. Selection of optimum median-filter-based ambiguity removal algorithm parameters for NSCAT. [NASA scatterometer

    NASA Technical Reports Server (NTRS)

    Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.

    1989-01-01

    The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-filter-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
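
A median-filter ambiguity removal pass of the general kind described can be sketched as follows: each cell's selection is iteratively replaced by whichever ambiguity lies closest to the component-wise median of the neighboring selections. The grid size, neighborhood, noise level and initialization below are illustrative, not NSCAT's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# True wind field: smooth eastward flow on an 8x8 grid, components (u, v).
ny, nx = 8, 8
true = np.zeros((ny, nx, 2))
true[..., 0] = 8.0                                  # u = 8 m/s, v = 0

# Two ambiguities per cell: the truth plus noise, and its 180-degree alias.
amb = np.stack([true + rng.normal(0.0, 0.5, true.shape), -true], axis=2)

# Initialize with a deliberately wrong choice in a 3x3 block of cells.
sel = amb[:, :, 0].copy()
sel[2:5, 2:5] = amb[2:5, 2:5, 1]

for _ in range(5):                                  # a few filter passes
    new = sel.copy()
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - 1), min(ny, j + 2)
            i0, i1 = max(0, i - 1), min(nx, i + 2)
            med = np.median(sel[j0:j1, i0:i1].reshape(-1, 2), axis=0)
            d = np.linalg.norm(amb[j, i] - med, axis=1)
            new[j, i] = amb[j, i, np.argmin(d)]     # closest ambiguity wins
    sel = new

recovered = float(np.mean(sel[..., 0] > 0))         # fraction pointing east
```

Successive passes shrink the mis-selected block from its edges inward, which is why such filters converge quickly on spatially coherent wind fields.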

  6. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
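
A minimal sketch of the time-continuous rate dynamics such a framework must integrate is a forward-Euler step of tau dr/dt = -r + f(Wr + I); the network size, weights and drive below are invented, and the simulator's actual scheme (exact integration, delays, waveform relaxation) is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rate network: tau dr/dt = -r + f(W r + I), with a sigmoidal gain f.
n = 20
W = rng.normal(0.0, 0.2 / np.sqrt(n), (n, n))   # weak random coupling
I = rng.uniform(0.5, 1.0, n)                    # constant external drive
tau, dt = 10.0, 0.1                             # time constant and step (ms)
f = np.tanh

r = np.zeros(n)
for _ in range(5000):                           # 500 ms of simulated time
    r = r + dt / tau * (-r + f(W @ r + I))

# With weak coupling the network settles to a stable fixed point.
residual = float(np.max(np.abs(-r + f(W @ r + I))))
```

The instantaneous coupling W @ r inside the update is exactly what forces time-continuous communication between units, in contrast to the discrete spike events the rest of such a simulator exchanges.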

  7. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    PubMed Central

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730

  8. Multiple scattering of waves in random media: Application to the study of the city-site effect in Mexico City area.

    NASA Astrophysics Data System (ADS)

    Ishizawa, O. A.; Clouteau, D.

    2007-12-01

    The long duration, amplification and spatial variability of the seismic records registered in Mexico City during the September 1985 earthquake cannot be explained by the soil velocity model alone. We try to explain these phenomena by studying the extent of the effect of the wave fields diffracted by buildings during an earthquake. The main question is whether the presence of a large number of buildings can significantly modify the seismic wave field. We are interested in the interaction between the incident wave field propagating in a stratified half-space and a large number of structures at the free surface, i.e., the coupled city-site effect. We study and characterize the seismic wave propagation regimes in a city using the theory of wave propagation in random media. In the coupled city-site system, the buildings are modeled as resonant scatterers uniformly distributed at the surface of a deterministic, horizontally layered elastic half-space representing the soil. Based on the mean-field and field-correlation equations, we build a theoretical model which takes into account the multiple scattering of seismic waves and allows us to describe the behavior of the coupled city-site system in a simple and rapid way. The results obtained for the configurationally averaged field quantities are validated by means of 3D results for the seismic response of a deterministic model. The numerical simulations of this model are computed with the MISS3D code, based on classical Soil-Structure Interaction techniques and on a variational coupling between Boundary Integral Equations for a layered soil and a modal Finite Element approach for the buildings. This work proposes a detailed numerical and theoretical analysis of the city-site interaction (CSI) in the Mexico City area. 
The principal parameters in the study of the CSI are the distribution of the buildings' resonant frequencies, the soil characteristics of the site, the urban density and position of the buildings in the city, and the type of incident wave. The main results of the theoretical and numerical models allow us to characterize the seismic movement in urban areas.

  9. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. 
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. 
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
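
The grid-based density computation discussed above, D̂ = N̂/Â with the effective area widened by a boundary strip derived from movement distances, reduces to a one-line formula; the numbers below are hypothetical, not values from the study.

```python
def grid_density(n_hat, grid_side_m, boundary_strip_m):
    """D = N / A: abundance over effective area (grid plus boundary strip)."""
    side = grid_side_m + 2.0 * boundary_strip_m   # strip added on every edge
    area_ha = side * side / 10_000.0              # m^2 -> hectares
    return n_hat / area_ha

# Hypothetical numbers: 40 animals, a 100 m square grid, a 20 m strip.
d_naive = grid_density(40, 100.0, 0.0)    # ignores edge effect: 40 / 1.00 ha
d_mmdm = grid_density(40, 100.0, 20.0)    # widened area: 40 / 1.96 ha
```

The example makes the sensitivity concrete: a 20 m strip around a 100 m grid nearly halves the density estimate, which is why the choice of area adjustment dominates grid-based results.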

  10. Consistent View of Protein Fluctuations from All-Atom Molecular Dynamics and Coarse-Grained Dynamics with Knowledge-Based Force-Field.

    PubMed

    Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-01-08

    It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained protein model - the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.

  11. The physics of chromatin silencing: Bi-stability and front propagation

    NASA Astrophysics Data System (ADS)

    Sedighi, Mohammad

    A mean-field dynamical model of chromatin silencing in budding yeast is provided, and the conditions giving rise to two states, one silenced and one un-silenced, are studied. Based on these conditions, the space of control parameters is divided into two distinct regions of mono-stable and bi-stable solutions (the bifurcation diagram). Then, considering both the discrete and continuous versions of the model, the formation of a stable boundary between the silenced and un-silenced regions on DNA is investigated, yielding a richer phase diagram. The dynamics of the boundary is also studied under different conditions. Consequently, assuming negative feedback due to possible depletion of silencing proteins, the model explains a paradoxical epigenetic behavior of yeast that occurs under certain mutations. A stochastic treatment of the model is also considered, to verify the results of the mean-field approximation and to understand the role of intrinsic noise at the single-cell level. This model could be used as a general guide to discuss chromatin silencing in many organisms.
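
Bi-stability of the kind analyzed arises generically from cooperative positive feedback balanced against removal; the sketch below uses a generic Hill-type rate law with invented constants, not the paper's specific silencing model.

```python
# Generic bistable mean-field dynamics with cooperative positive feedback:
#   ds/dt = a * s^h / (K^h + s^h) - b * s,   s = local silencing level.
a, b, K, h = 1.0, 0.5, 0.5, 4

def ds_dt(s):
    return a * s**h / (K**h + s**h) - b * s

def relax(s0, dt=0.01, steps=5000):
    """Forward-Euler integration to the nearest stable fixed point."""
    s = s0
    for _ in range(steps):
        s += dt * ds_dt(s)
    return s

# Two initial conditions end up in two different stable states.
low = relax(0.05)    # decays toward the un-silenced state s = 0
high = relax(1.5)    # saturates near the silenced state s ~ a / b
```

The coexistence of the two attractors for one parameter set is the bi-stable region of the bifurcation diagram; tuning a or b far enough collapses it to a single state.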

  12. Accessing and constructing driving data to develop fuel consumption forecast model

    NASA Astrophysics Data System (ADS)

    Yamashita, Rei-Jo; Yao, Hsiu-Hsen; Hung, Shih-Wei; Hackman, Acquah

    2018-02-01

    In this study, we develop forecasting models to estimate fuel consumption based on driving behavior, in which vehicles and routes are known. First, driving data are collected via telematics and OBDII. Then, a driving fuel consumption formula is used to calculate the estimated fuel consumption, and driving behavior indicators (DBIs) are generated for analysis. Based on statistical analysis methods, the driving fuel consumption forecasting model is constructed. Field experiments were conducted in this study to generate hundreds of driving behavior indicators. Following a data mining approach, Pearson correlation coefficient analysis is used to filter the DBIs highly correlated with fuel consumption; only these highly correlated DBIs are used in the model. The DBIs fall into four classes: a speed class, an acceleration class, a left/right/U-turn class, and an "other" category. We then use K-means cluster analysis to group the drivers and routes into classes. Finally, more than 12 aggregate models (AMs) are generated from the highly correlated DBIs using neural network models and regression analysis. The Mean Absolute Percentage Error (MAPE) is used to evaluate the developed AMs; the best MAPE value among these AMs is below 5%.
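
Two of the quantitative steps above, the Pearson-correlation filter over candidate DBIs and the MAPE score for the aggregate models, can be sketched on synthetic data; all names and thresholds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trips: fuel driven by one speed-class DBI; a second DBI is noise.
n = 200
dbi_speed = rng.uniform(20.0, 90.0, n)
dbi_noise = rng.uniform(0.0, 1.0, n)
fuel = 0.08 * dbi_speed + rng.normal(0.0, 0.3, n)

def pearson_r(u, v):
    u, v = u - u.mean(), v - v.mean()
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

# Keep only DBIs whose |r| with fuel exceeds an (illustrative) threshold.
candidates = {"speed": dbi_speed, "noise": dbi_noise}
kept = [k for k, v in candidates.items() if abs(pearson_r(v, fuel)) > 0.5]

# Score a simple linear model with the Mean Absolute Percentage Error.
coef = np.polyfit(dbi_speed, fuel, 1)
pred = np.polyval(coef, dbi_speed)
mape = float(np.mean(np.abs((fuel - pred) / fuel)) * 100.0)
```

Filtering before fitting keeps the aggregate models small, and MAPE normalizes each trip's error by its actual consumption, so cheap and expensive trips count equally.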

  13. Faster is more different: mean-field dynamics of innovation diffusion.

    PubMed

    Baek, Seung Ki; Durang, Xavier; Kim, Mina

    2013-01-01

    Based on a recent model of paradigm shifts by Bornholdt et al., we studied mean-field opinion dynamics in an infinite population where an infinite number of ideas compete simultaneously with their values publicly known. We found that a highly innovative society is not characterized by heavy concentration in highly valued ideas: Rather, ideas are more broadly distributed in a more innovative society with faster progress, provided that the rate of adoption is constant, which suggests a positive correlation between innovation and technological disparity. Furthermore, the distribution is generally skewed in such a way that the fraction of innovators is substantially smaller than has been believed in conventional innovation-diffusion theory based on normality. Thus, the typical adoption pattern is predicted to be asymmetric with slow saturation in the ideal situation, which is compared with empirical data sets.

  14. Performance of a clinical gridded electron gun in magnetic fields: Implications for MRI-linac therapy.

    PubMed

    Whelan, Brendan; Holloway, Lois; Constantin, Dragos; Oborn, Brad; Bazalova-Carter, Magdalena; Fahrig, Rebecca; Keall, Paul

    2016-11-01

    MRI-linac therapy is a rapidly growing field, and requires that conventional linear accelerators are operated within the fringe field of MRI magnets. One of the most sensitive accelerator components is the electron gun, which serves as the source of the beam. The purpose of this work was to develop a validated finite element model (FEM) of a clinical triode (or gridded) electron gun, based on accurate geometric and electrical measurements, and to characterize the performance of this gun in magnetic fields. The geometry of a Varian electron gun was measured using 3D laser scanning and digital calipers. The electric potentials and emission current of these guns were measured directly from six dose-matched TrueBeam linacs for the 6X, 10X, and 15X modes of operation. Based on these measurements, a finite element model (FEM) of the gun was developed using the commercial software Opera/SCALA. The performance of the FEM model in magnetic fields was characterized using parallel fields ranging from 0 to 200 G in the in-line direction, and 0-35 G in the perpendicular direction. The FEM model matched the average measured emission current to within 5% across all three modes of operation. Different high voltage settings are used for the different modes; the 6X, 10X, and 15X modes have an average high voltage setting of 15, 10, and 11 kV. Due to these differences, different operating modes show different sensitivities in magnetic fields. For in-line fields, the first current loss occurs at 40, 20, and 30 G for each mode. This is a much greater sensitivity than has previously been observed. For perpendicular fields, first beam loss occurred at 8, 5, and 5 G and total beam loss at 27, 22, and 20 G. A validated FEM model of a clinical triode electron gun has been developed based on accurate geometric and electrical measurements. Three different operating modes were simulated, with a maximum mean error of 5%. 
This gun shows greater sensitivity to in-line magnetic fields than previously presented models, and the different operating modes show different sensitivities.

  15. Performance of a clinical gridded electron gun in magnetic fields: Implications for MRI-linac therapy

    PubMed Central

    Whelan, Brendan; Holloway, Lois; Constantin, Dragos; Oborn, Brad; Bazalova-Carter, Magdalena; Fahrig, Rebecca; Keall, Paul

    2016-01-01

    Purpose: MRI-linac therapy is a rapidly growing field, and requires that conventional linear accelerators are operated within the fringe field of MRI magnets. One of the most sensitive accelerator components is the electron gun, which serves as the source of the beam. The purpose of this work was to develop a validated finite element model (FEM) of a clinical triode (or gridded) electron gun, based on accurate geometric and electrical measurements, and to characterize the performance of this gun in magnetic fields. Methods: The geometry of a Varian electron gun was measured using 3D laser scanning and digital calipers. The electric potentials and emission current of these guns were measured directly from six dose-matched TrueBeam linacs for the 6X, 10X, and 15X modes of operation. Based on these measurements, a finite element model (FEM) of the gun was developed using the commercial software Opera/SCALA. The performance of the FEM model in magnetic fields was characterized using parallel fields ranging from 0 to 200 G in the in-line direction, and 0–35 G in the perpendicular direction. Results: The FEM model matched the average measured emission current to within 5% across all three modes of operation. Different high voltage settings are used for the different modes; the 6X, 10X, and 15X modes have an average high voltage setting of 15, 10, and 11 kV. Due to these differences, different operating modes show different sensitivities in magnetic fields. For in-line fields, the first current loss occurs at 40, 20, and 30 G for each mode. This is a much greater sensitivity than has previously been observed. For perpendicular fields, first beam loss occurred at 8, 5, and 5 G and total beam loss at 27, 22, and 20 G. Conclusions: A validated FEM model of a clinical triode electron gun has been developed based on accurate geometric and electrical measurements. Three different operating modes were simulated, with a maximum mean error of 5%. 
This gun shows greater sensitivity to in-line magnetic fields than previously presented models, and the different operating modes show different sensitivities. PMID:27806583

  16. The thermoelectric properties of strongly correlated systems

    NASA Astrophysics Data System (ADS)

    Cai, Jianwei

    Strongly correlated systems are among the most interesting and complicated systems in physics. Large Seebeck coefficients are found in some of these systems, which highlights their potential for thermoelectric applications. In this thesis, we study the thermoelectric properties of strongly correlated systems with various methods. We derived analytic formulas for the resistivity and Seebeck coefficient of the periodic Anderson model based on dynamical mean-field theory. These formulas were possible because the self-energy of the single-impurity Anderson model could be given by an analytic ansatz, derived from experiments and numerical calculations, rather than requiring complicated numerical calculations. The results show good agreement with experimental data on rare-earth compounds in a restricted temperature range, and the formulas help to explain the properties of the periodic Anderson model. Based on the study of rare-earth compounds, we proposed a design for a thermoelectric metamaterial. This man-made material consists of quantum dots linked by conducting linkers; the quantum dots play the role of rare-earth atoms with a heavier effective mass. We set up a model similar to the periodic Anderson model for this new material and studied it with perturbation theory for the energy bands. Dynamical mean-field theory, with the numerical renormalization group as the impurity solver, was used to study the transport properties. With these studies, we confirmed the improved thermoelectric properties of the designed material.
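
    In linear response, the Seebeck coefficient described above follows from energy integrals of a transport function weighted by the Fermi window -df/dE. The following is a minimal numerical sketch, not the thesis's DMFT calculation: the Gaussian transport function and all parameter values are illustrative assumptions, in units with k_B = e = 1.

```python
import numpy as np

def seebeck(mu, T, sigma_e, e_grid):
    """Seebeck coefficient from a model transport function sigma(E) via the
    standard linear-response integrals K0, K1 (units: k_B = e = 1)."""
    x = (e_grid - mu) / T
    # -df/dE for the Fermi function, written in a numerically stable form
    mdf = 0.25 / (T * np.cosh(x / 2.0) ** 2)
    K0 = np.trapz(sigma_e * mdf, e_grid)
    K1 = np.trapz(sigma_e * (e_grid - mu) * mdf, e_grid)
    return -K1 / (T * K0)

E = np.linspace(-10.0, 10.0, 4001)
# toy transport function, particle-hole asymmetric (weight above the chemical
# potential) -- asymmetry is what produces a nonzero Seebeck coefficient
sigma = np.exp(-((E - 1.0) ** 2))
S = seebeck(mu=0.0, T=0.5, sigma_e=sigma, e_grid=E)
```

A particle-hole symmetric transport function gives S = 0; shifting spectral weight above the chemical potential gives an electron-like (negative) S.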

  17. Analysis of key technologies in geomagnetic navigation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhao, Yan

    2008-10-01

    Because of the high cost and error accumulation of precise Inertial Navigation Systems (INS) and the vulnerability of Global Navigation Satellite Systems (GNSS), geomagnetic navigation, a passive autonomous navigation method, has attracted renewed attention. The geomagnetic field is a natural spatial physical field and a function of position and time in near-Earth space. Navigation technology based on the geomagnetic field is being researched for a wide range of commercial and military applications. This paper presents the main features and the state of the art of Geomagnetic Navigation Systems (GMNS). Geomagnetic field models and reference maps are described. Obtaining, modeling, and updating accurate magnetic anomaly field information is an important step toward high-precision geomagnetic navigation. In addition, the errors of geomagnetic measurements made with strapdown magnetometers are analyzed; precise geomagnetic data are obtained by means of magnetometer calibration and compensation for the vehicle's magnetic field. Given the measurement data and a reference map or model of the geomagnetic field, the vehicle's position and attitude can be obtained using a matching algorithm or a state-estimation method. The paper closes with an outlook on the near-term development of geomagnetic navigation.
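
    The map-matching step mentioned above can be illustrated with a toy one-dimensional example: slide a noisy magnetometer sequence along a reference profile and pick the offset that minimizes the mean squared residual. Everything here (the synthetic profile, noise level, window length) is an illustrative assumption, not an algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic 1-D geomagnetic reference profile along a known track (values in nT,
# purely hypothetical): a slow and a fast spatial variation on a large baseline
x_map = np.arange(1000)
ref = 48000.0 + 50.0 * np.sin(x_map / 40.0) + 20.0 * np.sin(x_map / 7.0)

# noisy magnetometer sequence recorded starting at an unknown position
true_start = 321
track = ref[true_start:true_start + 64] + rng.normal(0.0, 2.0, 64)

def match(ref, seq):
    """Brute-force matching: slide the measured window over the reference map
    and return the offset with the smallest mean squared residual."""
    n = len(seq)
    costs = [np.mean((ref[i:i + n] - seq) ** 2) for i in range(len(ref) - n)]
    return int(np.argmin(costs))

est = match(ref, track)
```

In practice the matching is two-dimensional and combined with state estimation (e.g. a Kalman filter), but the residual-minimization idea is the same.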

  18. Adaptive two-regime method: Application to front propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Flegg, Mark, E-mail: mark.flegg@monash.edu

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc. Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of the regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation and spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system whose mean-field model is given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies of stochastic effects on the Fisher wave propagation speed have focused on lattice-based models; there has been limited progress using off-lattice (Brownian dynamics) models, which suffer from their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM reproduces the Fisher wave results of purely off-lattice models at a fraction of the computational cost. An error analysis of the ATRM is also presented for a morphogen gradient model.
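
    The mean-field limit referred to above is the Fisher-KPP equation u_t = D u_xx + r u(1 - u), whose front travels at the minimal speed c = 2*sqrt(D*r) for compactly supported initial data. A small finite-difference sketch of that deterministic limit (grid sizes and parameters are illustrative assumptions; this is not the ATRM itself):

```python
import numpy as np

# Fisher-KPP: u_t = D u_xx + r u (1 - u); expected front speed c = 2*sqrt(D*r) = 2
D, r = 1.0, 1.0
dx, dt = 0.5, 0.05          # dt*D/dx^2 = 0.2 < 0.5, so explicit Euler is stable
x = np.arange(0.0, 600.0, dx)
u = np.where(x < 10.0, 1.0, 0.0)   # step initial condition

def step(u):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0          # crude zero-flux treatment of the ends
    return u + dt * (D * lap + r * u * (1.0 - u))

def front(u):
    """Position where u first drops below 0.5 (the front location)."""
    return x[np.argmax(u < 0.5)]

for _ in range(2000):               # integrate to t = 100
    u = step(u)
p1 = front(u)
for _ in range(2000):               # integrate to t = 200
    u = step(u)
p2 = front(u)
speed = (p2 - p1) / (2000 * dt)     # measured front speed, should be close to 2
```

The measured speed approaches 2 from below (the well-known logarithmic correction for pulled fronts), which is the mean-field value that the stochastic off-lattice simulations only reach at high molecular numbers.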

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migunov, V., E-mail: v.migunov@fz-juelich.de; Dunin-Borkowski, R. E.; London, A.

    The one-dimensional charge density distribution along an electrically biased Fe atom probe needle is measured using a model-independent approach based on off-axis electron holography in the transmission electron microscope. Both the mean inner potential and the magnetic contribution to the phase shift are subtracted by taking differences between electron-optical phase images recorded with different voltages applied to the needle. The measured one-dimensional charge density distribution along the needle is compared with a similar result obtained using model-based fitting of the phase shift surrounding the needle. On the assumption of cylindrical symmetry, it is then used to infer the three-dimensional electric field and electrostatic potential around the needle with ∼10 nm spatial resolution, without needing to consider either the influence of the perturbed reference wave or the extension of the projected potential outside the field of view of the electron hologram. The present study illustrates how a model-independent approach can be used to measure local variations in charge density in a material using electron holography in the presence of additional contributions to the phase, such as those arising from changes in mean inner potential and specimen thickness.

  20. Geomagnetic Jerks in the Swarm Era

    NASA Astrophysics Data System (ADS)

    Brown, William; Beggan, Ciaran; Macmillan, Susan

    2016-08-01

    The timely provision of geomagnetic observations as part of the European Space Agency (ESA) Swarm mission means up-to-date analysis and modelling of the Earth's magnetic field can be conducted rapidly in a manner not possible before. Observations from each of the three Swarm constellation satellites are available within 4 days and a database of close-to-definitive ground observatory measurements is updated every 3 months. This makes it possible to study very recent variations of the core magnetic field. Here we investigate rapid, unpredictable internal field variations known as geomagnetic jerks. Given that jerks represent (currently) unpredictable changes in the core field and have been identified to have happened in 2014 since Swarm was launched, we ask what impact this might have on the future accuracy of the International Geomagnetic Reference Field (IGRF). We assess the performance of each of the IGRF-12 secular variation model candidates in light of recent jerks, given that four of the nine candidates are novel physics-based predictive models.

  1. Critical study of the dispersive n-90Zr mean field by means of a new variational method

    NASA Astrophysics Data System (ADS)

    Mahaux, C.; Sartor, R.

    1994-02-01

    A new variational method is developed for the construction of the dispersive nucleon-nucleus mean field at negative and positive energies. Like the variational moment approach that we had previously proposed, the new method uses only phenomenological optical-model potentials as input, but it is simpler and more flexible than the previous approach. It is applied to a critical investigation of the n-90Zr mean field between -25 and +25 MeV. This system is of particular interest because conflicting results had recently been obtained by two different groups. While the imaginary parts of the phenomenological optical-model potentials provided by these two groups are similar, their real parts are quite different. Nevertheless, we demonstrate that both sets of phenomenological optical-model potentials are compatible with the dispersion relation which connects the real and imaginary parts of the mean field. Previous hints to the contrary, by one of the two groups, are shown to be due to unjustified approximations. A striking outcome of the present study is that it is important to explicitly introduce volume absorption in the dispersion relation, although volume absorption is negligible in the energy domain investigated here. Because of the existence of two sets of phenomenological optical-model potentials, our variational method yields two dispersive mean fields whose real parts are quite different at small or negative energies. No preference for one of the two dispersive mean fields can be expressed on purely empirical grounds, since both yield fair agreement with the experimental cross sections as well as with the observed energies of the bound single-particle states. However, we argue that one of the two mean fields is physically more meaningful, because the radial shape of its Hartree-Fock-type component is independent of energy, as expected on theoretical grounds. This preferred mean field is very close to the one obtained by the Ohio University group by means of fits to experimental cross sections. It is also in good agreement with a recent determination of the p-90Zr average potential.

  2. Statistics of fully turbulent impinging jets

    NASA Astrophysics Data System (ADS)

    Wilke, Robert; Sesterhenn, Jörn

    2017-08-01

    Direct numerical simulations of sub- and supersonic impinging jets with Reynolds numbers of 3300 and 8000 are carried out to analyse their statistical properties. The influence of the Mach number, Reynolds number, and ambient temperature on the mean velocity and temperature fields is studied. For compressible subsonic cold impinging jets into a heated environment, different Reynolds analogies are assessed. It is shown that the (original) Reynolds analogy as well as the Chilton-Colburn analogy are in good agreement with the DNS data outside the impingement area. The generalised Reynolds analogy (GRA) and the Crocco-Busemann relation are not suited to estimating the mean temperature field from the mean velocity field of impinging jets, and the prediction of fluctuating temperatures according to the GRA fails. On the contrary, the linear relation between thermodynamic fluctuations of entropy, density, and temperature suggested by Lechner et al. (2001) is confirmed for the entire wall jet. The turbulent heat flux and the Reynolds stress tensor are analysed and related to the primary and secondary ring vortices of the wall jet. Budget terms of the Reynolds stress tensor are given as a database for the improvement of turbulence models.
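
    For reference, the two analogies that agree with the DNS data relate the Stanton number St (wall heat transfer) to the skin-friction coefficient c_f as St = c_f/2 (Reynolds) and St = (c_f/2)*Pr^(-2/3) (Chilton-Colburn). A minimal sketch with illustrative flat-plate values for air (the numbers are assumptions, not taken from the paper):

```python
# Reynolds analogy: St = c_f / 2
def stanton_reynolds(cf):
    return cf / 2.0

# Chilton-Colburn analogy adds a Prandtl-number correction: St = (c_f/2) * Pr^(-2/3)
def stanton_chilton_colburn(cf, Pr):
    return (cf / 2.0) * Pr ** (-2.0 / 3.0)

cf, Pr = 4.0e-3, 0.71          # illustrative skin friction and Prandtl number for air
St_r = stanton_reynolds(cf)
St_cc = stanton_chilton_colburn(cf, Pr)
```

For Pr < 1 the Chilton-Colburn estimate exceeds the plain Reynolds value, reflecting the thicker thermal boundary layer of gases.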

  3. The Volume Field Model about Strong Interaction and Weak Interaction

    NASA Astrophysics Data System (ADS)

    Liu, Rongwu

    2016-03-01

    For a long time researchers have believed that the strong and weak interactions are realized by exchanging intermediate particles. This article proposes a new mechanism: the volume field is a form of material existence in plane space that undergoes volume-changing motion in a non-continuous manner, and volume fields exert strong or weak interactions on one another by overlapping. Based on these concepts, the article further proposes a "bag model" of the volume field for the atomic nucleus, which includes three sub-models: the complex structure of fundamental bodies (such as quarks), the atom-like structure of hadrons, and the molecule-like structure of the atomic nucleus. The article also proposes a plane-space model, formulates a physical model of the volume field in plane space, and presents a model of space-time conversion. The model of space-time conversion suggests that point space-time and plane space-time convert into each other by means of merging and rupture, respectively; that the essence of space-time conversion is the mutual transformation of matter and energy; and that the collision of high-energy hadrons, the formation of black holes, and the Big Bang are three kinds of space-time conversion.

  4. Self-Consistent Field Lattice Model for Polymer Networks.

    PubMed

    Tito, Nicholas B; Storm, Cornelis; Ellenbroek, Wouter G

    2017-12-26

    A lattice model based on polymer self-consistent field theory is developed to predict the equilibrium statistics of arbitrary polymer networks. For a given network topology, our approach uses moment propagators on a lattice to self-consistently construct the ensemble of polymer conformations and cross-link spatial probability distributions. Remarkably, the calculation can be performed "in the dark", without any prior knowledge on preferred chain conformations or cross-link positions. Numerical results from the model for a test network exhibit close agreement with molecular dynamics simulations, including when the network is strongly sheared. Our model captures nonaffine deformation, mean-field monomer interactions, cross-link fluctuations, and finite extensibility of chains, yielding predictions that differ markedly from classical rubber elasticity theory for polymer networks. By examining polymer networks with different degrees of interconnectivity, we gain insight into cross-link entropy, an important quantity in the macroscopic behavior of gels and self-healing materials as they are deformed.

  5. Modulated phases in a three-dimensional Maier-Saupe model with competing interactions

    NASA Astrophysics Data System (ADS)

    Bienzobaz, P. F.; Xu, Na; Sandvik, Anders W.

    2017-07-01

    This work is dedicated to the study of the discrete version of the Maier-Saupe model in the presence of competing interactions. The competition between interactions favoring different orientational ordering produces a rich phase diagram including modulated phases. Using a mean-field approach and Monte Carlo simulations, we show that the proposed model exhibits isotropic and nematic phases and also a series of modulated phases that meet at a multicritical point, a Lifshitz point. Though the Monte Carlo and mean-field phase diagrams show some quantitative disagreements, the Monte Carlo simulations corroborate the general behavior found within the mean-field approximation.
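
    The mean-field approach mentioned above reduces, in the classical continuous Maier-Saupe limit, to a self-consistency condition S = <P2(cos θ)> for the nematic order parameter. A fixed-point sketch of that textbook limit (the coupling values 6.0 and 3.0 are illustrative, chosen above and below the first-order transition near a ≈ 4.54; this is not the paper's discrete lattice model with competing interactions):

```python
import numpy as np

# mean-field Maier-Saupe self-consistency: S = <P2(cos theta)> under the
# Boltzmann weight exp(a * S * P2), with P2(x) = (3x^2 - 1)/2 and a = coupling/kT
x = np.linspace(-1.0, 1.0, 2001)
P2 = 0.5 * (3.0 * x**2 - 1.0)

def rhs(S, a):
    w = np.exp(a * S * P2)
    return np.trapz(P2 * w, x) / np.trapz(w, x)

def solve(a, S0=0.8, iters=200):
    """Plain fixed-point iteration; the map is monotone, so it converges."""
    S = S0
    for _ in range(iters):
        S = rhs(S, a)
    return S

S_nem = solve(6.0)   # above the transition: nonzero nematic order parameter
S_iso = solve(3.0)   # below the transition: iteration collapses to S = 0
```

The jump from S = 0 to a finite S at the transition is the mean-field signature of the first-order isotropic-nematic transition discussed in the abstract.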

  6. First independent lunar gravity field solution in the framework of project GRAZIL

    NASA Astrophysics Data System (ADS)

    Wirnsberger, Harald; Krauss, Sandro; Klinger, Beate; Mayer-Gürr, Torsten

    2017-04-01

    The twin-satellite mission Gravity Recovery and Interior Laboratory (GRAIL) aims to recover the lunar gravity field by means of intersatellite Ka-band ranging (KBR) observations. In order to exploit the potential of the KBR data, absolute position information for the two probes is required. Hitherto, the Graz lunar gravity field models (GrazLGM) have relied on the official orbit products provided by NASA. In this contribution, we present for the first time a completely independent Graz lunar gravity field model to spherical harmonic degree and order 420. The reduced-dynamic orbits of the two probes are determined using variational equations in a batch least-squares differential adjustment process. These orbits are based on S-band radiometric tracking data collected by the Deep Space Network and are used for the independent GRAIL gravity field recovery. To recover a highly accurate lunar gravity field, an integral equation approach using short orbital arcs is adopted to process the KBR data. A comparison with state-of-the-art lunar gravity models computed at NASA-GSFC, NASA-JPL, and AIUB demonstrates the progress of the Graz lunar gravity field models derived within the project GRAZIL.

  7. Empirical and modeled synoptic cloud climatology of the Arctic Ocean

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Newell, J. P.; Schweiger, A.; Crane, R. G.

    1986-01-01

    A set of cloud cover data was developed for the Arctic during the climatically important spring/early summer transition months. In parallel with the determination of mean monthly cloud conditions, data for different synoptic pressure patterns were composited as a means of evaluating the role of synoptic variability in Arctic cloud regimes. To carry out this analysis, a synoptic classification scheme was developed for the Arctic using an objective typing procedure. A second major objective was to analyze model output of pressure fields and cloud parameters from a control run of the Goddard Institute for Space Studies climate model for the same area, and to compare the synoptic climatology of the model with that based on the observational data.

  8. Computational Electromagnetic Analysis in a Human Head Model with EEG Electrodes and Leads Exposed to RF-Field Sources at 915 MHz and 1748 MHz

    PubMed Central

    Angelone, Leonardo M.; Bit-Babik, Giorgi; Chou, Chung-Kwang

    2010-01-01

    An electromagnetic analysis of a human head with EEG electrodes and leads exposed to RF-field sources was performed by means of Finite-Difference Time-Domain simulations on a 1-mm3 MRI-based human head model. RF-field source models included a half-wave dipole, a patch antenna, and a realistic CAD-based mobile phone at 915 MHz and 1748 MHz. EEG electrode/lead models included two lead configurations, a standard 10–20 montage with 19 electrodes and a 32-electrode cap, with both metallic and highly resistive leads. Whole-head and peak 10-g average SAR changed by less than 20% with and without leads, and peak 1-g and 10-g average SARs were below the ICNIRP and IEEE guideline limits. Conversely, a comprehensive volumetric assessment of changes in the RF field with and without metallic EEG leads showed an increase of two orders of magnitude in single-voxel power absorption in the epidermis and a 40-fold increase in the brain during exposure to the 915 MHz mobile phone. Results varied with the geometry and conductivity of the EEG electrodes/leads. This enhancement reinforces the question of whether any observed effects in studies involving EEG recordings during RF-field exposure are directly related to the RF fields generated by the source, or indirectly to the RF-field-induced currents caused by the presence of conductive EEG leads. PMID:20681803

  9. Yield estimation of corn with multispectral data and the potential of using imaging spectrometers

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1997-05-01

    Within the framework of the special yield estimation, a regular procedure conducted for the European Union to estimate agricultural yield more accurately, a project was carried out for the State Ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data combined with advanced yield formation models for accuracy and timeliness of corn yield estimation. The methodology employs field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data, with an agrometeorological plant-production model for yield prediction. Based solely on four LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results gathered independently within the special yield estimation for 23 test fields in the Upper Rhine Valley. The agreement between the LANDSAT-based estimates and the special yield estimation shows a relative error of 2.3 percent. The comparison of results for single fields shows that, six weeks before harvest, the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.

  10. Prediction of local concentration statistics in variably saturated soils: Influence of observation scale and comparison with field data

    NASA Astrophysics Data System (ADS)

    Graham, Wendy; Destouni, Georgia; Demmy, George; Foussereau, Xavier

    1998-07-01

    The methodology developed in Destouni and Graham [Destouni, G., Graham, W.D., 1997. The influence of observation method on local concentration statistics in the subsurface. Water Resour. Res. 33 (4) 663-676.] for predicting locally measured concentration statistics for solute transport in heterogeneous porous media under saturated flow conditions is applied to the prediction of conservative nonreactive solute transport in the vadose zone where observations are obtained by soil coring. Exact analytical solutions are developed for both the mean and variance of solute concentrations measured in discrete soil cores using a simplified physical model for vadose-zone flow and solute transport. Theoretical results show that while the ensemble mean concentration is relatively insensitive to the length-scale of the measurement, predictions of the concentration variance are significantly impacted by the sampling interval. Results also show that accounting for vertical heterogeneity in the soil profile results in significantly less spreading in the mean and variance of the measured solute breakthrough curves, indicating that it is important to account for vertical heterogeneity even for relatively small travel distances. Model predictions for both the mean and variance of locally measured solute concentration, based on independently estimated model parameters, agree well with data from a field tracer test conducted in Manatee County, Florida.

  11. Superstatistics model for T₂ distribution in NMR experiments on porous media.

    PubMed

    Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S

    2014-07-01

    We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for inverting data decays in NMR experiments. We exemplify the method with q-exponential functions and χ²-distributions to describe, respectively, the data decay and the T2 distribution in high-field experiments on fully water-saturated glass-microsphere bed packs and sedimentary outcrop rocks, and in noisy low-field experiments on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
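
    The q-exponential decay used in superstatistics has the form M(t) = M0*[1 - (1-q)*t/T2]^(1/(1-q)), which reduces to a single exponential as q → 1 and develops a power-law (heavier) tail for q > 1, mimicking a distribution of relaxation times. A minimal sketch (parameter values are illustrative, not fitted to any data set):

```python
import numpy as np

def q_exp_decay(t, T2, q):
    """q-exponential decay e_q(-t/T2); for q -> 1 this is exp(-t/T2)."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-t / T2)
    arg = 1.0 - (1.0 - q) * (t / T2)
    # clip keeps the q < 1 branch at zero once the argument becomes negative
    return np.clip(arg, 0.0, None) ** (1.0 / (1.0 - q))

t = np.linspace(0.0, 5.0, 101)
m_exp = q_exp_decay(t, T2=1.0, q=1.0)   # ordinary exponential decay
m_q = q_exp_decay(t, T2=1.0, q=1.2)     # heavier tail than a single exponential
```

A fit of q and T2 to a measured decay then gives the mean and spread of the underlying T2 distribution directly, which is the shortcut the abstract describes.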

  12. A compressible solution of the Navier-Stokes equations for turbulent flow about an airfoil

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.; Gibeling, H. J.

    1979-01-01

    A compressible, time-dependent solution of the Navier-Stokes equations including a transition-turbulence model is obtained for the isolated-airfoil flow field problem. The equations are solved by a consistently split, linearized block implicit scheme. A nonorthogonal body-fitted coordinate system is used which has maximum resolution near the airfoil surface and in the region of the airfoil leading edge. The transition-turbulence model is based upon the turbulence kinetic energy equation and predicts regions of laminar, transitional, and turbulent flow. Mean flow field and turbulence field results are presented for an NACA 0012 airfoil at zero and nonzero incidence angles, at Reynolds numbers up to one million, and at low subsonic Mach numbers.

  13. A PDF projection method: A pressure algorithm for stand-alone transported PDFs

    NASA Astrophysics Data System (ADS)

    Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich

    2015-03-01

    In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.

  14. Interpreting remanence isotherms: a Preisach-based study

    NASA Astrophysics Data System (ADS)

    Roshko, R. M.; Viddal, C.

    2004-07-01

    Numerical simulations of the field dependence of the isothermal remanent moment (IRM) and the thermoremanent moment (TRM) are presented, based on a Preisach formalism which decomposes the free energy landscape into an ensemble of thermally activated, temperature-dependent, double-well subsystems, each characterized by a dissipation field H_d and a bias field H_s. The simulations show that the TRM approaches saturation much more rapidly than the corresponding IRM and that, as a consequence, the characteristics of the IRM are determined primarily by the distribution of dissipation fields, as defined by the mean field H̄_d(T) and the dispersion σ_d(T), while the characteristics of the TRM are determined primarily by a mixture of the mean dissipation field H̄_d(T) and the dispersion of bias fields σ_s(T). The simulations also identify a regime H̄_d ≫ σ_s where the influence of H̄_d(T) on the TRM is negligible, and hence where the TRM and the IRM provide essentially independent scans of the Preisach distribution along the two orthogonal H_s and H_d directions, respectively. The systematics established by the model simulations are exploited to analyze TRM and IRM data from a mixed ferromagnetic perovskite Ca0.4Sr0.6RuO3, and to reconstruct the distribution of characteristic fields H_d and H_s and its variation with temperature.
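
    The Preisach picture above can be caricatured numerically: an ensemble of two-state subsystems, each switching up at H_s + H_d and down at H_s - H_d, yields an IRM curve when a demagnetized ensemble is exposed to a field and then relaxed at zero field. The distributions of H_d and H_s below are illustrative assumptions, not the fitted ones from the paper, and thermal activation is ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
Hd = np.abs(rng.normal(1.0, 0.3, N))   # dissipation (coercive) fields, assumed
Hs = rng.normal(0.0, 0.2, N)           # bias fields, assumed

def relax(m, H):
    """Update hysteron states at applied field H (thresholds Hs +/- Hd)."""
    m = np.where(H > Hs + Hd, 1.0, m)   # switch up
    m = np.where(H < Hs - Hd, -1.0, m)  # switch down
    return m

def irm(H):
    """Isothermal remanence: demagnetized start, apply H, return to zero field."""
    m = rng.choice([-1.0, 1.0], N)      # idealized demagnetized initial state
    m = relax(m, H)
    m = relax(m, 0.0)
    return m.mean()

curve = [irm(H) for H in np.linspace(0.0, 3.0, 7)]
```

The resulting curve rises monotonically from zero toward saturation, and its shape is controlled mainly by the H_d distribution, consistent with the IRM systematics described in the abstract.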

  15. Numerical analysis of mixing by sharp-edge-based acoustofluidic micromixer

    NASA Astrophysics Data System (ADS)

    Nama, Nitesh; Huang, Po-Hsun; Jun Huang, Tony; Costanzo, Francesco

    2015-11-01

    Recently, acoustically oscillated sharp-edges have been employed to realize rapid and homogeneous mixing at microscales (Huang, Lab on a Chip, 13, 2013). Here, we present a numerical model, qualitatively validated by experimental results, to analyze the acoustic mixing inside a sharp-edge-based micromixer. We extend our previous numerical model (Nama, Lab on a Chip, 14, 2014) to combine the Generalized Lagrangian Mean (GLM) theory with the convection-diffusion equation, while also allowing for the presence of a background flow as observed in a typical sharp-edge-based micromixer. We employ a perturbation approach to divide the flow variables into zeroth-, first- and second-order fields which are successively solved to obtain the Lagrangian mean velocity. The Lagrangian mean velocity and the background flow velocity are further employed with the convection-diffusion equation to obtain the concentration profile. We characterize the effects of various operational and geometrical parameters to suggest potential design changes for improving the mixing performance of the sharp-edge-based micromixer. Lastly, we investigate the possibility of generating a spatio-temporally controllable concentration gradient by placing sharp-edge structures inside the microchannel.

  16. Mean-field velocity difference model considering the average effect of multi-vehicle interaction

    NASA Astrophysics Data System (ADS)

    Guo, Yan; Xue, Yu; Shi, Yin; Wei, Fang-ping; Lü, Liang-zhong; He, Hong-di

    2018-06-01

    In this paper, a mean-field velocity difference model (MFVD) is proposed to describe the average effect of multi-vehicle interactions on the whole road. By stability analysis, the stability condition of the traffic system is obtained; the stability of the MFVD model is compared with that of the full velocity difference (FVD) model, and the completeness of the MFVD model is discussed. The mKdV equation is derived from the MFVD model through nonlinear analysis to reveal traffic jams in the form of kink-antikink density waves. Numerical simulation is then performed, and the results illustrate that the average effect of multi-vehicle interactions plays an important role in effectively suppressing traffic jams. Increasing the strength of the mean-field velocity difference term in the MFVD model rapidly dissipates traffic jams and enhances the stability of the traffic system.
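
    A velocity-difference car-following model of this general type can be sketched on a ring road, with the mean-field term taken here as an average of velocity differences over the m vehicles ahead. The optimal-velocity function and all parameters are illustrative assumptions, not the exact MFVD specification; with these stable parameters a small perturbation decays back to uniform flow.

```python
import numpy as np

N, L = 50, 200.0                 # number of cars, ring-road length
a, lam, m = 1.0, 0.5, 3          # sensitivity, coupling, averaging depth
dt, steps = 0.05, 4000           # integrate to t = 200

def V(h):
    """Bando-type optimal velocity as a function of headway h (assumed form)."""
    return 2.0 * (np.tanh(h - 2.0) + np.tanh(2.0))

# uniform flow plus a small sinusoidal perturbation of the positions
x = np.arange(N) * (L / N) + 0.01 * np.sin(2.0 * np.pi * np.arange(N) / N)
v = np.full(N, V(L / N))

for _ in range(steps):
    h = (np.roll(x, -1) - x) % L            # headways on the ring
    dv_mf = np.zeros(N)
    for k in range(1, m + 1):               # mean-field velocity difference term
        dv_mf += (np.roll(v, -k) - v) / m
    acc = a * (V(h) - v) + lam * dv_mf
    v = v + dt * acc
    x = x + dt * v

h_final = (np.roll(x, -1) - x) % L
```

In the unstable parameter regime (smaller a) the same perturbation grows into the kink-antikink jam waves that the mKdV analysis describes.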

  17. Beyond mean-field description of Gamow-Teller resonances and β-decay

    NASA Astrophysics Data System (ADS)

    Niu, Yifei; Colò, Gianluca; Vigezzi, Enrico; Bai, Chunlin; Niu, Zhongming; Sagawa, Hiroyuki

    2018-02-01

    β-decay half-lives set the time scale of the rapid neutron capture process, and are therefore essential for understanding the origin of heavy elements in the universe. The random-phase approximation (RPA) based on Skyrme energy density functionals is widely used to calculate the properties of Gamow-Teller (GT) transitions, which play a dominant role in β-decay half-lives. However, the RPA model has its limitations in reproducing the resonance width and often overestimates β-decay half-lives. To overcome these problems, effects beyond mean-field can be included on top of the RPA model. In particular, this can be obtained by taking into account the particle-vibration coupling (PVC). Within the RPA+PVC model, we successfully reproduce the experimental GT resonance width and β-decay half-lives in magic nuclei. We then extend the formalism to superfluid nuclei and apply it to the GT resonance in 120Sn, obtaining a good reproduction of the experimental strength distribution. The effect of isoscalar pairing is also discussed.

  18. CONVECTIVE BABCOCK-LEIGHTON DYNAMO MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miesch, Mark S.; Brown, Benjamin P., E-mail: miesch@ucar.edu

    We present the first global, three-dimensional simulations of solar/stellar convection that take into account the influence of magnetic flux emergence by means of the Babcock-Leighton (BL) mechanism. We have shown that the inclusion of a BL poloidal source term in a convection simulation can promote cyclic activity in an otherwise steady dynamo. Some cycle properties are reminiscent of solar observations, such as the equatorward propagation of toroidal flux near the base of the convection zone. However, the cycle period in this young sun (rotating three times faster than the solar rate) is very short (~6 months), and it is unclear whether much longer cycles may be achieved within this modeling framework, given the high efficiency of field generation and transport by the convection. Even so, the incorporation of mean-field parameterizations in three-dimensional convection simulations to account for elusive processes such as flux emergence may well prove useful in the future modeling of solar and stellar activity cycles.

  19. Bridging gaps: On the performance of airborne LiDAR to model wood mouse-habitat structure relationships in pine forests.

    PubMed

    Jaime-González, Carlos; Acebes, Pablo; Mateos, Ana; Mezquida, Eduardo T

    2017-01-01

    LiDAR technology has contributed firmly to strengthening our knowledge of habitat structure-wildlife relationships, though there is an evident bias towards flying vertebrates. To bridge this gap, we investigated and compared the performance of LiDAR and field data in modelling the habitat preferences of the wood mouse (Apodemus sylvaticus) in a Mediterranean high mountain pine forest (Pinus sylvestris). We recorded nine field and 13 LiDAR variables that were summarized by means of Principal Component Analyses (PCA). We then analyzed the wood mouse's habitat preferences using three different models based on: (i) field PC predictors; (ii) LiDAR PC predictors; and (iii) both sets of predictors in a combined model, including a variance partitioning analysis. Elevation was also included as a predictor in all three models. Our results indicate that LiDAR-derived variables were better predictors than field-based variables. The model combining both data sets slightly improved the predictive power. Field-derived variables indicated that the wood mouse was positively influenced by the gradient of increasing shrub cover and negatively affected by elevation. Regarding the LiDAR data, two LiDAR PCs, i.e. gradients in canopy openness and complexity of forest vertical structure, positively influenced the wood mouse, although elevation interacted negatively with the complexity of vertical structure, indicating the wood mouse's preference for plots at lower elevations but with complex forest vertical structure. The combined model was similar to the LiDAR-based model and included the gradient of shrub cover measured in the field. Variance partitioning showed that LiDAR-based variables, together with elevation, were the most important predictors and that part of the variation explained by shrub cover was shared. LiDAR-derived variables were good surrogates of the environmental characteristics explaining habitat preferences of the wood mouse. Our LiDAR metrics represented structural features of the forest patch, such as the presence and cover of shrubs, as well as other characteristics likely including time since perturbation, food availability and predation risk. Our results suggest that LiDAR is a promising technology for further exploring habitat preferences of small mammal communities.
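    The workflow above (summarizing each predictor table by its leading principal components, then modelling abundance on those components plus elevation) can be sketched as follows; the data, dimensions, and the ordinary-least-squares stand-in for the paper's actual habitat models are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for the plot-level predictor tables
# (9 field variables and 13 LiDAR variables at 40 hypothetical plots).
n_plots = 40
field_vars = rng.normal(size=(n_plots, 9))
lidar_vars = rng.normal(size=(n_plots, 13))

def principal_components(X, k):
    """First k principal-component scores of a standardized table via SVD."""
    Z = (X - X.mean(0)) / X.std(0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k] * s[:k]

# Summarize each data set by its leading PCs, then fit
# abundance ~ PCs + elevation (OLS here only as a simple placeholder).
pcs = np.hstack([principal_components(field_vars, 2),
                 principal_components(lidar_vars, 2)])
elevation = rng.uniform(1500, 2200, size=n_plots)
abundance = rng.poisson(3, size=n_plots).astype(float)

X = np.column_stack([np.ones(n_plots), pcs, elevation])
beta, *_ = np.linalg.lstsq(X, abundance, rcond=None)
assert beta.shape == (6,)  # intercept + 4 PCs + elevation
```

    Variance partitioning between the field and LiDAR PC blocks would then compare the fits of the reduced and combined models.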

  20. Enhanced global Radionuclide Source Attribution for the Nuclear-Test-Ban Verification by means of the Adjoint Ensemble Dispersion Modeling Technique applied at the IDC/CTBTO.

    NASA Astrophysics Data System (ADS)

    Becker, A.; Wotawa, G.; de Geer, L.

    2006-05-01

    The Provisional Technical Secretariat (PTS) of the CTBTO Preparatory Commission maintains and permanently updates a source-receptor matrix (SRM) describing the global monitoring capability of a highly sensitive 80-station radionuclide (RN) network, in order to verify States Signatories' compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT). This is done by means of receptor-oriented Lagrangian particle dispersion modeling (LPDM) to help determine the region from which suspicious radionuclides may originate. To do so, the LPDM FLEXPART 5.1 is integrated backward in time on global analysis wind fields, yielding global source-receptor sensitivity (SRS) fields stored at three-hour frequency and 1° horizontal resolution. A database of these SRS fields substantially improves the interpretation of RN sample measurements and categorizations, because it enables source hypotheses to be tested later in a pure post-processing (SRM inversion) step that is feasible at any location (decentralized) on hardware comparable to currently sold PCs or notebooks, provided access to the SRS fields is granted. Within the CTBT environment it is important to quickly gain decision-makers' confidence in the SRM-based backtracking products issued by the PTS when treaty-relevant radionuclides occur. The PTS has therefore set up a highly automated response system together with the Regional Specialized Meteorological Centers of the World Meteorological Organization in the field of dispersion modeling, which have committed themselves to provide the PTS with the same standard SRS fields calculated by their own systems for CTBT-relevant cases. 
This system was utilized twice in 2005 to perform adjoint ensemble dispersion modeling (EDM) and demonstrated the potential of EDM-based backtracking to improve the accuracy of the source location of singular nuclear events, serving as the backward analogue of the forward ensemble dispersion modeling efforts of Galmarini et al., 2004 (Atmos. Env. 38, 4607-4617). As the scope of the adjoint EDM methodology is not limited to CTBT verification but can be applied to any kind of nuclear event monitoring and location, it bears the potential to improve the design of manifold emergency response systems towards the preparedness concepts needed for the mitigation of disasters (like Chernobyl) and the pre-emptive estimation of pollution hazards.
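    The SRM-inversion post-processing described above amounts to folding a hypothesized emission field through the stored SRS fields and comparing the predicted concentrations with the measured samples. A minimal sketch, where array shapes, units, and names are illustrative assumptions rather than the PTS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stored SRS field: sensitivity of one station's 3-hourly
# samples to emissions on a 1-degree grid (shapes are illustrative).
n_samples, n_lat, n_lon = 8, 180, 360
srs = rng.random((n_samples, n_lat, n_lon))

def predicted_concentration(srs, emission_field):
    """Fold a hypothesized emission field (mass per grid cell) through the
    source-receptor sensitivity fields to predict sampled concentrations."""
    return np.tensordot(srs, emission_field, axes=([1, 2], [0, 1]))

# Point-source hypothesis: all mass released in a single grid cell.
emission = np.zeros((n_lat, n_lon))
emission[100, 200] = 1.0  # unit release

c_pred = predicted_concentration(srs, emission)
assert c_pred.shape == (n_samples,)
# For a unit point release the prediction is just the SRS at that cell.
assert np.allclose(c_pred, srs[:, 100, 200])
```

    Testing many candidate source cells this way needs only the stored SRS fields, which is why the inversion step can run on ordinary hardware.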

  1. Potential sources of precipitation in Lake Baikal basin

    NASA Astrophysics Data System (ADS)

    Shukurov, K. A.; Mokhov, I. I.

    2017-11-01

    Based on long-term measurements at 23 meteorological stations in the Russian part of the Lake Baikal basin, the probabilities of daily precipitation of different intensities and their contribution to the total precipitation are estimated. Using the trajectory model HYSPLIT_4, the 10-day backward trajectories of air parcels, the heights of these trajectories and the distribution of specific humidity along them are calculated for each meteorological station for the period 1948-2016. The average field of power of potential sources of daily precipitation (less than 10 mm) for all meteorological stations in the Russian part of the Lake Baikal basin was obtained using the CWT (concentration weighted trajectory) method. The areas from which water vapor can be transported to the Lake Baikal basin within 10 days have been identified, as well as the regions of the most and least powerful potential sources. The fields of the mean height of air-parcel trajectories and the mean specific humidity along the trajectories are compared with the field of mean power of potential sources.
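    The CWT method referred to above assigns each grid cell the trajectory-weighted mean of the measurement associated with the back-trajectories passing through it. A minimal sketch, assuming unit residence time per cell visit (the full method weights by actual residence time along each trajectory):

```python
import numpy as np

def cwt(grid_shape, trajectories, values):
    """Concentration Weighted Trajectory field: per-cell weighted mean of
    the measurement attached to each back-trajectory crossing the cell."""
    num = np.zeros(grid_shape)
    den = np.zeros(grid_shape)
    for cells, v in zip(trajectories, values):
        for i, j in cells:
            num[i, j] += v    # measurement weighted by residence in cell
            den[i, j] += 1.0  # accumulated residence (here: visit count)
    with np.errstate(invalid="ignore"):
        return np.where(den > 0, num / den, np.nan)

# Two back-trajectories crossing a shared cell: the CWT there is the mean
# of the two associated daily precipitation values.
field = cwt((3, 3),
            [np.array([(0, 0), (1, 1)]), np.array([(1, 1), (2, 2)])],
            [2.0, 4.0])
assert field[1, 1] == 3.0 and field[0, 0] == 2.0
```

    Cells never visited remain NaN, so the resulting field covers only the region actually sampled by the trajectories.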

  2. Mean-field theory on mixed ferro-ferrimagnetic compounds with (A_aB_bC_c)_yD

    NASA Astrophysics Data System (ADS)

    Wei, Guo-Zhu; Xin, Zihua; Liang, Yaqiu; Zhang, Qi

    2004-01-01

    The magnetic properties of mixed ferro-ferrimagnetic compounds (A_aB_bC_c)_yD, in which A, B, C and D are four different magnetic ions forming four different sublattices, are studied using the Ising model treated in the standard mean-field approximation. The regions of concentration in which two compensation points or one compensation point exist are given in the c-a, b-c and a-b planes. The phase diagrams of the transition temperature Tc and the compensation temperature Tcomp are obtained. The temperature dependences of the magnetization are also investigated. Some of the results can be used to explain the experimental work on the molecule-based ferro-ferrimagnet (Ni^II_aMn^II_bFe^II_c)_1.5[Cr^III(CN)_6]·zH_2O.
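    A standard mean-field treatment like the one above reduces to iterating coupled self-consistency equations for the sublattice magnetizations. The following is a minimal two-sublattice sketch (the paper treats four sublattices with concentrations a, b, c; the spin-1/2 tanh form and the coupling values here are illustrative simplifications, with k_B = 1):

```python
import numpy as np

def sublattice_magnetizations(T, J_A=0.2, J_B=1.0, J_AB=1.0, tol=1e-10):
    """Iterate the mean-field equations for two antiferromagnetically
    coupled Ising sublattices:
        m_A = tanh((J_A*m_A - J_AB*m_B)/T)
        m_B = tanh((J_B*m_B - J_AB*m_A)/T)
    """
    mA, mB = 1.0, -1.0  # start from the ferrimagnetic ground state
    for _ in range(10000):
        mA_new = np.tanh((J_A * mA - J_AB * mB) / T)
        mB_new = np.tanh((J_B * mB - J_AB * mA) / T)
        if abs(mA_new - mA) + abs(mB_new - mB) < tol:
            break
        mA, mB = mA_new, mB_new
    return mA, mB

mA, mB = sublattice_magnetizations(T=0.1)
assert mA > 0.99 and mB < -0.99   # near saturation at low temperature
mA, mB = sublattice_magnetizations(T=10.0)
assert abs(mA) < 1e-3 and abs(mB) < 1e-3  # paramagnetic above Tc
```

    Scanning T and tracking the net magnetization a·μ_A·m_A + b·μ_B·m_B of such a fixed point is how compensation points (sign changes of the net moment below Tc) are located.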

  3. Quantum Critical Point revisited by the Dynamical Mean Field Theory

    NASA Astrophysics Data System (ADS)

    Xu, Wenhu; Kotliar, Gabriel; Tsvelik, Alexei

    Dynamical mean field theory is used to study the quantum critical point (QCP) in the doped Hubbard model on a square lattice. The QCP is characterized by a universal scaling form of the self-energy and a spin density wave instability at an incommensurate wave vector. The scaling form unifies the low-energy kink and the high-energy waterfall feature in the spectral function, while the spin dynamics includes both the critical incommensurate and high-energy antiferromagnetic paramagnons. We use the frequency-dependent four-point correlation function of spin operators to calculate the momentum-dependent correction to the electron self-energy. Our results reveal a substantial difference with calculations based on the spin-fermion model, which indicates that the frequency dependence of the quasiparticle-paramagnon vertices is an important factor. The authors are supported by the Center for Computational Design of Functional Strongly Correlated Materials and Theoretical Spectroscopy under DOE Grant DE-FOA-0001276.

  4. On the Dependence of the Ionospheric E-Region Electric Field on the Solar Activity

    NASA Astrophysics Data System (ADS)

    Denardini, Clezio Marcos; Schuch, Nelson Jorge; Moro, Juliano; Araujo Resende, Laysa Cristina; Chen, Sony Su; Costa, D. Joaquim

    2016-07-01

    We have been studying the zonal and vertical E-region electric field components inferred from the Doppler shifts of type 2 echoes (gradient drift irregularities) detected with the 50 MHz coherent backscatter (RESCO) radar at Sao Luis, Brazil (SLZ, 2.3° S, 44.2° W) during solar cycle 24. In this report we present the dependence of the vertical and zonal components of this electric field on solar activity, based on the solar flux index F10.7. For this study we consider geomagnetically quiet days only (Kp <= 3+). A magnetic field-aligned integrated conductivity model was developed to provide the conductivities, using the IRI-2007, NRLMSISE-00 and IGRF-11 models as inputs for the ionosphere, neutral atmosphere and Earth's magnetic field, respectively. The ion-neutral collision frequencies of all species are combined through the momentum transfer collision frequency equation. The mean zonal component of the electric field, which normally ranges from 0.19 to 0.35 mV/m between 08 and 18 h (LT) in the Brazilian sector, shows only a weak dependence on solar activity, whereas the mean vertical component, which normally ranges from 4.65 to 10.12 mV/m, shows a more pronounced dependence on the solar flux.

  5. A dual two dimensional electronic portal imaging device transit dosimetry model based on an empirical quadratic formalism

    PubMed Central

    Metwaly, M; Glegg, M; Baggarley, S P; Elliott, A

    2015-01-01

    Objective: This study describes a two dimensional electronic portal imaging device (EPID) transit dosimetry model that can predict either: (1) in-phantom exit dose, or (2) EPID transit dose, for treatment verification. Methods: The model was based on a quadratic equation that relates the reduction in intensity to the equivalent path length (EPL) of the attenuator. In this study, two sets of quadratic equation coefficients were derived from calibration dose planes measured with EPID and ionization chamber in water under reference conditions. With two sets of coefficients, EPL can be calculated from either EPID or treatment planning system (TPS) dose planes. Consequently, either the in-phantom exit dose or the EPID transit dose can be predicted from the EPL. The model was tested with two open, five wedge and seven sliding window prostate and head and neck intensity-modulated radiation therapy (IMRT) fields on phantoms. Results were analysed using absolute gamma analysis (3%/3 mm). Results: The open fields gamma pass rates were >96.8% for all comparisons. For wedge and IMRT fields, comparisons between predicted and TPS-computed in-phantom exit dose resulted in mean gamma pass rate of 97.4% (range, 92.3–100%). As for the comparisons between predicted and measured EPID transit dose, the mean gamma pass rate was 97.5% (range, 92.6–100%). Conclusion: An EPID transit dosimetry model that can predict in-phantom exit dose and EPID transit dose was described and proven to be valid. Advances in knowledge: The described model is practical, generic and flexible to encourage widespread implementation of EPID dosimetry for the improvement of patients' safety in radiotherapy. PMID:25969867
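    The quadratic formalism above can be illustrated by inverting an assumed quadratic relation between log-attenuation and equivalent path length. The functional form and the coefficients below are illustrative stand-ins for the calibration-derived coefficients, not the paper's actual values:

```python
import math

def epl_from_transmission(I, I0, a=0.05, b=1e-4):
    """Invert an assumed quadratic attenuation relation
        ln(I0/I) = a*t + b*t**2
    for the equivalent path length t, taking the positive root of
    b*t**2 + a*t - ln(I0/I) = 0."""
    y = math.log(I0 / I)
    return (-a + math.sqrt(a * a + 4.0 * b * y)) / (2.0 * b)

# Round trip: choose a path length, attenuate, recover it.
t_true = 20.0  # water-equivalent path length, cm
I0 = 1.0
I = I0 * math.exp(-(0.05 * t_true + 1e-4 * t_true**2))
assert abs(epl_from_transmission(I, I0) - t_true) < 1e-9
```

    With one coefficient set calibrated against EPID dose planes and another against ionization-chamber dose planes, the same inversion yields the EPL from either data source, which is what allows the model to predict in both directions.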

  6. Comparing methods for modelling spreading cell fronts.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E

    2014-07-21

    Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, across a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations; thus, we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made.
Copyright © 2014 Elsevier Ltd. All rights reserved.
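    The mean-field approximation (i) discussed above replaces the volume-excluding lattice process with a deterministic equation for the average site occupancy: discrete diffusion from motility plus logistic proliferation. A minimal sketch, with illustrative parameter values and boundary handling:

```python
import numpy as np

def mean_field_front(num_sites=200, steps=2000, dt=0.01, Pm=1.0, Pp=0.1):
    """Mean-field occupancy C_i for a 1D volume-excluding birth-migration
    process: dC/dt = (Pm/2)*laplacian(C) + Pp*C*(1 - C), forward Euler,
    reflecting boundaries."""
    C = np.zeros(num_sites)
    C[:20] = 1.0  # initially fully occupied region on the left
    for _ in range(steps):
        lap = np.roll(C, 1) + np.roll(C, -1) - 2 * C
        lap[0] = C[1] - C[0]      # reflecting left boundary
        lap[-1] = C[-2] - C[-1]   # reflecting right boundary
        C = C + dt * (0.5 * Pm * lap + Pp * C * (1.0 - C))
        C = np.clip(C, 0.0, 1.0)
    return C

C = mean_field_front()
assert C[0] > 0.99                 # behind the front the lattice is full
assert C[-1] < 0.01                # ahead of the front it is still empty
assert np.all(np.diff(C) <= 1e-9)  # monotone front profile
```

    The pair-wise and one-hole approximations instead track two-site quantities (neighbour correlations, hole statistics), which is what lets them capture behaviour this single-site closure misses.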

  7. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    NASA Astrophysics Data System (ADS)

    Larsen, Poul S.; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-04-01

    Prediction of the somatic growth of blue mussels, Mytilus edulis, based on data from two field-growth studies of mussels in suspended net-bags in Danish waters was made with three models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG) model. Here, the standard BEG model has been expanded to include the temperature dependence of filtration rate and respiration and an ad hoc modification to ensure a smooth transition to zero ingestion as the chlorophyll a (chl a) concentration approaches zero, both guided by published data. The first, 21-day field study was conducted at nearly constant environmental conditions with a mean chl a concentration of C = 2.7 μg L⁻¹, and the observed monotonic growth in the dry weight of soft parts was best predicted by DEB, while the BEG and SFG models produced lower growth. The second, 165-day field study was affected by large variations in chl a and temperature, and the observed growth varied accordingly; nevertheless, DEB and SFG predicted monotonic growth in good agreement with the mean pattern, while BEG mimicked the field data in response to the observed changes in chl a concentration and temperature. The general features of the models were that DEB produced the best average predictions, SFG mostly underestimated growth, whereas only BEG was sensitive to variations in chl a concentration and temperature. The DEB and SFG models rely on calibration of the half-saturation coefficient to optimize the food ingestion term against observed growth, whereas BEG is independent of observed growth, as its predictions rely solely on the time history of the local chl a concentration and temperature.

  8. Theoretical model for thin ferroelectric films and the multilayer structures based on them

    NASA Astrophysics Data System (ADS)

    Starkov, A. S.; Pakhomov, O. V.; Starkov, I. A.

    2013-06-01

    A modified Weiss mean-field theory is used to study the dependence of the properties of a thin ferroelectric film on its thickness. The possibility of introducing gradient terms into the thermodynamic potential is analyzed using the calculus of variations. An integral equation is introduced to generalize the well-known Langevin equation to the case of the boundaries of a ferroelectric. An analysis of this equation leads to the existence of a transition layer at the interface between ferroelectrics or a ferroelectric and a dielectric. The permittivity of this layer is shown to depend on the electric field direction even if the ferroelectrics in contact are homogeneous. The results obtained in terms of the Weiss model are compared with the results of the models based on the correlation effect and the presence of a dielectric layer at the boundary of a ferroelectric and with experimental data.
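    In the Weiss mean-field picture used above, the bulk polarization follows from a Langevin-function self-consistency condition. A minimal reduced-units sketch (the Weiss constant and the simple fixed-point iteration are illustrative; the boundary and gradient effects that are the paper's focus are not included):

```python
import math

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-argument limit x/3."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def weiss_polarization(T, E=0.0, lam=3.0, iters=500):
    """Self-consistent Weiss mean-field polarization in reduced units:
    p = L((E + lam*p)/T). With these units the mean-field transition
    temperature is Tc = lam/3."""
    p = 1.0  # start from the saturated state
    for _ in range(iters):
        p = langevin((E + lam * p) / T)
    return p

assert weiss_polarization(T=0.5) > 0.5        # ordered well below Tc = 1
assert abs(weiss_polarization(T=2.0)) < 1e-6  # disordered above Tc
```

    The paper's integral generalization of the Langevin equation effectively makes the effective field acting on a site position-dependent near a boundary, which is what produces the transition layer.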

  9. Diamond-Based Magnetic Imaging with Fourier Optical Processing

    NASA Astrophysics Data System (ADS)

    Backlund, Mikael P.; Kehayias, Pauli; Walsworth, Ronald L.

    2017-11-01

    Diamond-based magnetic field sensors have attracted great interest in recent years. In particular, wide-field magnetic imaging using nitrogen-vacancy (NV) centers in diamond has been previously demonstrated in condensed matter, biological, and paleomagnetic applications. Vector magnetic imaging with NV ensembles typically requires a significant applied field (>10 G) to resolve the contributions from four crystallographic orientations, hindering studies of magnetic samples that require measurement in low or independently specified bias fields. Here we model and measure the complex amplitude distribution of NV emission at the microscope's Fourier plane and show that by modulating this collected light at the Fourier plane, one can decompose the NV ensemble magnetic resonance spectrum into its constituent orientations by purely optical means. This decomposition effectively extends the dynamic range at a given bias field and enables wide-field vector magnetic imaging at arbitrarily low bias fields, thus broadening potential applications of NV imaging and sensing. Our results demonstrate that NV-based microscopy stands to benefit greatly from Fourier optical approaches, which have already found widespread utility in other branches of microscopy.

  10. Theoretical approaches to the steady-state statistical physics of interacting dissipative units

    NASA Astrophysics Data System (ADS)

    Bertin, Eric

    2017-02-01

    The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.

  11. Interpreting tracer breakthrough tailing from different forced-gradient tracer experiment configurations in fractured bedrock

    USGS Publications Warehouse

    Becker, M.W.; Shapiro, A.M.

    2003-01-01

    Conceptual and mathematical models are presented that explain tracer breakthrough tailing in the absence of significant matrix diffusion. Model predictions are compared to field results from radially convergent, weak-dipole, and push-pull tracer experiments conducted in a saturated crystalline bedrock. The models are based upon the assumption that flow is highly channelized, that the mass of tracer in a channel is proportional to the cube of the mean channel aperture, and the mean transport time in the channel is related to the square of the mean channel aperture. These models predict the consistent -2 straight line power law slope observed in breakthrough from radially convergent and weak-dipole tracer experiments and the variable straight line power law slope observed in push-pull tracer experiments with varying injection volumes. The power law breakthrough slope is predicted in the absence of matrix diffusion. A comparison of tracer experiments in which the flow field was reversed to those in which it was not indicates that the apparent dispersion in the breakthrough curve is partially reversible. We hypothesize that the observed breakthrough tailing is due to a combination of local hydrodynamic dispersion, which always increases in the direction of fluid velocity, and heterogeneous advection, which is partially reversed when the flow field is reversed. In spite of our attempt to account for heterogeneous advection using a multipath approach, a much smaller estimate of hydrodynamic dispersivity was obtained from push-pull experiments than from radially convergent or weak dipole experiments. These results suggest that although we can explain breakthrough tailing as an advective phenomenon, we cannot ignore the relationship between hydrodynamic dispersion and flow field geometry at this site. The design of the tracer experiment can severely impact the estimation of hydrodynamic dispersion and matrix diffusion in highly heterogeneous geologic media.

  12. GOCO05c: A New Combined Gravity Field Model Based on Full Normal Equations and Regionally Varying Weighting

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2017-05-01

    GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.

  13. Development of a Josephson vortex two-state system based on a confocal annular Josephson junction

    NASA Astrophysics Data System (ADS)

    Monaco, Roberto; Mygind, Jesper; Koshelets, Valery P.

    2018-07-01

    We report theoretical and experimental work on the development of a Josephson vortex two-state system based on a confocal annular Josephson tunnel junction (CAJTJ). The key ingredient of this geometrical configuration is a periodically variable width that generates a spatial vortex potential with bistable states. This intrinsic vortex potential can be tuned by an externally applied magnetic field and tilted by a bias current. The two-state system is accurately modeled by a one-dimensional sine-Gordon-like equation, by means of which one can numerically calculate both the magnetic field needed to set the vortex in a given state and the vortex-depinning currents. Experimental data taken at 4.2 K on high-quality Nb/Al-AlOx/Nb CAJTJs with an individual trapped fluxon support the presence of a robust and finely tunable double-well potential for which reliable manipulation of the vortex state has been demonstrated classically. The vortex is prepared in a given potential by means of an externally applied magnetic field, while the state readout is accomplished by measuring the vortex-depinning current in a small magnetic field. Our proof-of-principle experiment convincingly demonstrates that the proposed vortex two-state system based on CAJTJs is robust and workable.

  14. Polarization of the interference field during reflection of electromagnetic waves from an intermedia boundary

    NASA Astrophysics Data System (ADS)

    Bulakhov, M. G.; Buyanov, Yu. I.; Yakubov, V. P.

    1996-10-01

    Based on an analysis of the polarization structure of the interference pattern that arises when electromagnetic waves are reflected from an intermedia boundary, it is shown that a full vector measurement of the total field allows one to uniquely distinguish the incident and reflected waves at each observation point, without the use of spatial differencing. We have investigated the stability of these procedures with respect to measurement noise by means of numerical modeling.

  15. Exploring the Influence of Differentiated Nutrition Information on Consumers' Mental Models Regarding Foods from Edible Insects: A Means-End Chain Analysis.

    PubMed

    Pambo, Kennedy O; Okello, Julius J; Mbeche, Robert M; Kinyuru, John N

    2017-01-01

    This study used a field experiment and means-end chain analysis to examine the effects of positive and perceived negative nutrition information on the households' motivations to consume insect-based foods. It used a random sample of households drawn from rural communities in Kenya. The study found that provision of nutrition information on benefits of edible insects and perceived negative aspects of insect-based foods influences participants' perceptions of insect-based foods and hence acceptance. We also found that tasting real products influenced the nature of mental constructs. The results provide marketers of edible insects with potential marketing messages for promotion.

  16. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, image degradations such as random image-capture noise, motion, and quantization effects must clearly be overcome. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements in the restored field. Previous attempts at restoring depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities; in these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. 
The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.
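    The ROMKF itself is a multidimensional reduced-order filter; as a much-simplified illustration of the underlying idea of Kalman-filtering a noisy field, here is a scalar random-walk Kalman filter applied along one scan line of a displacement field (the state model and noise variances are illustrative assumptions):

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=1e-1):
    """Scalar Kalman filter along one scan line of a noisy field.
    State model: random walk with process-noise variance q;
    measurement-noise variance r."""
    x = z[0]   # state estimate, initialized from the first sample
    p = 1.0    # estimate variance
    out = np.empty_like(z)
    out[0] = x
    for k in range(1, len(z)):
        p = p + q                 # predict
        g = p / (p + r)           # Kalman gain
        x = x + g * (z[k] - x)    # update with the noisy measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(2)
truth = np.full(500, 2.0)                   # constant true displacement
noisy = truth + rng.normal(0, 0.3, size=500)
filtered = kalman_1d(noisy)
# Filtering reduces the error relative to the raw noisy field.
assert np.mean((filtered - truth)**2) < np.mean((noisy - truth)**2)
```

    The adaptive step in the paper corresponds to changing q locally: a larger process-noise variance near suspected discontinuities lowers the smoothing there and preserves edges.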

  17. The prediction of sea-surface temperature variations by means of an advective mixed-layer ocean model

    NASA Technical Reports Server (NTRS)

    Atlas, R. M.

    1976-01-01

    An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.

  18. Hybrid Optimal Design of the Eco-Hydrological Wireless Sensor Network in the Middle Reach of the Heihe River Basin, China

    PubMed Central

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-01-01

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, another for improving the spatial prediction to evaluate remote sensing products. The reasonability of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the variable being optimized are unknown. PMID:25317762

  19. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    PubMed

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, and another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial prediction accuracy, using 15 types of simulation fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields are then predicted at multiple scales. As the scale increases, the estimated fields have higher similarities to the simulation fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown.
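
    The variogram-modeling criterion above relies on estimating semivariances from the sampled nodes. As a minimal sketch of that ingredient (our own illustration with synthetic data, not the paper's method), the classical Matheron estimator can be written as:

```python
import math
import random

def empirical_semivariance(points, values, lag, tol):
    """Matheron estimator: gamma(h) = mean of 0.5*(z_i - z_j)**2 over
    all point pairs whose separation distance falls within lag +/- tol."""
    sq = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = math.hypot(points[i][0] - points[j][0],
                           points[i][1] - points[j][1])
            if abs(h - lag) <= tol:
                sq.append(0.5 * (values[i] - values[j]) ** 2)
    return sum(sq) / len(sq) if sq else float("nan")

# Synthetic spatially structured field: a trend in x plus noise, so the
# semivariance must grow with lag distance.
random.seed(0)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
vals = [x + random.gauss(0, 1) for x, _ in pts]
gamma_short = empirical_semivariance(pts, vals, lag=10, tol=5)
gamma_long = empirical_semivariance(pts, vals, lag=60, tol=5)
```

    With the synthetic trend field the semivariance at the longer lag comes out clearly larger, which is the spatial-variability signal a variogram-based sampling design is built to capture.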

  20. Low-high junction theory applied to solar cells

    NASA Technical Reports Server (NTRS)

    Godlewski, M. P.; Baraona, C. R.; Brandhorst, H. W., Jr.

    1973-01-01

    Recent use of alloying techniques for rear contact formation has yielded a new kind of silicon solar cell, the back surface field (BSF) cell, with abnormally high open circuit voltage and improved radiation resistance. Several analytical models for open circuit voltage based on the reverse saturation current are formulated to explain these observations. The zero SRV case of the conventional cell model, the drift field model, and the low-high junction (LHJ) model can predict the experimental trends. The LHJ model applies the theory of the low-high junction and is considered to reflect a more realistic view of cell fabrication. This model can predict the experimental trends observed for BSF cells. Detailed descriptions and derivations for the models are included. The correspondences between them are discussed. This modeling suggests that the meaning of minority carrier diffusion length measured in BSF cells be reexamined.

  1. On The Development of One-way Nesting of Air-pollution Model Smog Into Numerical Weather Prediction Model Eta

    NASA Astrophysics Data System (ADS)

    Halenka, T.; Bednar, J.; Brechler, J.

    The spatial distribution of air pollution on the regional scale (the Bohemian region) is simulated by means of the Charles University puff model SMOG. The results are used to assess the concentration fields of ozone, nitrogen oxides and other ozone precursors. The current improved version of the model covers up to 16 groups of basic compounds and is based on trajectory computation and puff interaction, both by means of Gaussian diffusion mixing and chemical reactions of the basic species. Generally, the trajectory-computation method is valuable mainly for episode simulation; nevertheless, climatological studies can be carried out as well by means of an average wind rose. For the study presented here, a huge database of real emission sources was incorporated, with all kinds of sources included. A problem with the background concentration values was removed. The model SMOG has been nested into the forecast model ETA to obtain the appropriate meteorological input data. We can estimate air pollution characteristics both for episode analysis and for the prediction of future air quality conditions. The necessary prognostic variables from the numerical weather prediction model are taken for the region of central Bohemia, where the original puff model was tested. We used mainly the 850 hPa wind field for the computation of prognostic trajectories; the influence of surface temperature as a parameter of the photochemical reactions, as well as the effect of cloudiness, has been tested.

  2. Atom transistor from the point of view of nonequilibrium dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Dunjko, V.; Olshanii, M.

    2015-12-01

    We analyze the atom field-effect transistor scheme (Stickney et al 2007 Phys. Rev. A 75 013608) using the standard tools of quantum and classical nonequilibrium dynamics. We first study the correspondence between the quantum and the mean-field descriptions of this system by computing, both ab initio and using their mean-field analogs, the deviations from the Eigenstate Thermalization Hypothesis, the quantum fluctuations, and the density of states. We find that, as far as the quantities of interest are concerned, the mean-field model can serve as a semi-classical emulator of the quantum system. Then, using the mean-field model, we interpret the point of maximal output signal in our transistor as the onset of ergodicity: the point where the system becomes, in principle, able to attain the thermal values of the former integrals of motion, albeit not being fully thermalized yet.

  3. Non-elliptic wavevector anisotropy for magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Narita, Y.

    2015-11-01

    A model of non-elliptic wavevector anisotropy is developed for the inertial-range spectrum of magnetohydrodynamic turbulence and is presented in the two-dimensional wavevector domain spanning the directions parallel and perpendicular to the mean magnetic field. The non-elliptic model is a variation of the elliptic model with different scalings of the wavevector components parallel and perpendicular to the mean magnetic field. The non-elliptic anisotropy model reproduces the smooth transition of the power-law spectra from an index of -2 in the projection parallel to the mean magnetic field to an index of -5/3 in the perpendicular projection observed in solar wind turbulence, and is as competitive as the critical balance model in explaining the measured frequency spectra in the solar wind. The parameters of the non-elliptic spectrum model are compared with solar wind observations.

  4. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse model is replaced by the signal matrix, yielding a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced; the new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The proposed algorithm can effectively overcome the difficulties caused by coherent sources and a small number of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is shown in this paper.
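
    The SVD step described above can be sketched as follows; the array sizes, stand-in steering vectors and noise level are our own illustrative assumptions, not the paper's underwater scenario.

```python
import numpy as np

# Observation matrix Y (sensors x snapshots) for two sources in weak noise.
rng = np.random.default_rng(1)
m, n_snap, n_src = 8, 100, 2
A = rng.standard_normal((m, n_src))           # stand-in steering vectors
S = rng.standard_normal((n_src, n_snap))      # source waveforms
Y = A @ S + 0.05 * rng.standard_normal((m, n_snap))

# SVD of the observation matrix; keep the dominant left singular vectors
# scaled by their singular values as the reduced "signal matrix".
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Y_sig = U[:, :n_src] * s[:n_src]              # m x n_src replaces m x n_snap
```

    Replacing the m x n_snap observation matrix by the m x n_src signal matrix shrinks the problem and discards most of the noise subspace, which is the reduction in scale and noise level the abstract refers to.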

  5. Progress in Developing a New Field-theoretical Crossover Equation-of-State

    NASA Technical Reports Server (NTRS)

    Rudnick, Joseph; Barmatz, M.; Zhong, Fang

    2003-01-01

    A new field-theoretical crossover equation-of-state model is being developed. This model of a liquid-gas critical point provides a bridge between the asymptotic equation-of-state behavior close to the transition, obtained by the Guida and Zinn-Justin parametric model [J. Phys. A: Math. Gen. 31, 8103 (1998)], and the expected mean field behavior farther away. The crossover is based on the beta function for the renormalized fourth-order coupling constant and incorporates the correct crossover exponents and critical amplitude ratios in both regimes. A crossover model is now being developed that is consistent with predictions along the critical isochore and along the coexistence curve of the minimal subtraction renormalization approach developed by Dohm and co-workers and recently applied to the O(1) universality class [Phys. Rev. E, 67, 021106 (2003)]. Experimental measurements of the heat capacity at constant volume, isothermal susceptibility, and coexistence curve near the He-3 critical point are being compared to the predictions of this model. The results of these comparisons will be presented.

  6. Random phase approximation and cluster mean field studies of hard core Bose Hubbard model

    NASA Astrophysics Data System (ADS)

    Alavani, Bhargav K.; Gaude, Pallavi P.; Pai, Ramesh V.

    2018-04-01

    We investigate zero temperature and finite temperature properties of the Bose Hubbard Model in the hard core limit using Random Phase Approximation (RPA) and Cluster Mean Field Theory (CMFT). We show that our RPA calculations are able to capture quantum and thermal fluctuations significantly better than CMFT.
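
    For orientation, the crudest treatment of this limit, a single-site decoupling mean field (simpler than both the RPA and the CMFT of the paper), can be solved self-consistently in a few lines. In the hard-core limit the local Hilbert space is {|0>, |1>}, and the two-level mean-field ground state gives the self-consistency map psi = zt*psi / sqrt(mu^2 + 4*(zt*psi)^2), whose nonzero fixed point is psi = sqrt(1 - (mu/zt)^2)/2 inside the superfluid lobe; here z is the coordination number and t the hopping.

```python
import math

def superfluid_order_parameter(mu, zt, iters=500):
    """Single-site mean field for hard-core bosons: local basis {|0>, |1>},
    H_MF = -zt*psi*(b + b^dag) - mu*n. For this two-level problem the ground
    state yields <b> = Delta / sqrt(mu**2 + 4*Delta**2) with Delta = zt*psi,
    so we iterate the self-consistency map to its fixed point."""
    psi = 0.3  # any small positive seed
    for _ in range(iters):
        delta = zt * psi
        psi = delta / math.sqrt(mu**2 + 4 * delta**2) if delta else 0.0
    return psi

psi_tip = superfluid_order_parameter(mu=0.0, zt=1.0)   # maximal value 1/2
psi_mid = superfluid_order_parameter(mu=0.6, zt=1.0)   # sqrt(1-0.36)/2 = 0.4
psi_mott = superfluid_order_parameter(mu=1.5, zt=1.0)  # outside the lobe: 0
```

    RPA and CMFT improve on exactly this kind of single-site picture by restoring fluctuations around the mean-field state.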

  7. A diagram for evaluating multiple aspects of model performance in simulating vector fields

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong

    2016-12-01

    Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
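
    The three statistics can be computed directly from the definitions given above. The toy vector fields below are our own (a reference field compared against a doubled copy of itself), chosen to show that VSC captures only the pattern while RMSL and RMSVD expose the systematic difference in vector length:

```python
import math

def vfe_stats(A, B):
    """A, B: lists of (u, v) vectors (all nonzero) on the same grid."""
    n = len(A)
    # VSC: arithmetic mean of inner products of unit vectors, in [-1, 1].
    vsc = sum((ua * ub + va * vb) / (math.hypot(ua, va) * math.hypot(ub, vb))
              for (ua, va), (ub, vb) in zip(A, B)) / n
    rmsl_a = math.sqrt(sum(u * u + v * v for u, v in A) / n)
    rmsl_b = math.sqrt(sum(u * u + v * v for u, v in B) / n)
    rmsvd = math.sqrt(sum((ua - ub) ** 2 + (va - vb) ** 2
                          for (ua, va), (ub, vb) in zip(A, B)) / n)
    return vsc, rmsl_a, rmsl_b, rmsvd

ref = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
scaled = [(2 * u, 2 * v) for u, v in ref]   # same pattern, doubled length
vsc, la, lb, rmsvd = vfe_stats(ref, scaled)
```

    Here VSC = 1 even though the second field is twice as long everywhere; the discrepancy shows up entirely in RMSL and RMSVD, which is exactly the decomposition the VFE diagram displays.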

  8. Acid-base properties of 2:1 clays. I. Modeling the role of electrostatics.

    PubMed

    Delhorme, Maxime; Labbez, Christophe; Caillet, Céline; Thomas, Fabien

    2010-06-15

    We present a theoretical investigation of the titratable charge of clays with various structural charges σ_b: pyrophyllite (σ_b = 0 e·nm^-2), montmorillonite (σ_b = -0.7 e·nm^-2) and illite (σ_b = -1.2 e·nm^-2). The calculations were carried out using a Monte Carlo method in the grand canonical ensemble and within the framework of the primitive model. The clay particle was modeled as a perfect hexagonal platelet with an "ideal" crystal structure. The only fitting parameters used are the intrinsic equilibrium constants (pK_0) for the protonation/deprotonation reactions of the broken-bond sites on the lateral faces of the clay particles: silanol, ≡SiO^- + H^+ → ≡SiOH, and aluminol, ≡AlO^-1/2 + H^+ → ≡AlOH^+1/2. Simulations are found to give a satisfactory description of the acid-base titration of montmorillonite without any additional fitting parameters. In particular, by combining the electrostatics of the crystal substitutions with the ionization constants, the simulations satisfactorily capture the shift of the montmorillonite titration curve with ionic strength. A change in ionic strength modulates the screening of the electrostatic interactions, which results in this shift. Accordingly, the PZNPC is found to shift toward alkaline pH upon increasing the permanent basal charge. Unlike previous mean field model results, a significant decrease in PZNPC values is predicted in response to stack formation. Finally, the mean field approach is shown to be inappropriate for studying the acid-base properties of clays.

  9. T-P Phase Diagram of Nitrogen at High Pressures

    NASA Astrophysics Data System (ADS)

    Algul, G.; Enginer, Y.; Yurtseven, H.

    2018-05-01

    Employing a mean field model, we calculate the T-P phase diagram of molecular nitrogen at high pressures up to 200 GPa. Experimental data from the literature are used to fit a quadratic function in T and P describing the phase line equations, which have been derived from the mean field model studied here for N2, and the fitted parameters are determined. Our model study shows that the observed T-P phase diagram can be described satisfactorily for the first-order transitions between the phases at both low and high pressures in nitrogen. Some thermodynamic quantities can also be predicted as functions of temperature and pressure from the mean field model studied here, and they can be compared with the experimental data.
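
    The quadratic fit of a phase line in T and P can be reproduced with ordinary least squares; the phase-boundary samples below are hypothetical numbers of our own, not the nitrogen data:

```python
def fit_quadratic(P, T):
    """Least-squares fit T = a + b*P + c*P**2 via the 3x3 normal equations,
    solved with Gaussian elimination and partial pivoting."""
    n = len(P)
    S = lambda k: sum(p ** k for p in P)
    Sy = lambda k: sum(t * p ** k for p, t in zip(P, T))
    M = [[n,    S(1), S(2), Sy(0)],
         [S(1), S(2), S(3), Sy(1)],
         [S(2), S(3), S(4), Sy(2)]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (M[r][3] - sum(M[r][k] * coef[k]
                                 for k in range(r + 1, 3))) / M[r][r]
    return coef  # a, b, c

# Hypothetical phase-line samples (T in K, P in GPa), not the paper's data.
P = [10, 40, 80, 120, 160, 200]
T = [300 + 8 * p - 0.02 * p * p for p in P]
a, b, c = fit_quadratic(P, T)
```

    With exact quadratic input the fit recovers the generating coefficients, which is the sanity check one would run before fitting real phase-boundary measurements.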

  10. Comparative study of microelectrode recording-based STN location and MRI-based STN location in low to ultra-high field (7.0 T) T2-weighted MRI images

    NASA Astrophysics Data System (ADS)

    Verhagen, Rens; Schuurman, P. Richard; van den Munckhof, Pepijn; Fiorella Contarino, M.; de Bie, Rob M. A.; Bour, Lo J.

    2016-12-01

    Objective. The correspondence between the anatomical STN and the STN observed in T2-weighted MRI images used for deep brain stimulation (DBS) targeting remains unclear. Using a new method, we compared the STN borders seen on MRI images with those estimated by intraoperative microelectrode recordings (MER). Approach. We developed a method to automatically generate a detailed estimation of STN shape and the location of its borders, based on multiple-channel MER measurements. In 33 STNs of 19 Parkinson patients, we quantitatively compared the dorsal and lateral borders of this MER-based STN model with the STN borders visualized by 1.5 T (n = 14), 3.0 T (n = 10) and 7.0 T (n = 9) T2-weighted MRI. Main results. The dorsal border was identified more dorsally on coronal T2 MRI than by the MER-based STN model, with a significant difference in the 3.0 T (range 0.97-1.19 mm) and 7.0 T (range 1.23-1.25 mm) groups. The lateral border was significantly more medial on 1.5 T (mean: 1.97 mm) and 3.0 T (mean: 2.49 mm) MRI than in the MER-based STN; a difference that was not found in the 7.0 T group. Significance. The STN extends further in the dorsal direction on coronal T2 MRI images than is measured by MER. Increasing MRI field strength to 3.0 T or 7.0 T yields similar discrepancies between MER and MRI at the dorsal STN border. In contrast, increasing MRI field strength to 7.0 T may be useful for identification of the lateral STN border and thereby improve DBS targeting.

  11. Individual-based modelling and control of bovine brucellosis

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Erivelton G.; Barbosa, Alípio M.; Silva, Marcos X.; Perc, Matjaž

    2018-05-01

    We present a theoretical approach to control bovine brucellosis. We have used individual-based modelling, which is a network-type alternative to compartmental models. Our model thus considers heterogeneous populations, and spatial aspects such as migration among herds and control actions described as pulse interventions are also easily implemented. We show that individual-based modelling reproduces the mean field behaviour of an equivalent compartmental model. Details of this process, as well as flowcharts, are provided to facilitate the reproduction of the presented results. We further investigate three numerical examples using real parameters of herds in the São Paulo state of Brazil, in scenarios which explore eradication, continuous and pulsed vaccination and meta-population effects. The obtained results are in good agreement with the expected behaviour of this disease, which ultimately showcases the effectiveness of our theory.
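
    As a toy illustration of how an individual-based model reproduces compartmental mean-field behaviour (our own SIS example with homogeneous mixing, far simpler than the brucellosis model), the endemic infected fraction of the simulation should approach the mean-field value 1 - gamma/beta:

```python
import random

def sis_ibm(n=2000, beta=2.0, gamma=0.5, dt=0.1, steps=600, seed=7):
    """Individual-based SIS: per step, each susceptible is infected with
    probability beta*(I/n)*dt and each infected recovers with probability
    gamma*dt; all individuals are updated synchronously."""
    rng = random.Random(seed)
    infected = [i < n // 10 for i in range(n)]  # start with 10% infected
    history = []
    for _ in range(steps):
        frac = sum(infected) / n
        p_inf, p_rec = beta * frac * dt, gamma * dt
        infected = [(rng.random() >= p_rec) if inf else (rng.random() < p_inf)
                    for inf in infected]
        history.append(sum(infected) / n)
    return sum(history[-200:]) / 200  # time-averaged endemic fraction

endemic_ibm = sis_ibm()
endemic_mf = 1 - 0.5 / 2.0  # mean-field prediction: 1 - gamma/beta = 0.75
```

    The agreement between the stochastic individual-level simulation and the deterministic compartmental fixed point is the same kind of consistency check the authors report for their brucellosis model.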

  12. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters that influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short-fiber composites. The results are compared with (1) simulations using other micromechanics-based mean field models and finite element (FE) unit cell models from the literature for elastic material behavior, and (2) finite element unit cell models and a new semianalytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability for simulating discontinuously reinforced composite material behavior.

  13. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters that influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short-fiber composites. The results are compared with simulations using other micromechanics-based mean field models and finite element (FE) unit cell models from the literature for elastic material behavior, as well as with finite element unit cell models and a new semianalytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability for simulating discontinuously reinforced composite material behavior.

  14. Spatial Irrigation Management Using Remote Sensing Water Balance Modeling and Soil Water Content Monitoring

    NASA Astrophysics Data System (ADS)

    Barker, J. Burdette

    Spatially informed irrigation management may improve the optimal use of water resources. Sub-field scale water balance modeling and measurement were studied in the context of irrigation management. A spatial remote-sensing-based evapotranspiration and soil water balance model was modified and validated for use in real-time irrigation management. The modeled ET compared well with eddy covariance data from eastern Nebraska. The placement and number of sub-field scale soil water content measurement locations were also studied. Variance reduction factor and temporal stability were used to analyze soil water content data from an eastern Nebraska field. No consistent predictor of soil water temporal stability patterns was identified. At least three monitoring locations were needed per irrigation management zone to adequately quantify the mean soil water content. The remote-sensing-based water balance model was used to manage irrigation in a field experiment. The research included an eastern Nebraska field in 2015 and 2016 and a western Nebraska field in 2016, for a total of 210 plot-years. The responses of maize and soybean to irrigation using variations of the model were compared with responses from treatments using soil water content measurement and from a rainfed treatment. The remote-sensing-based treatment prescribed more irrigation than the other treatments in all cases. Excessive modeled soil evaporation and insufficient drainage times were suspected causes of the model drift. Modifying evaporation and drainage reduced the modeled soil water depletion error. None of the included response variables differed significantly between treatments in western Nebraska. In eastern Nebraska, treatment differences for maize and soybean included evapotranspiration and a combined variable comprising evapotranspiration and deep percolation. Both variables were greatest for the remote-sensing model when the differences were statistically significant. Differences in maize yield in 2015 were attributed to random error. Soybean yield was lowest for the remote-sensing-based treatment and greatest for the rainfed treatment, possibly because of overwatering and lodging. The model performed well considering that it did not include soil water content measurements during the season. Future work should improve the soil evaporation and drainage formulations, which matter most under excessive precipitation, and should include aerial remote sensing imagery and soil water content measurement as model inputs.

  15. Quantitative composition determination at the atomic level using model-based high-angle annular dark field scanning transmission electron microscopy.

    PubMed

    Martinez, G T; Rosenauer, A; De Backer, A; Verbeeck, J; Van Aert, S

    2014-02-01

    High angle annular dark field scanning transmission electron microscopy (HAADF STEM) images provide sample information which is sensitive to the chemical composition. The image intensities indeed scale with the mean atomic number Z. To some extent, chemically different atomic column types can therefore be distinguished visually. However, in order to quantify the atomic column composition with high accuracy and precision, model-based methods are necessary. For this purpose, an empirical incoherent parametric imaging model can be used, of which the unknown parameters are determined using statistical parameter estimation theory (Van Aert et al., 2009, [1]). In this paper, it is shown how this method can be combined with frozen-lattice multislice simulations in order to evolve from a relative toward an absolute quantification of the composition of single atomic columns with mixed atom types. Furthermore, the validity of the model assumptions is explored and discussed.

  16. Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics

    PubMed Central

    Baumketner, Andrij

    2009-01-01

    The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that, in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits a strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522

  17. Statistical modeling of temperature, humidity and wind fields in the atmospheric boundary layer over the Siberian region

    NASA Astrophysics Data System (ADS)

    Lomakina, N. Ya.

    2017-11-01

    This work presents the results of an applied climatic division of the Siberian region into districts, based on a methodology for the objective classification of atmospheric boundary layer climates by the "temperature-moisture-wind" complex, realized using the method of principal components and special similarity criteria for the average profiles and the eigenvalues of the correlation matrices. On the territory of Siberia, 14 homogeneous regions were identified for the winter season and 10 regions for the summer. Local statistical models were constructed for each region. These include vertical profiles of the mean values, mean-square deviations, and matrices of the interlevel correlations of temperature, specific humidity, and zonal and meridional wind velocity. The advantage of the obtained local statistical models over regional models is shown.

  18. Aeroacoustic directivity via wave-packet analysis of mean or base flows

    NASA Astrophysics Data System (ADS)

    Edstrand, Adam; Schmid, Peter; Cattafesta, Louis

    2017-11-01

    Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.

  19. Mean field study of a propagation-turnover lattice model for the dynamics of histone marking

    NASA Astrophysics Data System (ADS)

    Yao, Fan; Li, FangTing; Li, TieJun

    2017-02-01

    We present a mean field study of a propagation-turnover lattice model proposed by Hodges and Crabtree [Proc. Natl. Acad. Sci. 109, 13296 (2012)] for understanding how posttranslational histone marks modulate gene expression in mammalian cells. The kinetics of the lattice model consists of nucleation, propagation and turnover mechanisms, and exhibits a second-order phase transition for the histone marking domain. We show rigorously that the dynamics essentially depends on a single non-dimensional parameter κ = k+/k-, the ratio between the propagation and turnover rates, as had been observed in simulations. We then study the lowest-order mean field approximation and observe the phase transition, with an analytically obtained critical parameter. Boundary layer analysis is utilized to investigate the structure of the decay profile of the mark density. We also study a higher-order mean field approximation to achieve a sharper estimate of the critical transition parameter and more detailed features. The comparison between simulation and theoretical results demonstrates the validity of our theory.
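
    A zero-dimensional caricature of the propagation-turnover competition (our own illustration, not the paper's lattice equations) already shows why the steady state depends only on κ = k+/k-: the rate equation dm/dt = k+ m(1-m) - k- m has a nonzero fixed point m* = 1 - 1/κ precisely when κ > 1.

```python
def steady_mark_density(k_plus, k_minus, m0=0.5, dt=0.01, steps=5000):
    """Forward-Euler integration of dm/dt = k_plus*m*(1-m) - k_minus*m,
    a mean-field caricature of propagation (spreading into unmarked
    neighbours) competing with turnover (mark removal)."""
    m = m0
    for _ in range(steps):
        m += dt * (k_plus * m * (1.0 - m) - k_minus * m)
    return m

m_super = steady_mark_density(k_plus=4.0, k_minus=1.0)  # kappa = 4  -> 0.75
m_sub = steady_mark_density(k_plus=0.5, k_minus=1.0)    # kappa = 0.5 -> 0
```

    Rescaling time shows the fixed points depend only on the ratio k+/k-, which is the non-dimensionalization the abstract refers to; the full lattice model adds nucleation and spatial structure on top of this.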

  20. A proposed International Geomagnetic Reference Field for 1965- 1985.

    USGS Publications Warehouse

    Peddie, N.W.; Fabiano, E.B.

    1982-01-01

    A set of spherical harmonic models describing the Earth's main magnetic field from 1965 to 1985 has been developed and is proposed as the next revision of the International Geomagnetic Reference Field (IGRF). A tenth-degree-and-order spherical harmonic model of the main field was derived from Magsat data. A series of eighth-degree-and-order spherical harmonic models of the secular variation of the main field was derived from magnetic observatory annual mean values. Models of the main field at 1965, 1970, 1975, and 1980 were obtained by extrapolating the main-field model using the secular variation models. -Authors

  1. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    PubMed

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and the variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy-modified Poisson-Nernst-Planck or Poisson-Boltzmann equation, together with state equations for the self-energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self-energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which is shown to capture the ionic correlation and the dielectric self-energy. Through simulations of the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond mean-field theory, and also show a strong effect of the space- or field-dependent dielectric permittivity on the double-layer structure.
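
    For context, the classical point-ion Debye-Hückel screening length, the mean-field baseline that the generalized equation extends, can be computed directly; the 0.1 M 1:1 electrolyte at 298 K with eps_r = 78.5 is our own assumed example:

```python
import math

# Physical constants (SI).
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro constant, 1/mol

def debye_length(conc_molar, z, eps_r, temp_k):
    """Debye screening length for a symmetric z:z electrolyte:
    kappa^2 = sum_i n_i q_i^2 / (eps_r * eps0 * kB * T)."""
    n = conc_molar * 1000.0 * N_A  # number density per species, 1/m^3
    kappa_sq = 2.0 * n * (z * E_CHARGE) ** 2 / (eps_r * EPS0 * K_B * temp_k)
    return 1.0 / math.sqrt(kappa_sq)

lam = debye_length(conc_molar=0.1, z=1, eps_r=78.5, temp_k=298.0)
# Screened pair interactions decay as exp(-r/lam)/r instead of the bare 1/r.
```

    For 0.1 M this gives a screening length just under 1 nm, the familiar rule of thumb lambda_D ≈ 0.3 nm / sqrt(I [M]); ion size, correlations and a varying permittivity modify this mean-field picture.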

  2. MODELING AIR TOXICS AND PM 2.5 CONCENTRATION FIELDS AS A MEANS FOR FACILITATING HUMAN EXPOSURE ASSESSMENTS

    EPA Science Inventory

    The capability of the US EPA Models-3/Community Multiscale Air Quality (CMAQ) modeling system is extended to provide gridded ambient air quality concentration fields at fine scales. These fields will drive human exposure to air toxics and fine particulate matter (PM2.5) models...

  3. Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area

    NASA Astrophysics Data System (ADS)

    Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.

    2015-03-01

    We perform a land surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies between 6 modern stand-alone land surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by 5 different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, air and surface frost indexes). There is good agreement (99-135 x 10^4 km^2) between the two diagnostic methods based on air temperature, which are also consistent with the best current observation-based estimate of actual permafrost area (101 x 10^4 km^2). However, the uncertainty (1-128 x 10^4 km^2) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0°C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in permafrost distribution can be made for the Tibetan Plateau.
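    The air-temperature-based diagnostics referred to above are typically variants of the Nelson-Outcalt air frost number, which compares freezing and thawing degree-day sums. A minimal sketch with made-up monthly temperatures (not the intercomparison's data):

```python
import math

def air_frost_number(monthly_temp_c, days_per_month=30):
    """Air frost number from 12 monthly mean temperatures (deg C).

    DDF and DDT are the freezing and thawing degree-day sums;
    permafrost is commonly diagnosed where the frost number
    F = sqrt(DDF) / (sqrt(DDF) + sqrt(DDT)) exceeds 0.5.
    """
    ddf = sum(-t * days_per_month for t in monthly_temp_c if t < 0)
    ddt = sum(t * days_per_month for t in monthly_temp_c if t > 0)
    return math.sqrt(ddf) / (math.sqrt(ddf) + math.sqrt(ddt))

# Illustrative (not observed) monthly means:
cold_site = [-20, -18, -12, -4, 1, 5, 8, 7, 2, -5, -13, -18]
warm_site = [-2, -1, 3, 8, 14, 18, 20, 19, 14, 8, 2, -1]
```

    Applying the 0.5 threshold, the cold synthetic site is diagnosed as permafrost and the warm one is not.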

  4. TRANSPORT BY MERIDIONAL CIRCULATIONS IN SOLAR-TYPE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, T. S.; Brummell, N. H., E-mail: tsw25@soe.ucsc.edu

    2012-08-20

    Transport by meridional flows has significant consequences for stellar evolution, but is difficult to capture in global-scale numerical simulations because of the wide range of timescales involved. Stellar evolution models therefore usually adopt parameterizations for such transport based on idealized laminar or mean-field models. Unfortunately, recent attempts to model this transport in global simulations have produced results that are not consistent with any of these idealized models. In an effort to explain the discrepancies between global simulations and idealized models, here we use three-dimensional local Cartesian simulations of compressible convection to study the efficiency of transport by meridional flows below a convection zone in several parameter regimes of relevance to the Sun and solar-type stars. In these local simulations we are able to establish the correct ordering of dynamical timescales, although the separation of the timescales remains unrealistic. We find that, even though the generation of internal waves by convective overshoot produces a high degree of time dependence in the meridional flow field, the mean flow has the qualitative behavior predicted by laminar, 'balanced' models. In particular, we observe a progressive deepening, or 'burrowing', of the mean circulation if the local Eddington-Sweet timescale is shorter than the viscous diffusion timescale. Such burrowing is a robust prediction of laminar models in this parameter regime, but has never been observed in any previous numerical simulation. We argue that previous simulations therefore underestimate the transport by meridional flows.

  5. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10^9-10^10 between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges in scale most likely should not be artificially split apart into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes on the resolved flow. This is a pragmatic approach to the problem and not a complete solution to it. These models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields due to implicit and explicit numerical diffusion as well as physically based subgrid-scale turbulent mixing, resulting in smoother and less intermittent fields as compared to observations. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches equipped with a possible realization of subgrid processes and potentially coupled to the resolved scales over the range of significant scale interactions provide one alternative to address the problem. Stochastic multifractal models based on the cascade phenomenology of the atmosphere and its governing equations in particular are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior in the resolved scales. This result is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. 
    Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust behavior found is the transition of the multiscaling parameters between stable and unstable cases, which has a clear physical correspondence to the transition from the stratiform to the organized (banded) convective regime. Thus multifractal diagnostics of moist processes are fundamentally transient and should provide a physically robust basis for the downscaling and subgrid-scale parameterizations of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at scales finer than those resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at the spatial resolutions necessary for effective flash-flood and earth-flow forecasting. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction which can be attributed to aliasing.
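    The cascade idea behind such downscaling can be sketched as a discrete microcanonical multiplicative cascade: each coarse cell's rain amount is redistributed to finer cells while the coarse totals are conserved exactly. The Beta-distributed weight generator below is an illustrative assumption, not the calibrated transient model described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def microcanonical_cascade(coarse, levels, a=1.5):
    """Downscale a 1D field by a discrete multiplicative cascade.

    Each cell's mass is split between two children with random
    fractions f and 1-f (f ~ Beta(a, a)), so every coarse-cell total
    is preserved exactly (a 'microcanonical' cascade).
    """
    field = np.asarray(coarse, dtype=float)
    for _ in range(levels):
        f = rng.beta(a, a, size=field.size)
        children = np.empty(2 * field.size)
        children[0::2] = f * field          # left child gets fraction f
        children[1::2] = (1.0 - f) * field  # right child gets the rest
        field = children
    return field

coarse = np.array([4.0, 1.0, 0.0, 2.5])   # e.g. coarse rain amounts (mm)
fine = microcanonical_cascade(coarse, levels=5)
```

    Five cascade levels refine each coarse cell into 32 children; dry coarse cells stay dry, and the domain total is unchanged.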

  6. Modeling eutrophic lakes: From mass balance laws to ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Marasco, Addolorata; Ferrara, Luciano; Romano, Antonio

    Starting from integral balance laws, a model based on nonlinear ordinary differential equations (ODEs) describing the evolution of the phosphorus cycle in a lake is proposed. After showing that the usual homogeneous model is not compatible with the mixture theory, we prove that an ODE model still holds, but for the mean values of the state variables, provided that the nonhomogeneous involved fields satisfy suitable conditions. In this model the trophic state of a lake is described by the mean densities of phosphorus in water and sediments, and the phytoplankton biomass. All the quantities appearing in the model can be experimentally evaluated. To propose restoration programs, the evolution of these state variables toward stable steady-state conditions is analyzed. Moreover, the local stability analysis is performed with respect to all the model parameters. Some numerical simulations and a real application to Lake Varese conclude the paper.

  7. Modeling tracer transport in randomly heterogeneous porous media by nonlocal moment equations: Anomalous transport

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Lezama-Campos, J. L.; Guadagnini, A.; Neuman, S. P.

    2013-05-01

    Modeling tracer transport in geologic porous media suffers from the imperfect characterization of the spatial distribution of hydrogeologic properties of the system and the incomplete knowledge of processes governing transport at multiple scales. Representations of transport dynamics based on a Fickian model of the kind considered in the advection-dispersion equation (ADE) fail to capture (a) the temporal variation associated with the rate of spreading of a tracer, and (b) the distribution of early and late arrival times which are often observed in field and/or laboratory scenarios and are considered the signature of anomalous transport. Elsewhere we have presented exact stochastic moment equations to model tracer transport in randomly heterogeneous aquifers. We have also developed a closure scheme which enables one to provide numerical solutions of such moment equations at different orders of approximation. The resulting (ensemble) average and variance of concentration fields were found to display good agreement against Monte Carlo-based simulation results for mildly heterogeneous (or well-conditioned strongly heterogeneous) media. Here we explore the ability of the moment equations approach to describe the distribution of early arrival times and late-time tailing effects which can be observed in Monte Carlo-based breakthrough curves (BTCs) of the (ensemble) mean concentration. We show that BTCs of mean resident concentration calculated at a fixed space location through higher-order approximations of moment equations display long-tailing features of the kind typically associated with anomalous transport behavior, which are not represented by an ADE model with a constant dispersion parameter, such as the zero-order approximation.
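    The limitation of the constant-parameter ADE noted above can be seen directly from its pulse solution: the breakthrough curve at a fixed location peaks near the advective arrival time and its tail decays rapidly, with none of the late-time power-law tailing that marks anomalous transport. A short sketch with illustrative parameters:

```python
import numpy as np

# Breakthrough curve (BTC) at a fixed location for the 1D ADE with
# constant velocity v and dispersion D, instantaneous unit pulse at x=0:
#   C(x, t) = exp(-(x - v t)^2 / (4 D t)) / sqrt(4 pi D t)
# Parameter values are illustrative, not tied to a specific aquifer.
v, D, x = 1.0, 0.05, 10.0
t = np.linspace(0.1, 40.0, 4000)
C = np.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

t_peak = t[np.argmax(C)]          # close to the advective time x/v = 10
late = np.interp(2.0 * t_peak, t, C)  # concentration long after the peak
```

    Since `late` is many orders of magnitude below the peak, a Fickian BTC cannot reproduce the persistent tailing the moment-equation approach is designed to capture.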

  8. Mean electromotive force generated by asymmetric fluid flow near the surface of earth's outer core

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Archana

    1992-10-01

    The φ component of the mean electromotive force (EMF), generated by asymmetric flow of fluid just beneath the core-mantle boundary (CMB), is obtained using a geomagnetic field model. This analysis is based on the supposition that the axisymmetric part of fluid flow beneath the CMB is tangentially geostrophic and toroidal. For all the epochs studied, the computed φ component is stronger in the Southern Hemisphere than in the Northern Hemisphere. Assuming a linear relationship between the EMF and the azimuthally averaged magnetic field (AAMF), the only nonzero off-diagonal components of the pseudotensor relating the EMF to the AAMF are estimated as functions of colatitude, and the physical implications of the results are discussed.

  9. Applicability of scaling behavior and power laws in the analysis of the magnetocaloric effect in second-order phase transition materials

    NASA Astrophysics Data System (ADS)

    Romero-Muñiz, Carlos; Tamura, Ryo; Tanaka, Shu; Franco, Victorino

    2016-10-01

    In recent years, universal scaling has gained renewed attention in the study of magnetocaloric materials. It has been applied to a wide variety of pure elements and compounds, ranging from rare-earth-based materials to transition metal alloys, from bulk crystalline samples to nanoparticles. It is therefore necessary to quantify the limits within which the scaling laws remain applicable for magnetocaloric research. For this purpose, a threefold approach has been followed: (a) the magnetocaloric responses of a set of materials with Curie temperatures ranging from 46 to 336 K have been modeled with a mean-field Brillouin model, (b) experimental data for Gd have been analyzed, and (c) a 3D Ising model (which is beyond the mean-field approximation) has been studied. In this way, we can demonstrate that the conclusions extracted in this work are model-independent. It is found that universal scaling remains applicable up to applied fields that provide a magnetic energy to the system of up to 8% of the thermal energy at the Curie temperature. In this range, the predicted deviations from scaling laws remain below the experimental error margin of carefully performed experiments. Therefore, for materials whose Curie temperature is close to room temperature, scaling laws at the Curie temperature would be applicable for the magnetic field range available at conventional magnetism laboratories (~10 T), well above the fields which are usually available for magnetocaloric devices.
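    The mean-field Brillouin model used in approach (a) amounts to solving a self-consistency equation for the reduced magnetization, m = B_J((3J/(J+1)) m Tc/T) at zero field. A sketch with fixed-point iteration as one simple solver choice (J = 7/2 is chosen to mimic Gd's spin; this is illustrative, not the paper's fitted model):

```python
import math

def brillouin(J, x):
    """Brillouin function B_J(x), with B_J(0) = 0."""
    if x == 0.0:
        return 0.0
    a = (2.0 * J + 1.0) / (2.0 * J)
    b = 1.0 / (2.0 * J)
    # coth(y) = 1 / tanh(y)
    return a / math.tanh(a * x) - b / math.tanh(b * x)

def reduced_magnetization(t, J=3.5, iters=500):
    """Solve m = B_J((3J/(J+1)) * m / t) by fixed-point iteration.

    t = T / Tc is the reduced temperature; for t < 1 the iteration
    converges to the spontaneous magnetization, for t > 1 to zero.
    """
    c = 3.0 * J / (J + 1.0)
    m = 1.0
    for _ in range(iters):
        m = brillouin(J, c * m / t)
    return m
```

    Below the Curie temperature the spontaneous magnetization is large; above it the only solution is m = 0, reproducing the second-order transition the scaling analysis relies on.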

  10. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. 
Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
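    The two proxies can be sketched directly from gridded model fields. The level index, layer mass, and field values below are made-up stand-ins, not actual cloud-resolving model output:

```python
import numpy as np

# Toy 3D model fields on (z, y, x) grids -- synthetic numbers only.
rng = np.random.default_rng(1)
nz, ny, nx = 20, 8, 8
w = rng.uniform(0.0, 15.0, (nz, ny, nx))      # updraft speed, m/s
q_ice = rng.uniform(0.0, 3e-3, (nz, ny, nx))  # ice hydrometeor mixing ratio, kg/kg
k15 = 12        # model level nearest -15 deg C (assumed index)
rho_dz = 500.0  # air density x layer depth, kg/m^2 (crude constant)

# Proxy 1: product of updraft and precipitating-ice mixing ratio at -15 C.
proxy1 = w[k15] * q_ice[k15]

# Proxy 2: vertically integrated ice mass in each grid column.
proxy2 = (q_ice * rho_dz).sum(axis=0)
```

    After calibrating each proxy to flash-rate units against observed peak flash rate densities, a blend would weight proxy 1 for temporal sensitivity and proxy 2 for areal coverage, as described above.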

  11. Energetic Particle Transport across the Mean Magnetic Field: Before Diffusion

    NASA Astrophysics Data System (ADS)

    Laitinen, T.; Dalla, S.

    2017-01-01

    Current particle transport models describe the propagation of charged particles across the mean field direction in turbulent plasmas as diffusion. However, recent studies suggest that at short timescales, such as soon after solar energetic particle (SEP) injection, particles remain on turbulently meandering field lines, which results in nondiffusive initial propagation across the mean magnetic field. In this work, we use a new technique to investigate how the particles are displaced from their original field lines, and we quantify the parameters of the transition from field-aligned particle propagation along meandering field lines to particle diffusion across the mean magnetic field. We show that the initial decoupling of the particles from the field lines is slow, and particles remain within a Larmor radius from their initial meandering field lines for tens to hundreds of Larmor periods, for 0.1-10 MeV protons in turbulence conditions typical of the solar wind at 1 au. Subsequently, particles decouple from their initial field lines and after hundreds to thousands of Larmor periods reach time-asymptotic diffusive behavior consistent with particle diffusion across the mean field caused by the meandering of the field lines. We show that the typical duration of the prediffusive phase, hours to tens of hours for 10 MeV protons in 1 au solar wind turbulence conditions, is significant for SEP propagation to 1 au and must be taken into account when modeling SEP propagation in the interplanetary space.

  12. Investigating Jupiter's Deep Flow Structure using the Juno Magnetic and Gravity Measurements

    NASA Astrophysics Data System (ADS)

    Duer, K.; Galanti, E.; Cao, H.; Kaspi, Y.

    2017-12-01

    Jupiter's flow below its cloud level is still largely unknown. The gravity measurements from Juno now provide initial insight into the depth of the flow via the relation between the gravity field and the flow field. Furthermore, additional constraints could be put on the flow if the expected Juno magnetic measurements are also used. Specifically, the gravity and magnetic measurements can be combined to allow a more robust estimate of the deep flow structure. However, a complexity comes from the fact that both the radial profile of the flow, and its connection to the induced magnetic field, might vary with latitude. In this study we propose a method for using the expected high-precision Juno measurements of both the magnetic and gravity fields, together with latitude-dependent models that relate the measurements to the structure of the internal flow. We simulate possible measurements by setting up specific deep wind profiles and forward calculating the resulting anomalies in both the magnetic and gravity fields. We allow these profiles to also include latitude dependence. The relation of the flow field to the gravity field is based on thermal wind balance, and its relation to the magnetic field is via a mean-field electrodynamics balance. The latter includes an alpha-effect, describing the mean magnetic effect of turbulent rotating convection, which might also vary with latitude. Using an adjoint-based optimization process, we examine the ability of the combined magnetic-gravity model to decipher the flow structure under the different potential Juno measurements. We investigate the effect of different latitude dependencies on the derived solutions and their associated uncertainties. The novelty of this study is the combination of two independent Juno measurements for the calculation of a latitudinally dependent interior flow profile. This method might lead to a better constraint of Jupiter's flow structure.

  13. A new constraint on mean-field galactic dynamo theory

    NASA Astrophysics Data System (ADS)

    Chamandy, Luke; Singh, Nishant K.

    2017-07-01

    Appealing to an analytical result from mean-field theory, we show, using a generic galaxy model, that galactic dynamo action can be suppressed by small-scale magnetic fluctuations. This is caused by the magnetic analogue of the Rädler or Ω × J effect, where rotation-induced corrections to the mean-field turbulent transport result in what we interpret to be an effective reduction of the standard α effect in the presence of small-scale magnetic fields.

  14. Phase transition studies of BiMnO3: Mean field theory approximations

    NASA Astrophysics Data System (ADS)

    Priya K. B, Lakshmi; Natesan, Baskaran

    2015-06-01

    We studied the phase transition and magneto-electric coupling effect of BiMnO3 by employing mean field theory approximations. To capture the ferromagnetic and ferroelectric transitions of BiMnO3, we construct an extended Ising model in a 2D square lattice, wherein, the magnetic (electric) interactions are described in terms of the direct interactions between the localized magnetic (electric dipole) moments of Mn ions with their nearest neighbors. To evaluate our model, we obtain magnetization, magnetic susceptibility and electric polarization using mean field approximation calculations. Our results reproduce both the ferromagnetic and the ferroelectric transitions, matching very well with the experimental reports. Furthermore, consistent with experimental observations, our mean field results suggest that there is indeed a coupling between the magnetic and electric ordering in BiMnO3.
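    A schematic of such coupled mean-field self-consistency equations: two tanh-type order parameters (magnetization m and electric-dipole order p) with a bilinear coupling g. This toy is illustrative only; the exchange, dipolar, and coupling constants are assumptions, not the paper's BiMnO3 parameters:

```python
import math

def coupled_mean_field(T, Jm=1.0, Je=0.8, g=0.2, iters=400):
    """Schematic coupled mean-field equations on a z=4 square lattice:

        m = tanh((z*Jm*m + g*p) / T)
        p = tanh((z*Je*p + g*m) / T)

    Solved by fixed-point iteration; returns (m, p). An illustrative
    toy model of magneto-electric coupling, not a fitted Hamiltonian.
    """
    z = 4
    m = p = 0.5
    for _ in range(iters):
        m = math.tanh((z * Jm * m + g * p) / T)
        p = math.tanh((z * Je * p + g * m) / T)
    return m, p
```

    Below both mean-field transition temperatures (of order z*Jm and z*Je) the two order parameters are nonzero and mutually reinforcing through g, echoing the coupled ferromagnetic and ferroelectric ordering discussed above; well above them both vanish.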

  15. The transition between immune and disease states in a cellular automaton model of clonal immune response

    NASA Astrophysics Data System (ADS)

    Bezzi, Michele; Celada, Franco; Ruffo, Stefano; Seiden, Philip E.

    1997-02-01

    In this paper we extend the Celada-Seiden (CS) model of the humoral immune response to include infectious viruses and killer T cells (cellular response). The model represents molecules and cells with bitstrings. The response of the system to the virus involves a competition between the ability of the virus to kill the host cells and the host's ability to eliminate the virus. We find two basins of attraction in the dynamics of this system; one is identified with disease and the other with the immune state. There is also an oscillating state that exists on the border of these two stable states. Fluctuations in the population of virus or antibody can end the oscillation and drive the system into one of the stable states. The introduction of mechanisms of cross-regulation between the two responses can bias the system towards one of them. We also study a mean field model, based on coupled maps, to investigate virus-like infections. This simple model reproduces the attractors for average populations observed in the cellular automaton. All the dynamical behavior connected to spatial extension is lost, as is the oscillating feature. Thus the mean field approximation introduced with coupled maps destroys the oscillations.

  16. Flow structure and unsteadiness in the supersonic wake of a generic space launcher

    NASA Astrophysics Data System (ADS)

    Schreyer, Anne-Marie; Stephan, Sören; Radespiel, Rolf

    2015-11-01

    At the junction between the rocket engine and the main body of a classical space launcher, a separation-dominated and highly unstable flow field develops and induces strong wall-pressure oscillations. These can excite structural vibrations detrimental to the launcher. It is desirable to minimize these effects, for which a better understanding of the flow field is required. We study the wake flow of a generic axisymmetric space-launcher model with and without propulsive jet (cold air). Experimental investigations are performed at Mach 2.9 and a Reynolds number Re_D = 1.3 x 10^6 based on the model diameter D. The jet exits the nozzle at Mach 2.5. Velocity measurements by means of Particle Image Velocimetry and mean and unsteady wall-pressure measurements on the main-body base are performed simultaneously. Additionally, we performed hot-wire measurements at selected points in the wake. We can thus observe the evolution of the wake flow along with its spectral content. We describe the mean and turbulent flow topology and evolution of the structures in the wake flow and discuss the origin of characteristic frequencies observed in the pressure signal at the launcher base. The influence of a propulsive jet on the evolution and topology of the wake flow is discussed in detail. The German Research Foundation DFG is gratefully acknowledged for funding this research within the SFB-TR40 "Technological foundations for the design of thermally and mechanically highly loaded components of future space transportation systems."

  17. Three-dimensional vortex-bright solitons in a spin-orbit-coupled spin-1 condensate

    NASA Astrophysics Data System (ADS)

    Gautam, Sandeep; Adhikari, S. K.

    2018-01-01

    We demonstrate stable and metastable vortex-bright solitons in a three-dimensional spin-orbit-coupled three-component hyperfine spin-1 Bose-Einstein condensate (BEC) using numerical solution and variational approximation of a mean-field model. The spin-orbit coupling provides attraction to form vortex-bright solitons in both attractive and repulsive spinor BECs. The ground state of these vortex-bright solitons is axially symmetric for weak polar interaction. For a sufficiently strong ferromagnetic interaction, we observe the emergence of a fully asymmetric vortex-bright soliton as the ground state. We also numerically investigate moving solitons. The present mean-field model is not Galilean invariant, and we use a Galilean-transformed mean-field model for generating the moving solitons.

  18. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency performance of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method is capable of effectively correcting a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the mode is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which completes one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).

  19. Fully automated prostate segmentation in 3D MR based on normalized gradient fields cross-correlation initialization and LOGISMOS refinement

    NASA Astrophysics Data System (ADS)

    Yin, Yin; Fotin, Sergei V.; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter

    2012-02-01

    Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation, and provides good accuracy to initialize a prostate mean shape model. The refinement model is based on a graph-search-based framework, which incorporates both shape and topology information during deformation. We generated the graph cost using trained classifiers and used coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. The segmentation performance, with a mean DSC ranging from 0.89 to 0.91 depending on the evaluation subset, is state of the art. Running time for the system is about 20 to 40 seconds depending on image size and resolution.
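    The DSC used for evaluation is the Dice similarity coefficient, twice the overlap divided by the total size of the two segmentations. A minimal sketch on toy 1D masks standing in for 3D volumes:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1D "masks" standing in for automated and manual 3D segmentations:
auto = np.array([0, 1, 1, 1, 0, 0], dtype=bool)
manual = np.array([0, 0, 1, 1, 1, 0], dtype=bool)
score = dice(auto, manual)
```

    Here two voxels overlap out of three in each mask, giving DSC = 2/3; identical masks give 1.0, which is the scale on which the 0.89-0.91 results above are reported.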

  20. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    NASA Astrophysics Data System (ADS)

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for van der Waals and electrostatic interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
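    The iterative weight refinement described above can be sketched as a softmax fixed point: each copy's weight is a Boltzmann factor of its mean-field energy, recomputed from the other residues' current weights. The energies below are random placeholders, not a real force field or the paper's PB solvation term:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SCMF setup: n_res residues, each with n_rot rotamer "copies".
n_res, n_rot, beta = 5, 4, 1.0
E_self = rng.normal(0.0, 1.0, (n_res, n_rot))             # internal energy of each copy
E_pair = rng.normal(0.0, 0.3, (n_res, n_rot, n_res, n_rot))
for i in range(n_res):
    E_pair[i, :, i, :] = 0.0                              # no residue self-pairs

w = np.full((n_res, n_rot), 1.0 / n_rot)                  # uniform initial weights
for _ in range(100):
    # Mean-field energy of copy (i, r): self term plus pair terms
    # averaged over the current weights of all other residues.
    E_mf = E_self + np.einsum('irjs,js->ir', E_pair, w)
    w = np.exp(-beta * E_mf)
    w /= w.sum(axis=1, keepdims=True)                     # Boltzmann weights per residue

prediction = w.argmax(axis=1)  # highest-weight copy per residue
```

    At convergence the weights are self-consistent, and the predicted conformation of each residue is its highest-weight copy, mirroring the procedure in the abstract.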

  1. Novel Physical Model for DC Partial Discharge in Polymeric Insulators

    NASA Astrophysics Data System (ADS)

    Andersen, Allen; Dennison, J. R.

    The physics of DC partial discharge (DCPD) continues to pose a challenge to researchers. We present a new physically-motivated model of DCPD in amorphous polymers based on our dual-defect model of dielectric breakdown. The dual-defect model is an extension of standard static mean field theories, such as the Crine model, that describe avalanche breakdown of charge carriers trapped on uniformly distributed defect sites. It assumes the presence of both high-energy chemical defects and low-energy thermally-recoverable physical defects. We present our measurements of breakdown and DCPD for several common polymeric materials in the context of this model. Improved understanding of DCPD and how it relates to eventual dielectric breakdown is critical to the fields of spacecraft charging, high voltage DC power distribution, high density capacitors, and microelectronics. This work was supported by a NASA Space Technology Research Fellowship.

  2. Using the full tensor of GOCE gravity gradients for regional gravity field modelling

    NASA Astrophysics Data System (ADS)

    Lieb, Verena; Bouman, Johannes; Dettmering, Denise; Fuchs, Martin; Schmidt, Michael

    2013-04-01

    With its 3-axis gradiometer, GOCE delivers 3-dimensional (3D) information on the Earth's gravity field. This essential advantage - e.g. compared with the 1D gravity field information from GRACE - can be used for research on the Earth's interior and for geophysical exploration. To benefit from this multidimensional measurement system, the combination of all 6 GOCE gradients, and additionally their consistent combination with other gravity observations, poses a novel challenge for regional gravity field modelling. As the individual gravity gradients reflect the gravity field along different spatial directions, observation equations are formulated separately for each of these components. In our approach we use spherical localizing base functions to represent the gravity field in specified regions. Therefore, the series expansions based on Legendre polynomials have to be adapted to obtain mathematical expressions for the second derivatives of the gravitational potential, which are observed by GOCE in the Cartesian Gradiometer Reference Frame (GRF). We have to (1) transform the equations from the spherical terrestrial frame into a Cartesian Local North-Oriented Reference Frame (LNOF), (2) set up a 3x3 tensor of observation equations, and (3) finally rotate the tensor defined in the terrestrial LNOF into the GRF. Thus we ensure that the original, non-rotated and unaffected GOCE measurements are used within the analysis procedure. As output from the synthesis procedure we then obtain the second derivatives of the gravitational potential for all combinations of the xyz Cartesian coordinates in the LNOF. Furthermore, the implementation of variance component estimation provides a flexible tool to differentiate the influence of the input gradiometer observations. On the one hand, the less accurate xy and yz measurements are nearly excluded by estimating large variance components.
On the other hand, the yy measurements, which show systematic errors increasing at high latitudes, could be manually down-weighted in the corresponding regions. We choose different test areas to compute regional gravity field models at mean GOCE altitudes for different spectral resolutions and varying relative weights for the observations. Furthermore, we compare the regional models with the static global GOCO03S model. In particular, the flexible handling and combination of the 3D measurements promise a great benefit for geophysical applications of GOCE gravity gradients, as they contain information on radial as well as lateral gravity changes.
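
The frame rotation in step (3) is the standard second-rank tensor transformation. A minimal sketch, assuming an illustrative symmetric, trace-free gradient tensor and a simple rotation about the z axis (not the actual GOCE attitude geometry):

```python
import math

def matmul3(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate_gradients(T_lnof, R):
    # Second-rank tensor transformation T_grf = R * T_lnof * R^T, where R
    # rotates LNOF axes into the gradiometer reference frame (GRF).
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return matmul3(matmul3(R, T_lnof), Rt)

# Illustrative symmetric, trace-free gradient tensor and a 30-degree
# rotation about the z axis (values are assumptions for demonstration).
T = [[1.0, 0.2, 0.3],
     [0.2, -2.0, 0.1],
     [0.3, 0.1, 1.0]]
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
T_grf = rotate_gradients(T, R)
```

Rotation preserves symmetry and the Laplace condition (zero trace), which is a useful sanity check on any implemented LNOF-to-GRF transformation.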

  3. Active and reactive behaviour in human mobility: the influence of attraction points on pedestrians

    NASA Astrophysics Data System (ADS)

    Gutiérrez-Roig, M.; Sagarra, O.; Oltra, A.; Palmer, J. R. B.; Bartumeus, F.; Díaz-Guilera, A.; Perelló, J.

    2016-07-01

    Human mobility is becoming an accessible field of study, thanks to the progress and availability of tracking technologies as a common feature of smart phones. We describe an example of a scalable experiment exploiting these circumstances at a public, outdoor fair in Barcelona (Spain). Participants were tracked while wandering through an open space with activity stands attracting their attention. We develop a general modelling framework based on Langevin dynamics, which allows us to test the influence of two distinct types of ingredients on mobility: reactive or context-dependent factors, modelled by means of a force field generated by attraction points in a given spatial configuration, and active or inherent factors, modelled from intrinsic movement patterns of the subjects. This additive, constructive modelling framework accounts for some of the observed features. Starting with the simplest model (purely random walkers) as a reference, we progressively introduce different ingredients such as persistence, memory and perceptual landscape, aiming to untangle active and reactive contributions and quantify their respective relevance. The proposed approach may help in anticipating the spatial distribution of citizens in alternative scenarios and in improving the evidence-based design of public events.
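
A minimal sketch of the Langevin framework with a single attraction point. The linear force law, damping, noise level, and all parameter values are assumptions for illustration; they are not the fitted ingredients (persistence, memory, perceptual landscape) of the study:

```python
import math
import random

def simulate_walker(steps=2000, dt=0.05, gamma=1.0, k=1.0, sigma=0.2,
                    attractor=(2.0, 0.0), seed=1):
    # Langevin dynamics: velocity damping, a linear pull toward one
    # attraction point, and Gaussian noise (Euler-Maruyama integration).
    rng = random.Random(seed)
    x = y = vx = vy = 0.0
    ax, ay = attractor
    for _ in range(steps):
        vx += (-gamma * vx + k * (ax - x)) * dt \
              + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        vy += (-gamma * vy + k * (ay - y)) * dt \
              + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += vx * dt
        y += vy * dt
    return x, y

x, y = simulate_walker()
```

With the attraction force switched off (k = 0) the walker reduces to the purely random reference model; adding terms one by one mirrors the paper's progressive-ingredient strategy.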

  4. Active and reactive behaviour in human mobility: the influence of attraction points on pedestrians

    PubMed Central

    Sagarra, O.; Oltra, A.; Palmer, J. R. B.; Bartumeus, F.; Díaz-Guilera, A.; Perelló, J.

    2016-01-01

    Human mobility is becoming an accessible field of study, thanks to the progress and availability of tracking technologies as a common feature of smart phones. We describe an example of a scalable experiment exploiting these circumstances at a public, outdoor fair in Barcelona (Spain). Participants were tracked while wandering through an open space with activity stands attracting their attention. We develop a general modelling framework based on Langevin dynamics, which allows us to test the influence of two distinct types of ingredients on mobility: reactive or context-dependent factors, modelled by means of a force field generated by attraction points in a given spatial configuration, and active or inherent factors, modelled from intrinsic movement patterns of the subjects. This additive, constructive modelling framework accounts for some of the observed features. Starting with the simplest model (purely random walkers) as a reference, we progressively introduce different ingredients such as persistence, memory and perceptual landscape, aiming to untangle active and reactive contributions and quantify their respective relevance. The proposed approach may help in anticipating the spatial distribution of citizens in alternative scenarios and in improving the evidence-based design of public events. PMID:27493774

  5. Spectral wave dissipation by submerged aquatic vegetation in a back-barrier estuary

    USGS Publications Warehouse

    Nowacki, Daniel J.; Beudin, Alexis; Ganju, Neil K.

    2017-01-01

    Submerged aquatic vegetation is generally thought to attenuate waves, but this interaction remains poorly characterized in shallow-water field settings with locally generated wind waves. Better quantification of wave–vegetation interaction can provide insight into morphodynamic changes in a variety of environments and also is relevant to the planning of nature-based coastal protection measures. Toward that end, an instrumented transect was deployed across a Zostera marina (common eelgrass) meadow in Chincoteague Bay, Maryland/Virginia, U.S.A., to characterize wind-wave transformation within the vegetated region. Field observations revealed wave-height reduction, wave-period transformation, and wave-energy dissipation with distance into the meadow, and the data informed and calibrated a spectral wave model of the study area. The field observations and model results agreed well when local wind forcing and vegetation-induced drag were included in the model, either explicitly as rigid vegetation elements or implicitly as large bed-roughness values. Mean modeled parameters were similar for both the explicit and implicit approaches, but the spectral performance of the explicit approach was poor compared with that of the implicit approach. The explicit approach over-predicted low-frequency energy within the meadow because the vegetation scheme determines dissipation using mean wavenumber and frequency, in contrast to the bed-friction formulations, which dissipate energy in a variable fashion across frequency bands. Regardless of the vegetation scheme used, vegetation was the most important component of wave dissipation within much of the study area. These results help to quantify the influence of submerged aquatic vegetation on wave dynamics for future model parameterizations, field efforts, and coastal-protection measures.

  6. Using weighted power mean for equivalent square estimation.

    PubMed

    Zhou, Sumin; Wu, Qiuwen; Li, Xiaobo; Ma, Rongtao; Zheng, Dandan; Wang, Shuo; Zhang, Mutian; Li, Sicong; Lei, Yu; Fan, Qiyong; Hyun, Megan; Diener, Tyler; Enke, Charles

    2017-11-01

    Equivalent Square (ES) enables the calculation of many radiation quantities for rectangular treatment fields, based only on measurements from square fields. While it is widely applied in radiotherapy, its accuracy, especially for extremely elongated fields, still leaves room for improvement. In this study, we introduce a novel explicit ES formula based on the Weighted Power Mean (WPM) function and compare its performance with the Sterling formula and Vadash/Bjärngard's formula. The proposed WPM formula for a rectangular photon field with sides a and b is ES_WPM(a, b) = [w·a^α + (1 − w)·b^α]^(1/α). The formula performance was evaluated by three methods: standard deviation of model fitting residual error, maximum relative model prediction error, and the model's Akaike Information Criterion (AIC). Testing datasets included the ES table from the British Journal of Radiology (BJR), photon output factors (S_cp) from the Varian TrueBeam Representative Beam Data (Med Phys. 2012;39:6981-7018), and published S_cp data for the Varian TrueBeam Edge (J Appl Clin Med Phys. 2015;16:125-148). For the BJR dataset, the best-fit parameter value α = −1.25 achieved a 20% reduction in the standard deviation of the ES estimation residual error compared with the two established formulae. For the two Varian datasets, employing WPM reduced the maximum relative error from 3.5% (Sterling) or 2% (Vadash/Bjärngard) to 0.7% for open field sizes ranging from 3 cm to 40 cm, and the reduction was even more prominent for 1 cm field sizes on Edge (J Appl Clin Med Phys. 2015;16:125-148). The AIC value of the WPM formula was consistently lower than those of the traditional formulae on photon output factors, most prominently on very elongated small fields. The WPM formula outperformed the traditional formulae on all three testing datasets.
With increasing utilization of very elongated, small rectangular fields in modern radiotherapy, improved photon output factor estimation is expected by adopting the WPM formula in treatment planning and secondary MU check. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
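
The WPM formula is straightforward to implement. A sketch, where the weight w = 0.5 is an assumption (w is a fitted parameter in the study) and α = −1.25 is the BJR best-fit value quoted above:

```python
def es_sterling(a, b):
    # Sterling approximation: side of the square with the same
    # area-to-perimeter ratio as the a x b rectangle.
    return 2.0 * a * b / (a + b)

def es_wpm(a, b, w=0.5, alpha=-1.25):
    # Weighted power mean equivalent square,
    # ES_WPM(a, b) = [w*a^alpha + (1 - w)*b^alpha]^(1/alpha).
    # alpha = -1.25 is the BJR best-fit value; w = 0.5 is an assumption.
    return (w * a ** alpha + (1.0 - w) * b ** alpha) ** (1.0 / alpha)
```

Note that with w = 1/2 and α = −1 the WPM reduces to the harmonic-mean value 2ab/(a + b), i.e., Sterling's area-to-perimeter rule, so the classical formula appears as a special case of the WPM family.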

  7. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.

    PubMed

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-06-06

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.

  8. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application

    PubMed Central

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-01-01

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information’s relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection. PMID:28587299

  9. Glaucomatous patterns in Frequency Doubling Technology (FDT) perimetry data identified by unsupervised machine learning classifiers.

    PubMed

    Bowd, Christopher; Weinreb, Robert N; Balasubramanian, Madhusudhanan; Lee, Intae; Jang, Giljin; Yousefi, Siamak; Zangwill, Linda M; Medeiros, Felipe A; Girkin, Christopher A; Liebmann, Jeffrey M; Goldbaum, Michael H

    2014-01-01

    The variational Bayesian independent component analysis-mixture model (VIM), an unsupervised machine-learning classifier, was used to automatically separate Matrix Frequency Doubling Technology (FDT) perimetry data into clusters of healthy and glaucomatous eyes, and to identify axes representing statistically independent patterns of defect in the glaucoma clusters. FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal FDT results from the UCSD-based Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES). For all eyes, VIM input was 52 threshold test points from the 24-2 test pattern, plus age. FDT mean deviation was -1.00 dB (S.D. = 2.80 dB) and -5.57 dB (S.D. = 5.09 dB) in FDT-normal eyes and FDT-abnormal eyes, respectively (p<0.001). VIM identified meaningful clusters of FDT data and positioned a set of statistically independent axes through the mean of each cluster. The optimal VIM model separated the FDT fields into 3 clusters. Cluster N contained primarily normal fields (1109/1190, specificity 93.1%) and clusters G1 and G2 combined contained primarily abnormal fields (651/786, sensitivity 82.8%). For clusters G1 and G2, the optimal numbers of axes were 2 and 5, respectively. Patterns automatically generated along axes within the glaucoma clusters were similar to those known to be indicative of glaucoma. Fields located farther from the normal mean on each glaucoma axis showed increasing field defect severity. VIM successfully separated FDT fields from healthy and glaucoma eyes without a priori information about class membership, and identified familiar glaucomatous patterns of loss.

  10. A NEW SIMPLE DYNAMO MODEL FOR STELLAR ACTIVITY CYCLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokoi, N.; Hamba, F.; Schmitt, D.

    2016-06-20

    A new simple dynamo model for the stellar activity cycle is proposed. By considering an inhomogeneous flow effect on turbulence, it is shown that turbulent cross helicity (velocity–magnetic-field correlation) enters the expression of the turbulent electromotive force as the coupling coefficient for the mean absolute vorticity. This makes the present model different from the current α–Ω-type models in two main ways. First, in addition to the usual helicity (α) and turbulent magnetic diffusivity (β) effects, we consider the cross-helicity effect as a key ingredient of the dynamo process. Second, the spatiotemporal evolution of cross helicity is solved simultaneously with the mean magnetic fields. The basic scenario is as follows. In the presence of turbulent cross helicity, the toroidal field is induced by the toroidal rotation. Then, as in usual models, the α effect generates the poloidal field from the toroidal one. This induced poloidal field produces a turbulent cross helicity whose sign is opposite to the original one (negative production). With this cross helicity of the reversed sign, a reversal in field configuration starts. Eigenvalue analyses of the simplest possible model give a butterfly diagram, which confirms the above scenario, the equatorward migration, and the phase relationship between the cross helicity and the magnetic fields. These results suggest that the oscillation of the turbulent cross helicity is a key to the activity cycle. The reversal of the cross helicity is not the result of the magnetic-field reversal, but the cause of the latter. This new model is expected to open up new possibilities for mean-field and turbulence-closure dynamo approaches.

  11. Improving the geomagnetic field modeling with a selection of high-quality archaeointensity data

    NASA Astrophysics Data System (ADS)

    Pavon-Carrasco, Francisco Javier; Gomez-Paccard, Miriam; Herve, Gwenael; Osete, Maria Luisa; Chauvin, Annick

    2014-05-01

    Geomagnetic field reconstructions for the last millennia are based on archaeomagnetic data. However, the scatter of the archaeointensity data is very puzzling and clearly suggests that some of the intensity data might not be reliable. In this work we apply different selection criteria to the European and Western Asian archaeointensity data covering the last three millennia in order to investigate whether the data selection affects the geomagnetic field modelling results. Thanks to the recently developed archaeomagnetic databases, new valuable information related to the methodology used to determine the archaeointensity data is now available. We therefore used this information to rank the archaeointensity data into four quality categories, depending on the methodology used during the laboratory treatment of the samples and on the number of specimens retained to calculate the mean intensities. Results show that the intensity component of the geomagnetic field given by the regional models hardly depends on which quality-based data selection is used. When all the available data are used, a different behavior of the geomagnetic field is observed in Western and Eastern Europe. However, when the regional model is obtained from a selection of high-quality intensity data, the same features are observed at the European scale.

  12. Fragmentation modeling of a resin bonded sand

    NASA Astrophysics Data System (ADS)

    Hilth, William; Ryckelynck, David

    2017-06-01

    Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters that lack real physical meaning. However, using a rather simple generalized critical-state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical-state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields by using image correlation techniques. Unfortunately, these fields have missing experimental data because of the low resolution of the correlations for low displacement magnitudes. We propose a recovery method that reconstructs full 3D displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters by 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.

  13. Influence of the mode of deformation on recrystallisation behaviour of titanium through experiments, mean field theory and phase field model

    NASA Astrophysics Data System (ADS)

    Athreya, C. N.; Mukilventhan, A.; Suwas, Satyam; Vedantam, Srikanth; Subramanya Sarma, V.

    2018-04-01

    The influence of the mode of deformation on the recrystallisation behaviour of Ti was studied by experiments and modelling. Ti samples were deformed through torsion and rolling to the same equivalent strain of 0.5. The deformed samples were annealed at different temperatures for different durations and the recrystallisation kinetics were compared. Recrystallisation is found to be faster in the rolled samples than in the torsion-deformed samples. This is attributed to the differences in stored energy and in the number of nuclei per unit area between the two modes of deformation. Considering the decay in stored energy during recrystallisation, the grain boundary mobility was estimated through a mean field model. The activation energy for recrystallisation obtained from experiments matched the activation energy for grain boundary migration obtained from the mobility calculation. A multi-phase field model (with mobility estimated from the mean field model as a constitutive input) was used to simulate the kinetics, microstructure and texture evolution. The recrystallisation kinetics and grain size distributions obtained from experiments matched reasonably well with the phase field simulations. The recrystallisation texture predicted by the phase field simulations compares well with experiments, though a few additional texture components are present in the simulations. This is attributed to the anisotropy in grain boundary mobility, which is not accounted for in the present study.
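
Comparing recrystallisation kinetics and activation energies lends itself to a standard Avrami (JMAK) treatment. A sketch under the assumption of JMAK kinetics and an Arrhenius-type rate, since the abstract does not state the paper's exact kinetic law:

```python
import math

def jmak_fraction(t, k, n):
    # Avrami/JMAK recrystallised fraction X(t) = 1 - exp(-(k*t)^n);
    # a standard kinetics form, assumed here for illustration.
    return 1.0 - math.exp(-(k * t) ** n)

def arrhenius(k0, Q, T, R=8.314):
    # Thermally activated rate constant k = k0 * exp(-Q / (R*T)); Q plays
    # the role of the apparent activation energy for recrystallisation,
    # which the paper compares against that for grain boundary migration.
    return k0 * math.exp(-Q / (R * T))
```

Fitting X(t) curves at several annealing temperatures and extracting Q from the temperature dependence of k is the usual route to the experimental activation energy mentioned above.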

  14. Generation of a Large-scale Magnetic Field in a Convective Full-sphere Cross-helicity Dynamo

    NASA Astrophysics Data System (ADS)

    Pipin, V. V.; Yokoi, N.

    2018-05-01

    We study the effects of the cross-helicity in full-sphere large-scale mean-field dynamo models of a 0.3 M⊙ star rotating with a period of 10 days. In exploring several dynamo scenarios that stem from magnetic field generation by the cross-helicity effect, we found that the cross-helicity provides a natural generation mechanism for the large-scale axisymmetric and nonaxisymmetric magnetic fields. Therefore, rotating stars with convective envelopes can produce a large-scale magnetic field generated solely by the turbulent cross-helicity effect (we call it the γ² dynamo). Using mean-field models we compare the properties of the large-scale magnetic field organization that stems from dynamo mechanisms based on the kinetic helicity (associated with the α² dynamos) and the cross-helicity. For fully convective stars, both generation mechanisms can maintain large-scale dynamos even for a solid-body rotation law inside the star. The nonaxisymmetric magnetic configurations become preferable when the cross-helicity and the α-effect operate independently of each other. This corresponds to situations with purely γ² or α² dynamos. The combination of these scenarios, i.e., the γ²α² dynamo, can generate preferably axisymmetric, dipole-like magnetic fields with strengths of several kG. Thus, we found a new dynamo scenario that is able to generate an axisymmetric magnetic field even in the case of solid-body rotation of the star. We discuss possible applications of our findings to stellar observations.

  15. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius. These results demonstrate that the proposed approaches can estimate an object’s position accurately from EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, they can be used in real-time applications without high-performance computers.
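
The optimization-based strategy with a weighted error cost and a derivative-free search can be sketched as follows; the one-dimensional toy forward model and electrode layout are invented for illustration and stand in for a real EIT forward solver:

```python
def estimate_position(measured, forward, candidates, weights):
    # Derivative-free, optimization-based estimate: scan candidate anomaly
    # positions and keep the one whose simulated measurements minimize a
    # weighted squared error against the observed data.
    def cost(pos):
        sim = forward(pos)
        return sum(w * (m - s) ** 2 for w, m, s in zip(weights, measured, sim))
    return min(candidates, key=cost)

# Toy forward model (an assumption, not an EIT solver): "measurements"
# are distances from the anomaly to three electrodes on a unit segment.
electrodes = [0.0, 0.5, 1.0]
forward = lambda p: [abs(p - e) for e in electrodes]
measured = forward(0.3)                      # noiseless synthetic data
candidates = [i / 100.0 for i in range(101)]
estimate = estimate_position(measured, forward, candidates, [1.0] * 3)
```

In practice the weights would reflect measurement reliability (e.g., per-electrode SNR), and the exhaustive scan would be replaced by a proper derivative-free optimizer.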

  16. Binocular device for displaying numerical information in field of view

    NASA Technical Reports Server (NTRS)

    Fuller, H. V. (Inventor)

    1977-01-01

    An apparatus is described for superimposing numerical information on the field of view of binoculars. The invention has application in the flying of radio-controlled model airplanes. Information such as airspeed and angle of attack is sensed on a model airplane and transmitted back to earth, where it is converted into numerical form. Optical means attached to the binoculars that the pilot uses to track the model airplane display the numerical information in the field of view. The device includes means for focusing the numerical information at infinity, whereby the user of the binoculars can see both the field of view and the numerical information without refocusing his eyes.

  17. Connections between the Sznajd model with general confidence rules and graph theory

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
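
For the simplest confidence rule (two opinions, no bias), the mean-field dynamics reduce to a well-known rate equation. A sketch, assuming the textbook form dp/dt = p(1 − p)(2p − 1), which is the unbiased special case of the general rules treated in the paper:

```python
def sznajd_mean_field(p0, dt=0.01, steps=5000):
    # Euler integration of the two-opinion Sznajd mean-field rate equation
    # dp/dt = p * (1 - p) * (2p - 1), where p is the fraction of agents
    # holding opinion "+". Fixed points: p = 0 and p = 1 (stable consensus)
    # and p = 1/2 (unstable coexistence).
    p = p0
    for _ in range(steps):
        p += dt * p * (1.0 - p) * (2.0 * p - 1.0)
    return p
```

Starting above p = 1/2 drives consensus at p = 1 and starting below drives consensus at p = 0, illustrating how the model's rules favor the larger group of agreeing agents.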

  18. Persistence and failure of mean-field approximations adapted to a class of systems of delay-coupled excitable units

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Todorović, Kristina; Vasović, Nebojša; Burić, Nikola

    2014-02-01

    We consider the approximations behind the typical mean-field model derived for a class of systems made up of type II excitable units influenced by noise and coupling delays. The formulation of the two approximations, referred to as the Gaussian and the quasi-independence approximation, as well as the fashion in which their validity is verified, are adapted to reflect the essential properties of the underlying system. It is demonstrated that the failure of the mean-field model associated with the breakdown of the quasi-independence approximation can be predicted by the noise-induced bistability in the dynamics of the mean-field system. As for the Gaussian approximation, its violation is related to the increase of noise intensity, but the actual condition for failure can be cast in qualitative, rather than quantitative terms. We also discuss how the fulfillment of the mean-field approximations affects the statistics of the first return times for the local and global variables, further exploring the link between the fulfillment of the quasi-independence approximation and certain forms of synchronization between the individual units.

  19. Assessment of the Appalachian Basin Geothermal Field: Combining Risk Factors to Inform Development of Low Temperature Projects

    NASA Astrophysics Data System (ADS)

    Smith, J. D.; Whealton, C.; Camp, E. R.; Horowitz, F.; Frone, Z. S.; Jordan, T. E.; Stedinger, J. R.

    2015-12-01

    Exploration methods for deep geothermal energy projects must primarily consider whether or not a location has favorable thermal resources. Even where the thermal field is favorable, other factors may impede project development and success. A combined analysis of these factors and their uncertainty is a strategy for moving geothermal energy proposals forward from the exploration phase at the scale of a basin to the scale of a project, and further to design of geothermal systems. For a Department of Energy Geothermal Play Fairway Analysis we assessed quality metrics, which we call risk factors, in the Appalachian Basin of New York, Pennsylvania, and West Virginia. These included 1) thermal field variability, 2) productivity of natural reservoirs from which to extract heat, 3) potential for induced seismicity, and 4) presence of thermal utilization centers. The thermal field was determined using a 1D heat flow model for 13,400 bottomhole temperatures (BHT) from oil and gas wells. Steps included the development of i) a set of corrections to BHT data and ii) depth models of conductivity stratigraphy at each borehole based on generalized stratigraphy that was verified for a select set of wells. Wells are control points in a spatial statistical analysis that resulted in maps of the predicted mean thermal field properties and of the standard error of the predicted mean. Seismic risk was analyzed by comparing earthquakes and stress orientations in the basin to gravity and magnetic potential field edges at depth. Major edges in the potential fields served as interpolation boundaries for the thermal maps (Figure 1). Natural reservoirs were identified from published studies, and productivity was determined based on the expected permeability and dimensions of each reservoir. Visualizing the natural reservoirs and population centers on a map of the thermal field communicates options for viable pilot sites and project designs (Figure 1). 
Furthermore, combining the four risk factors at favorable sites enables an evaluation of project feasibility across sites based on tradeoffs in the risk factors. Uncertainties in each risk factor can also be considered to determine if the tradeoffs in risk factors between sites are meaningful.
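
    The combination step described above can be sketched as a weighted sum of normalized risk factors; the weights, site names, and factor values below are hypothetical, not taken from the study:

```python
# Hypothetical sketch: combine four play-fairway risk factors into a single
# favorability score per site. Each factor is scaled to [0, 1], with 1 = most
# favorable; weights and site values are illustrative only.
weights = {"thermal": 0.4, "reservoir": 0.3, "seismic": 0.2, "utilization": 0.1}
sites = {
    "site_A": {"thermal": 0.8, "reservoir": 0.6, "seismic": 0.9, "utilization": 0.7},
    "site_B": {"thermal": 0.6, "reservoir": 0.8, "seismic": 0.7, "utilization": 0.9},
}

def favorability(factors):
    # weighted combination of the normalized risk factors
    return sum(weights[k] * factors[k] for k in weights)

scores = {name: favorability(f) for name, f in sites.items()}
```

    Comparing scores across sites then exposes the tradeoffs between, say, a hotter site and one with a better natural reservoir.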

  20. On extinction time of a generalized endemic chain-binomial model.

    PubMed

    Aydogmus, Ozgur

    2016-09-01

    We considered a chain-binomial epidemic model that does not confer immunity after infection. The mean-field dynamics of the model were analyzed and conditions for the existence of a stable endemic equilibrium were determined. The behavior of the chain-binomial process is probabilistically linked to the mean-field equation. As a result of this link, we were able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
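
    A minimal stochastic sketch of a chain-binomial process without immunity (the update rule is one common variant and the parameter values are illustrative, not necessarily the paper's exact formulation):

```python
import random

# Chain-binomial SIS-type step: given I current infecteds, each susceptible is
# infected with probability 1 - (1 - p)^I; all infecteds then recover back to
# susceptible (no immunity). Returns the step at which the infection dies out.
def extinction_time(N, p, i0, rng, t_max=2000):
    I = i0
    for t in range(1, t_max + 1):
        prob = 1.0 - (1.0 - p) ** I
        I = sum(rng.random() < prob for _ in range(N - I))
        if I == 0:
            return t
    return t_max

rng = random.Random(1)
times = [extinction_time(N=20, p=0.06, i0=2, rng=rng) for _ in range(200)]
mean_time = sum(times) / len(times)
```

    Re-running with larger N (at a fixed per-contact probability) illustrates the abstract's result that the mean extinction time grows rapidly with population size.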

  1. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking

    PubMed Central

    Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander

    2015-01-01

    In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943

  2. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It captures thereby strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum; as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  3. A DOUBLE-RING ALGORITHM FOR MODELING SOLAR ACTIVE REGIONS: UNIFYING KINEMATIC DYNAMO MODELS AND SURFACE FLUX-TRANSPORT SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu

    The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed {alpha}-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation (which is usually invoked in kinematic dynamo models) can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.

  4. Optimal fire histories for biodiversity conservation.

    PubMed

    Kelly, Luke T; Bennett, Andrew F; Clarke, Michael F; McCarthy, Michael A

    2015-04-01

    Fire is used as a management tool for biodiversity conservation worldwide. A common objective is to avoid population extinctions due to inappropriate fire regimes. However, in many ecosystems, it is unclear what mix of fire histories will achieve this goal. We determined the optimal fire history of a given area for biological conservation with a method that links tools from 3 fields of research: species distribution modeling, composite indices of biodiversity, and decision science. We based our case study on extensive field surveys of birds, reptiles, and mammals in fire-prone semi-arid Australia. First, we developed statistical models of species' responses to fire history. Second, we determined the optimal allocation of successional states in a given area, based on the geometric mean of species relative abundance. Finally, we showed how conservation targets based on this index can be incorporated into a decision-making framework for fire management. Pyrodiversity per se did not necessarily promote vertebrate biodiversity. Maximizing pyrodiversity by having an even allocation of successional states did not maximize the geometric mean abundance of bird species. Older vegetation was disproportionately important for the conservation of birds, reptiles, and small mammals. Because our method defines fire management objectives based on the habitat requirements of multiple species in the community, it could be used widely to maximize biodiversity in fire-prone ecosystems. © 2014 Society for Conservation Biology.
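
    The allocation step can be sketched as a coarse grid search maximizing the geometric mean of predicted species abundances over the proportions of three successional states; all abundance values below are hypothetical:

```python
import math

# Hypothetical expected relative abundance of each species in three
# successional states (early, mid, late vegetation).
abundance = [
    [0.9, 0.4, 0.1],   # early-successional specialist
    [0.2, 0.8, 0.5],   # mid-successional species
    [0.05, 0.3, 1.0],  # late-successional (old vegetation) specialist
]

def geometric_mean_index(x):
    # x = proportions of the three states; a species' abundance is a . x
    vals = [sum(a * xi for a, xi in zip(row, x)) for row in abundance]
    return math.prod(vals) ** (1.0 / len(vals))

# Coarse grid search over allocations (e, m, l) with e + m + l = 1.
grid = [i * 0.05 for i in range(21)]
candidates = [(e, m, 1.0 - e - m) for e in grid for m in grid if e + m <= 1.0 + 1e-9]
best = max(candidates, key=geometric_mean_index)
even = geometric_mean_index((1 / 3, 1 / 3, 1 / 3))
```

    Consistent with the abstract, an even ("maximally pyrodiverse") allocation of successional states need not maximize the index.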

  5. Improvement of the GPS/A system for extensive observation along subduction zones around Japan

    NASA Astrophysics Data System (ADS)

    Fujimoto, H.; Kido, M.; Tadokoro, K.; Sato, M.; Ishikawa, T.; Asada, A.; Mochizuki, M.

    2011-12-01

    Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in system Earth. Ocean dynamics can be modeled with reference to a highly accurate geoid as reference surface, and solid earth processes are driven by the gravity field. Geodetic disciplines such as height system determination also depend on high-precision gravity field information. To fulfill the various requirements concerning resolution and accuracy, all kinds of gravity field information, that is, satellite as well as terrestrial and altimetric gravity field observations, have to be included in one combination process. A key role is reserved here for GOCE observations, which contribute their optimal signal content in the long- to medium-wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. For all data sets, normal equations are set up separately, weighted relative to each other in the combination step, and solved. This procedure is computationally challenging and can only be performed using supercomputers. We put special emphasis on the combination process, for which we modified our procedure to include GOCE data optimally in the combination. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, in which we included the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated by independent GPS leveling data in selected regions as well as by computation of the mean dynamic topography over the oceans. 
Further, we analyze the statistical error estimates derived from full covariance propagation and compare them with the absolute validation with independent data sets.

  6. Acoustic wave in a suspension of magnetic nanoparticle with sodium oleate coating

    NASA Astrophysics Data System (ADS)

    Józefczak, A.; Hornowski, T.; Závišová, V.; Skumiel, A.; Kubovčíková, M.; Timko, M.

    2014-03-01

    The ultrasonic propagation in a water-based magnetic fluid with a double-layered surfactant shell was studied. The measurements were carried out both in the presence and in the absence of the external magnetic field. The thickness of the surfactant shell was evaluated by comparing the mean size of the magnetic grain extracted from the magnetization curve with the mean hydrodynamic diameter obtained from the differential centrifugal sedimentation method. The thickness of the surfactant shell was used to estimate the volume fraction of the particle aggregates consisting of a magnetite grain and a surfactant layer. From the ultrasonic velocity measurements in the absence of the applied magnetic field, the adiabatic compressibility of the particle aggregates was determined. In the external magnetic field, the magnetic fluid studied in this article becomes acoustically anisotropic, i.e., velocity and attenuation of the ultrasonic wave depend on the angle between the wave vector and the direction of the magnetic field. The results of the ultrasonic measurements in the external magnetic field were compared with the hydrodynamic theory of Ovchinnikov and Sokolov (velocity) and with the internal chain dynamics model of Shliomis, Mond and Morozov (attenuation).

  7. Acoustic wave in a suspension of magnetic nanoparticle with sodium oleate coating.

    PubMed

    Józefczak, A; Hornowski, T; Závišová, V; Skumiel, A; Kubovčíková, M; Timko, M

    2014-01-01

    The ultrasonic propagation in a water-based magnetic fluid with a double-layered surfactant shell was studied. The measurements were carried out both in the presence and in the absence of the external magnetic field. The thickness of the surfactant shell was evaluated by comparing the mean size of the magnetic grain extracted from the magnetization curve with the mean hydrodynamic diameter obtained from the differential centrifugal sedimentation method. The thickness of the surfactant shell was used to estimate the volume fraction of the particle aggregates consisting of a magnetite grain and a surfactant layer. From the ultrasonic velocity measurements in the absence of the applied magnetic field, the adiabatic compressibility of the particle aggregates was determined. In the external magnetic field, the magnetic fluid studied in this article becomes acoustically anisotropic, i.e., velocity and attenuation of the ultrasonic wave depend on the angle between the wave vector and the direction of the magnetic field. The results of the ultrasonic measurements in the external magnetic field were compared with the hydrodynamic theory of Ovchinnikov and Sokolov (velocity) and with the internal chain dynamics model of Shliomis, Mond and Morozov (attenuation).

  8. Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.

    PubMed

    Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz

    2011-02-01

    Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. 
The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of application. We present two examples of possible applications in radiation therapy and image-guided diagnosis.
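
    The Log-Euclidean averaging used for the motion statistics can be illustrated on symmetric positive-definite matrices (a simplified stand-in for the diffeomorphic transformations in the paper): statistics are computed in the matrix-logarithm domain and the result is mapped back with the matrix exponential.

```python
import numpy as np

def spd_log(M):
    # matrix logarithm of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(M):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(w)) @ V.T

# Log-Euclidean mean: average in the log domain, then exponentiate back.
# The two "deformation tensors" below are made-up examples.
tensors = [np.array([[2.0, 0.3], [0.3, 1.0]]),
           np.array([[1.5, -0.2], [-0.2, 0.8]])]
log_mean = sum(spd_log(T) for T in tensors) / len(tensors)
mean_tensor = spd_exp(log_mean)   # stays symmetric positive-definite
```

    Unlike a plain arithmetic mean, this construction guarantees that the average stays within the group of valid (positive-definite) transformations.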

  9. Mean-force-field and mean-spherical approximations for the electric microfield distribution at a charged point in the charged-hard-particles fluid

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Yaakov

    1989-01-01

    The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas follows immediately from this general observation.

  10. An improved empirical model of electron and ion fluxes at geosynchronous orbit based on upstream solar wind conditions

    DOE PAGES

    Denton, M. H.; Henderson, M. G.; Jordanova, V. K.; ...

    2016-07-01

    In this study, a new empirical model of the electron fluxes and ion fluxes at geosynchronous orbit (GEO) is introduced, based on observations by Los Alamos National Laboratory (LANL) satellites. The model provides flux predictions in the energy range ~1 eV to ~40 keV, as a function of local time, energy, and the strength of the solar wind electric field (the negative product of the solar wind speed and the z component of the magnetic field). Given appropriate upstream solar wind measurements, the model provides a forecast of the fluxes at GEO with a ~1 h lead time. Model predictions are tested against in-sample observations from LANL satellites and also against out-of-sample observations from the Compact Environmental Anomaly Sensor II detector on the AMC-12 satellite. The model does not reproduce all structure seen in the observations. However, for the intervals studied here (quiet and storm times) the normalized root-mean-square deviation is < ~0.3. It is intended that the model will improve forecasting of the spacecraft environment at GEO and also provide improved boundary/input conditions for physical models of the magnetosphere.
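
    The quoted error metric can be sketched as follows; normalizing the RMSD by the observed range is an assumption for illustration, since the paper's exact normalization is not given here:

```python
import math

# Normalized root-mean-square deviation: RMSD divided by the range of the
# observations (one common convention; assumed, not taken from the paper).
def nrmsd(predicted, observed):
    n = len(observed)
    rmsd = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return rmsd / (max(observed) - min(observed))

obs = [1.0, 2.0, 4.0, 8.0]     # illustrative flux observations
pred = [1.1, 1.8, 4.5, 7.5]    # illustrative model predictions
score = nrmsd(pred, obs)       # smaller is better; 0 means perfect agreement
```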

  11. Color and symbology: symbolic systems of color ordering

    NASA Astrophysics Data System (ADS)

    Varela, Diana

    2002-06-01

    Color has been used symbolically in various fields, such as Heraldry, Music, Liturgy, Alchemy, Art and Literature. In this study, we investigate and analyse the structures of relationships that have taken shape as symbolic systems within each specific area of analysis. We discuss the most significant symbolic fields and their systems of color ordering, considering each one of them as a topological model based on a logic that determines the total organization, according to the scale of reciprocities applied and the cultural context that gives it meaning.

  12. Phase transition studies of BiMnO{sub 3}: Mean field theory approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakshmi Priya, K. B.; Natesan, Baskaran, E-mail: nbaski@nitt.edu

    We studied the phase transition and magneto-electric coupling effect of BiMnO{sub 3} by employing mean field theory approximations. To capture the ferromagnetic and ferroelectric transitions of BiMnO{sub 3}, we construct an extended Ising model in a 2D square lattice, wherein the magnetic (electric) interactions are described in terms of the direct interactions between the localized magnetic (electric dipole) moments of Mn ions with their nearest neighbors. To evaluate our model, we obtain magnetization, magnetic susceptibility and electric polarization using mean field approximation calculations. Our results reproduce both the ferromagnetic and the ferroelectric transitions, matching very well with the experimental reports. Furthermore, consistent with experimental observations, our mean field results suggest that there is indeed a coupling between the magnetic and electric ordering in BiMnO{sub 3}.
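
    The magnetic part of such a mean-field treatment reduces to a self-consistency equation; a minimal sketch for a spin-1/2 Ising model on a 2D square lattice (coordination number z = 4), in illustrative units with J = k_B = 1 rather than parameters fitted to BiMnO3:

```python
import math

# Mean-field self-consistency: m = tanh(z * J * m / T).
# Below the mean-field transition temperature T_c = z * J the ordered
# solution m > 0 survives; above it, iteration collapses to m = 0.
def magnetization(T, z=4, J=1.0, iters=500):
    m = 0.9  # start from an ordered guess
    for _ in range(iters):
        m = math.tanh(z * J * m / T)
    return m

below_Tc = magnetization(T=2.0)   # T < 4: ferromagnetically ordered
above_Tc = magnetization(T=6.0)   # T > 4: paramagnetic, m -> 0
```

    The same fixed-point structure carries over to the electric-dipole sector of an extended model of this kind.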

  13. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
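
    One member of this class, the Izhikevich model, can be simulated in a few lines; the sketch below uses standard regular-spiking parameters, while the input current and time step are illustrative choices:

```python
# Forward-Euler simulation of the Izhikevich model, a two-dimensional
# integrate-and-fire neuron:
#   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
#   with reset v -> c, u -> u + d when v >= 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
v, u = -65.0, 0.2 * -65.0            # initial membrane potential and adaptation
I, dt = 10.0, 0.1                    # input current and time step (ms), illustrative
spikes = []
for step in range(10_000):           # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike detected: record time and reset
        spikes.append(step * dt)
        v, u = c, u + d
```

    A mean-field reduction of the kind described above replaces a large coupled network of such units with ordinary differential equations for population-averaged quantities.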

  14. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work, we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction performance of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
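
    The notion of model conditioning can be illustrated on a toy 1D problem (an illustrative construction, not the paper's definition of its condition number): if the discretized mean velocity solves A u = f - D tau, then the sensitivity of u to errors in the modeled Reynolds stress tau is the operator -A^{-1} D, and a large norm of that operator means the propagation step amplifies stress errors into mean-velocity errors.

```python
import numpy as np

# 1D model problem: nu * u'' = f - d(tau)/dy on a uniform interior grid with
# homogeneous Dirichlet boundaries. Then u = A^{-1} (f - D tau), so the
# stress-to-velocity sensitivity matrix is S = -A^{-1} D.
n = 50
h = 1.0 / (n + 1)
nu = 1e-3
lap = (np.diag(-2.0 * np.ones(n)) +
       np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1)) / h**2
A = nu * lap                                   # diffusion operator
D = (np.diag(np.ones(n - 1), 1) -
     np.diag(np.ones(n - 1), -1)) / (2.0 * h)  # central first derivative
S = -np.linalg.solve(A, D)                     # sensitivity of u to tau
amplification = np.linalg.norm(S, 2)
# A large spectral norm of S means small errors in the modeled Reynolds
# stress are amplified into large errors in the propagated mean velocity.
```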

  15. The influence of 14CO2 releases from regional nuclear facilities at the Heidelberg 14CO2 sampling site (1986-2014)

    NASA Astrophysics Data System (ADS)

    Kuderer, Matthias; Hammer, Samuel; Levin, Ingeborg

    2018-06-01

    Atmospheric Δ14CO2 measurements are a well-established tool to estimate the regional fossil-fuel-derived CO2 component. However, emissions from nuclear facilities can significantly alter the regional Δ14CO2 level. In order to accurately quantify the signal originating from fossil CO2 emissions, a correction term for anthropogenic 14CO2 sources has to be determined. In this study, the HYSPLIT atmospheric dispersion model has been applied to calculate this correction for the long-term Δ14CO2 monitoring site in Heidelberg. Wind fields with a spatial resolution of 2.5° × 2.5°, 1° × 1°, and 0.5° × 0.5° show systematic deviations, with coarser resolved wind fields leading to higher mean values for the correction. The mean Δ14CO2 correction finally applied for the period 1986-2014 is 2.3 ‰, with a standard deviation of 2.1 ‰ and maximum values up to 15.2 ‰. These results are based on the 0.5° × 0.5° wind field simulations in years when these fields were available (2009, 2011-2014); for the other years they are based on 2.5° × 2.5° wind field simulations, corrected with a factor of 0.43. After operations at the Philippsburg boiling water reactor ceased in 2011, the monthly nuclear correction terms decreased to less than 2 ‰, with a mean value of 0.44 ± 0.32 ‰ from 2012 to 2014.

  16. The performance of approximations of farm contiguity compared to contiguity defined using detailed geographical information in two sample areas in Scotland: implications for foot-and-mouth disease modelling.

    PubMed

    Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J

    2013-10-08

    When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogenous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous. 
Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission in FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are due to contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered, since the ability to discriminate between map-based CPs and non-CPs differs over different Euclidean distances.
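
    The three reported metrics follow directly from a 2x2 confusion matrix comparing a candidate contiguity measure against the map-based gold standard; the counts below are hypothetical:

```python
# Sensitivity, positive predictive value, and True Skill Statistic (TSS)
# from true/false positives and negatives. TSS = sensitivity + specificity - 1.
def skill_scores(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of true CPs detected
    specificity = tn / (tn + fp)   # fraction of true non-CPs rejected
    ppv = tp / (tp + fp)           # fraction of predicted CPs that are real
    tss = sensitivity + specificity - 1.0
    return sensitivity, ppv, tss

# Illustrative counts, not taken from the study:
sens, ppv, tss = skill_scores(tp=80, fp=30, fn=20, tn=170)
```

    TSS ranges from -1 to 1, with 0 meaning no better than chance, which is why values near 0.3-0.4 indicate poor discrimination and values near 0.7 reasonable discrimination.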

  17. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only fits the experimental spectra correctly, but whose elements have a mechanistic physical meaning. In order to obtain this electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A 2-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
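
    A candidate circuit of the kind compared in such a selection process can be evaluated directly from its impedance expression. The sketch below uses a simple Randles-type circuit (ohmic resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance) with illustrative parameter values; it is not the circuit selected in the paper:

```python
import math

# Impedance of R_ohm in series with (R_ct || C_dl):
#   Z(f) = R_ohm + R_ct / (1 + j*2*pi*f * R_ct * C_dl)
# Units: ohms, farads; values are illustrative only.
def impedance(f, r_ohm=0.01, r_ct=0.05, c_dl=0.5):
    jw = 1j * 2.0 * math.pi * f
    return r_ohm + r_ct / (1.0 + jw * r_ct * c_dl)

freqs = [10 ** (k / 4) for k in range(-4, 17)]  # ~0.1 Hz to 10 kHz
spectrum = [impedance(f) for f in freqs]         # traces a semicircle in the
                                                 # complex plane (Nyquist plot)
```

    The high-frequency intercept recovers the ohmic resistance and the low-frequency intercept the total resistance, which is what gives such elements their mechanistic meaning.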

  18. Application of fuzzy C-Means Algorithm for Determining Field of Interest in Information System Study STTH Medan

    NASA Astrophysics Data System (ADS)

    Rahman Syahputra, Edy; Agustina Dalimunthe, Yulia; Irvan

    2017-12-01

    Many students are confused when choosing their own field of specialization, and ultimately choose areas of specialization that do not suit them, for a variety of reasons such as simply following a friend, or facing many choices without knowing whether they have the competencies required in the chosen field of interest. This research aims to apply a clustering method, the Fuzzy C-Means algorithm, to classify students into their chosen interest field. The Fuzzy C-Means algorithm is one of the easiest and most often used algorithms in data grouping techniques because it makes efficient estimates and does not require many parameters. Several studies have led to the conclusion that the Fuzzy C-Means algorithm can be used to group data based on certain attributes. In this research, the Fuzzy C-Means algorithm is used to classify student data based on their grades in core subjects for the selection of a specialization field. This study also tested the accuracy of the Fuzzy C-Means algorithm in determining the area of interest. The study was conducted on the STT-Harapan Medan Information System Study program, and the object of research is the grades of all 2012 students of the STT-Harapan Medan Information System Study Program. From this research, it is expected that the specialization field obtained will match the students' abilities based on the prerequisite core grades.
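
    A minimal sketch of the Fuzzy C-Means update loop (fuzzifier m = 2) on one-dimensional, made-up core-subject grades:

```python
# Fuzzy C-Means: alternate membership updates and center updates.
# Unlike hard k-means, each point gets a degree of membership in every cluster.
def fcm(points, centers, m=2.0, iters=50):
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [max(abs(x - c), 1e-12) for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # center update: mean of points weighted by membership^m
        centers = [sum((u[k][i] ** m) * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(len(centers))]
    return centers, u

grades = [55.0, 60.0, 58.0, 85.0, 90.0, 88.0]   # hypothetical core-subject scores
centers, memberships = fcm(grades, centers=[min(grades), max(grades)])
# memberships[k] gives student k's degree of belonging to each interest group
```

    The soft memberships are what make the method attractive here: a borderline student is flagged as partly belonging to both interest groups rather than forced into one.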

  19. Dynamic balance in turbulent reconnection

    NASA Astrophysics Data System (ADS)

    Yokoi, N.; Higashimori, K.; Hoshino, M.

    2012-12-01

    Dynamic balance between the enhancement and suppression of transports due to turbulence in magnetic reconnection is discussed analytically and numerically by considering the interaction of the large-scale field structures with the small-scale turbulence in a consistent manner. Turbulence is expected to play an important role in bridging the small and large scales involved in magnetic reconnection. The configurations of the mean-field structure are determined by turbulence through the effective transport. At the same time, the statistical properties of turbulence are determined by the mean-field structure through the production mechanisms of turbulence. This suggests that turbulence and mean fields should be treated simultaneously in a self-consistent manner. Following the theoretical prediction on the interaction between the mean fields and turbulence in magnetic reconnection presented by Yokoi and Hoshino (2011), a self-consistent model for turbulent reconnection is constructed. In the model, the mean-field equations for compressible magnetohydrodynamics are treated with the turbulence effects incorporated through turbulence correlations such as the Reynolds stress and the turbulent electromotive force. The transport coefficients appearing in the expressions for these correlations are not adjustable parameters but are determined through the transport equations of turbulent statistical quantities such as the turbulent MHD energy and the turbulent cross helicity. One of the prominent features of this reconnection model is that turbulence is not prescribed; rather, the generation and sustainment of turbulence through the mean-field inhomogeneities are treated. The theoretical predictions are confirmed by numerical simulation of the model equations. 
These predictions include the quadrupole cross helicity distribution around the reconnection region, enhancement of reconnection rate due to turbulence, localization of the reconnection region through the cross-helicity effect, etc. Some implications to the satellite observation of the magnetic reconnection will be also given. Reference: Yokoi, N. and Hoshino, M. (2011) Physics of Plasmas, 18, 111208.

  20. Methodology for developing life tables for sessile insects in the field using the Whitefly, Bemisia tabaci, in cotton as a model system

    USDA-ARS?s Scientific Manuscript database

    Life tables provide a means of measuring the schedules of birth and death from populations over time. They also can be used to quantify the sources and rates of mortality in populations, which has a variety of applications in ecology, including agricultural ecosystems. Horizontal, or cohort-based, l...

  1. Dense matter theory: A simple classical approach

    NASA Astrophysics Data System (ADS)

    Savić, P.; Čelebonović, V.

    1994-07-01

    In the sixties, the first author and R. Kašanin started developing a mean-field theory of dense matter. It is based on the Coulomb interaction, supplemented by a microscopic selection rule and a set of experimentally founded postulates. Applications of the theory range from the calculation of models of planetary internal structure to DAC experiments.

  2. A physical data model for fields and agents

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. 
Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.

  3. Interplay between the local information based behavioral responses and the epidemic spreading in complex networks.

    PubMed

    Liu, Can; Xie, Jia-Rong; Chen, Han-Shuang; Zhang, Hai-Feng; Tang, Ming

    2015-10-01

    The spreading of an infectious disease can trigger human behavioral responses to the disease, which in turn play a crucial role in the spreading of the epidemic. In this study, to illustrate the impact of human behavioral responses, a new class of individuals, S(F), is introduced into the classical susceptible-infected-recovered model. The S(F) state represents susceptible individuals who take self-initiated protective measures to lower their probability of being infected; a susceptible individual may move to the S(F) state with a given response rate when contacting an infectious neighbor. Via the percolation method, theoretical formulas for the epidemic threshold and the prevalence of the epidemic are derived. Our findings indicate that, as the response rate increases, the epidemic threshold is raised and the prevalence of the epidemic is reduced. The analytical results are verified by numerical simulations. In addition, we demonstrate that, because the mean-field method neglects dynamic correlations, it yields an incorrect result: the epidemic threshold appears unrelated to the response rate, i.e., the additional S(F) state seems to have no impact on the epidemic threshold.
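The modified SIR dynamics described in this abstract can be sketched with a short Monte Carlo simulation. The graph construction, parameter values, and state labels below are illustrative assumptions, not the authors' implementation:

```python
import random

def simulate(n=2000, k=6, beta=0.2, gamma=0.1, omega=0.3, tmax=200, seed=1):
    """SIR model extended with a protected-susceptible state S_F.

    On contact with an infectious neighbor, an S node becomes infected
    with probability beta, or moves to S_F with probability omega (the
    response rate); S_F nodes are assumed fully protected here.
    """
    rng = random.Random(seed)
    # Random graph: each node draws ~k random neighbors (edges are mutual).
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), k):
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    state = ['S'] * n
    for i in rng.sample(range(n), 10):       # initial infectious seeds
        state[i] = 'I'
    for _ in range(tmax):
        new = list(state)
        for i in range(n):
            if state[i] == 'I':
                for j in nbrs[i]:
                    if new[j] == 'S':
                        r = rng.random()
                        if r < beta:
                            new[j] = 'I'
                        elif r < beta + omega:
                            new[j] = 'F'     # protected susceptible S_F
                if rng.random() < gamma:
                    new[i] = 'R'
        state = new
        if 'I' not in state:
            break
    return {s: state.count(s) for s in 'SIFR'}
```

Raising the response rate omega should shrink the final epidemic size (the R count), mirroring the reported reduction in prevalence.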

  4. Quantum mean-field approximation for lattice quantum models: Truncating quantum correlations and retaining classical ones

    NASA Astrophysics Data System (ADS)

    Malpetti, Daniele; Roscilde, Tommaso

    2017-02-01

    The mean-field approximation is at the heart of our understanding of complex systems, despite its fundamental limitation of completely neglecting correlations between the elementary constituents. In a recent work [Phys. Rev. Lett. 117, 130401 (2016), 10.1103/PhysRevLett.117.130401], we have shown that in quantum many-body systems at finite temperature, two-point correlations can be formally separated into a thermal part and a quantum part, and that quantum correlations are generically found to decay exponentially at finite temperature, with a characteristic, temperature-dependent quantum coherence length. The existence of these two different forms of correlation in quantum many-body systems suggests the possibility of formulating an approximation which affects quantum correlations only, without preventing the correct description of classical fluctuations at all length scales. Focusing on lattice boson and quantum Ising models, we make use of the path-integral formulation of quantum statistical mechanics to introduce such an approximation, which we dub the quantum mean-field (QMF) approach, and which can be readily generalized to a cluster form (cluster QMF or cQMF). The cQMF approximation reduces to cluster mean-field theory at T = 0, while at any finite temperature it produces a family of systematically improved, semi-classical approximations to the quantum statistical mechanics of the lattice theory at hand. Contrary to standard MF approximations, the correct nature of thermal critical phenomena is captured for any cluster size. In the two exemplary cases of the two-dimensional quantum Ising model and of two-dimensional quantum rotors, we study systematically the convergence of the cQMF approximation towards the exact result, and show that the convergence is typically linear or sublinear in the boundary-to-bulk ratio of the clusters as T → 0, while it becomes faster than linear as T grows.
These results pave the way towards the development of semiclassical numerical approaches based on an approximate, yet systematically improved account of quantum correlations.
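As a point of reference for the hierarchy described above, the zeroth-order classical mean-field approximation for the Ising ferromagnet reduces to a one-line self-consistency equation. A minimal fixed-point solver (the coupling, coordination number, and tolerances are illustrative choices, not taken from the paper) might look like:

```python
import math

def mf_magnetization(T, J=1.0, z=4, m0=0.5, tol=1e-12, max_iter=10000):
    """Standard (classical) mean-field fixed point m = tanh(z*J*m / T).

    This is the zeroth-order approximation that cluster/quantum
    mean-field schemes such as cQMF systematically improve upon.
    """
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

In this approximation the critical temperature is T_c = zJ: below it the iteration converges to a nonzero magnetization, above it to zero; cluster corrections shift this picture toward the exact result.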

  5. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (the coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as functions of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  6. MEAN-FIELD MODELING OF AN α² DYNAMO COUPLED WITH DIRECT NUMERICAL SIMULATIONS OF RIGIDLY ROTATING CONVECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp

    2014-10-10

    The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α² dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.

  7. Experimental validation of a finite-difference model for the prediction of transcranial ultrasound fields based on CT images

    NASA Astrophysics Data System (ADS)

    Bouchoux, Guillaume; Bader, Kenneth B.; Korfhagen, Joseph J.; Raymond, Jason L.; Shivashankar, Ravishankar; Abruzzo, Todd A.; Holland, Christy K.

    2012-12-01

    The prevalence of stroke worldwide and the paucity of effective therapies have triggered interest in the use of transcranial ultrasound as an adjuvant to thrombolytic therapy. Previous studies have shown that 120 kHz ultrasound enhanced thrombolysis and penetrated efficiently through the temporal bone. The objective of our study was to develop an accurate finite-difference model of acoustic propagation through the skull based on computed tomography (CT) images. The computational approach, which neglects shear waves, was compared with a simple analytical model that includes shear waves. Acoustic pressure fields from a two-element annular array (120 and 60 kHz) were acquired in vitro in four human skulls. Simulations were performed using registered CT scans and a source term determined by acoustic holography. Mean errors below 14% were found between simulated pressure fields and the corresponding measurements. Intracranial peak pressures were systematically underestimated and reflections from the contralateral bone were overestimated; determination of the acoustic impedance of the bone from the CT images was the likely source of error. High correlation between predictions and measurements (R² = 0.93 and R² = 0.88 for transmitted and reflected wave amplitudes, respectively) demonstrated that this model is suitable for quantitative estimation of the acoustic fields generated during 40-200 kHz ultrasound-enhanced ischemic stroke treatment.
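A full 3D CT-registered simulation is beyond the scope of a listing like this, but the underlying finite-difference idea can be sketched in one dimension. The grid sizes, sound speeds, and pulse shape below are illustrative assumptions (shear waves and density contrast are neglected, as in the study's fluid approximation):

```python
import math

def fd_wave_1d(nx=400, nt=400, dx=1e-3, c_water=1500.0, c_bone=3000.0):
    """Second-order finite differences for u_tt = c(x)^2 u_xx in 1D.

    A Gaussian pulse launched in 'water' crosses a 50-cell 'bone' layer
    (cells 150-199); the ends are held at zero (rigid boundaries).
    """
    dt = 0.9 * dx / c_bone                       # CFL-stable time step
    c = [c_bone if 150 <= i < 200 else c_water for i in range(nx)]
    u = [math.exp(-(((i - 60) * dx) / 5e-3) ** 2) for i in range(nx)]
    u_prev = list(u)                             # zero initial velocity
    for _ in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            r = (c[i] * dt / dx) ** 2
            u_next[i] = 2 * u[i] - u_prev[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next
    return u
```

The time step is set by the CFL condition of the fastest medium; the pulse emerging beyond the high-speed layer is attenuated relative to the incident one, qualitatively matching the transmission losses measured through the temporal bone.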

  8. The mean magnetic field of the sun: Observations at Stanford

    NASA Technical Reports Server (NTRS)

    Scherrer, P. H.; Wilcox, J. M.; Svalgaard, L.; Duvall, T. L., Jr.; Dittmer, P. H.; Gustafson, E. K.

    1977-01-01

    A solar telescope was built at Stanford University to study the organization and evolution of large-scale solar magnetic fields and velocities. The observations are made using a Babcock-type magnetograph connected to a 22.9 m vertical Littrow spectrograph. Sun-as-a-star integrated light measurements of the mean solar magnetic field have been made daily since May 1975. The typical mean field magnitude is about 0.15 gauss, with a typical measurement error of less than 0.05 gauss. The mean field polarity pattern is essentially identical to the interplanetary magnetic field sector structure (seen near the Earth with a 4-day lag). The differences in the observed structures can be understood in terms of a warped current sheet model.

  9. Polarizable multipolar electrostatics for cholesterol

    NASA Astrophysics Data System (ADS)

    Fletcher, Timothy L.; Popelier, Paul L. A.

    2016-08-01

    FFLUX is a novel force field under development for biomolecular modelling, based on topological atoms and the machine learning method kriging. Successful kriging models have been obtained for realistic electrostatics of amino acids, small peptides, and some carbohydrates, but here, for the first time, we construct kriging models for a sizeable ligand of great importance: cholesterol. Cholesterol's mean total (internal) electrostatic energy prediction error amounts to 3.9 kJ mol-1, which pleasingly falls below the threshold of 1 kcal mol-1 (≈4.2 kJ mol-1) often cited for accurate biomolecular modelling. We present a detailed analysis of the error distributions.
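Kriging, as used by FFLUX, predicts a property as a covariance-weighted sum of training values. The 1D sketch below, with an assumed Gaussian kernel and no nugget term, is only meant to illustrate the interpolation property, not the FFLUX machinery:

```python
import math

def kriging_fit_predict(xs, ys, x_new, length=1.0):
    """Simple kriging (Gaussian-process interpolation) in one dimension.

    Solves K w = k(x_new) with a Gaussian covariance kernel and returns
    the weighted sum of training values; with no nugget term the
    predictor reproduces the training data exactly at the nodes.
    """
    n = len(xs)
    k = lambda a, b: math.exp(-((a - b) / length) ** 2)
    # Augmented system [K | k*], solved by Gauss-Jordan elimination
    # with partial pivoting.
    A = [[k(xs[i], xs[j]) for j in range(n)] + [k(xs[i], x_new)]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    w = [A[i][n] / A[i][i] for i in range(n)]
    return sum(wi * yi for wi, yi in zip(w, ys))

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.5, 1.5]
y_hat = kriging_fit_predict(xs, ys, 1.0)
```

Because there is no noise term, the predictor passes exactly through every training point; prediction error like the 3.9 kJ mol-1 quoted above arises only away from the training geometries.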

  10. Monitoring and modelling of white dwarfs with extremely weak magnetic fields. WD 2047+372 and WD 2359-434

    NASA Astrophysics Data System (ADS)

    Landstreet, J. D.; Bagnulo, S.; Valyavin, G.; Valeev, A. F.

    2017-11-01

    Magnetic fields are detected in a few percent of white dwarfs. The number of such magnetic white dwarfs known now runs to some hundreds. Fields range in strength from a few kG to several hundred MG. Almost all the known magnetic white dwarfs have a mean field modulus ≥1 MG. We are trying to fill a major gap in observational knowledge at the low-field limit (≤200 kG) using circular spectro-polarimetry. In this paper we report the discovery and monitoring of strong, periodic magnetic variability in two previously discovered "super-weak field" magnetic white dwarfs, WD 2047+372 and WD 2359-434. WD 2047+372 has a mean longitudinal field that reverses between about -12 and +15 kG, with a period of 0.243 d, while its mean field modulus appears nearly constant at 60 kG. The observations can be interpreted in terms of a dipolar field tilted with respect to the stellar rotation axis. WD 2359-434 always shows a weak positive longitudinal field with values between about 0 and +12 kG, varying only weakly with stellar rotation, while the mean field modulus varies between about 50 and 100 kG. The rotation period is found to be 0.112 d using the variable shape of the Hα line core, consistent with available photometry. The field of this star appears to be much more complex than a dipole, and is probably not axisymmetric. Available photometry shows that WD 2359-434 is a light variable with an amplitude of only 0.005 mag; our own photometry shows that if WD 2047+372 is photometrically variable, the amplitude is below about 0.01 mag. These are the first models for magnetic white dwarfs with fields below about 100 kG based on magnetic measurements through the full stellar rotation. They reveal two very different magnetic surface configurations, and show that, contrary to simple ohmic decay theory, WD 2359-434 has a much more complex surface field than the much younger WD 2047+372. 
Based, in part, on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under observing programmes 095.D-0264 and 097.D-0264, and obtained from the ESO/ST-ECF Science Archive Facility; in part, on observations made with the William Herschel Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias; and in part on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laitinen, T.; Dalla, S., E-mail: tlmlaitinen@uclan.ac.uk

    Current particle transport models describe the propagation of charged particles across the mean field direction in turbulent plasmas as diffusion. However, recent studies suggest that at short timescales, such as soon after solar energetic particle (SEP) injection, particles remain on turbulently meandering field lines, which results in nondiffusive initial propagation across the mean magnetic field. In this work, we use a new technique to investigate how the particles are displaced from their original field lines, and we quantify the parameters of the transition from field-aligned particle propagation along meandering field lines to particle diffusion across the mean magnetic field. We show that the initial decoupling of the particles from the field lines is slow, and particles remain within a Larmor radius of their initial meandering field lines for tens to hundreds of Larmor periods, for 0.1–10 MeV protons in turbulence conditions typical of the solar wind at 1 au. Subsequently, particles decouple from their initial field lines and, after hundreds to thousands of Larmor periods, reach time-asymptotic diffusive behavior consistent with particle diffusion across the mean field caused by the meandering of the field lines. We show that the typical duration of the prediffusive phase, hours to tens of hours for 10 MeV protons in 1 au solar wind turbulence conditions, is significant for SEP propagation to 1 au and must be taken into account when modeling SEP propagation in interplanetary space.

  12. Visual Field Defects and Retinal Ganglion Cell Losses in Human Glaucoma Patients

    PubMed Central

    Harwerth, Ronald S.; Quigley, Harry A.

    2007-01-01

    Objective The depth of visual field defects is correlated with retinal ganglion cell densities in experimental glaucoma. The purpose of this study was to determine whether a similar structure-function relationship holds for human glaucoma. Methods The study was based on retinal ganglion cell densities and visual thresholds of patients with documented glaucoma (Kerrigan-Baumrind et al.). The data were analyzed by a model that predicted ganglion cell densities from standard clinical perimetry; the predictions were then compared to histologic cell counts. Results The model, without free parameters, produced accurate and relatively precise quantification of ganglion cell densities associated with visual field defects. For 437 sets of data, the unity correlation for predicted vs. measured cell densities had a coefficient of determination of 0.39. The mean absolute deviation of the predicted vs. measured values was 2.59 dB; the mean and SD of the distribution of residual errors of prediction were -0.26 ± 3.22 dB. Conclusions Visual field defects by standard clinical perimetry are proportional to neural losses caused by glaucoma. Clinical Relevance The evidence for quantitative structure-function relationships provides a scientific basis for interpreting glaucomatous neuropathy from visual thresholds and supports the application of standard perimetry to establish the stage of the disease. PMID:16769839

  13. Fluid Flow Prediction with Development System Interwell Connectivity Influence

    NASA Astrophysics Data System (ADS)

    Bolshakov, M.; Deeva, T.; Pustovskikh, A.

    2016-03-01

    In this paper interwell connectivity is studied. First, a literature review of existing methods was conducted; the methods fall into three groups: statistically-based methods, material (fluid) propagation-based methods, and potential (pressure) change propagation-based methods. The first two groups have the disadvantage that they do not account for fluid flow through porous media and ignore changes in well conditions (BHP, skin factor, etc.); the last group considers both. In this work the Capacitance Method (CM) was chosen for research. This method is based on material balance and uses weight coefficients (lambdas) to assess the influence of each well. Next, a synthetic model consisting of one injection well and one production well was created to examine the CM. The CM gave good results: flow rates calculated by the analytical method matched the flow rates in the model. A further synthetic model, with six production wells and one injection well representing a seven-spot pattern, was then created; to obtain the lambda weight coefficients, a delta function was introduced and fitted by a minimization algorithm. A synthetic model with three injection and thirteen production wells, simulating a seven-spot pattern production system, was also created. Finally, the CM was adjusted on real data from oil Field Ω; in this case the CM did not give sufficiently satisfactory results in terms of the field liquid rates. In conclusion, recommendations to simplify the CM calculations are given: Field Ω is assumed to have one injection well and one production well, in which case satisfactory results for production rates and cumulative production were obtained.
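The core of the capacitance method is a material-balance filter: each producer's rate is an exponentially smoothed, lambda-weighted sum of injection rates. A minimal one-injector/one-producer sketch (the discretization and fitting approach are illustrative; the paper's multi-well minimization is more involved):

```python
import math

def crm_rate(inj, lam, tau, dt=1.0):
    """Capacitance-model (CRM) producer rate for one injector-producer pair,
    assuming constant bottom-hole pressure:
        q_k = q_{k-1} * exp(-dt/tau) + lam * (1 - exp(-dt/tau)) * I_k
    where lam is the interwell connectivity weight and tau the time constant."""
    e = math.exp(-dt / tau)
    q, out = 0.0, []
    for i_k in inj:
        q = q * e + lam * (1.0 - e) * i_k
        out.append(q)
    return out

def fit_lambda(inj, q_obs, tau, dt=1.0):
    """Least-squares estimate of lambda: q is linear in lambda for fixed tau,
    so project the observed rates onto the unit-lambda response."""
    basis = crm_rate(inj, 1.0, tau, dt)
    return sum(b * q for b, q in zip(basis, q_obs)) / sum(b * b for b in basis)

# Synthetic injection history with a known connectivity of 0.62.
inj = [100.0 + 20.0 * math.sin(0.3 * k) for k in range(200)]
q_true = crm_rate(inj, lam=0.62, tau=5.0)
lam_hat = fit_lambda(inj, q_true, tau=5.0)
```

Recovering the known lambda from synthetic data is the usual sanity check before applying the method to field data, where noise and changing well conditions make the fit harder (as seen for Field Ω).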

  14. Saddles and dynamics in a solvable mean-field model

    NASA Astrophysics Data System (ADS)

    Angelani, L.; Ruocco, G.; Zamponi, F.

    2003-05-01

    We use the saddle-approach, recently introduced in the numerical investigation of simple model liquids, in the analysis of a mean-field solvable system. The investigated system is the k-trigonometric model, a k-body interaction mean field system, that generalizes the trigonometric model introduced by Madan and Keyes [J. Chem. Phys. 98, 3342 (1993)] and that has been recently introduced to investigate the relationship between thermodynamics and topology of the configuration space. We find a close relationship between the properties of saddles (stationary points of the potential energy surface) visited by the system and the dynamics. In particular the temperature dependence of saddle order follows that of the diffusivity, both having an Arrhenius behavior at low temperature and a similar shape in the whole temperature range. Our results confirm the general usefulness of the saddle-approach in the interpretation of dynamical processes taking place in interacting systems.

  15. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue in ensemble forecasting. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of the stochastic model-error perturbations on deterministic ENSO predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecast biases and thus corrects the forecast through this nonlinear heating mechanism.
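The closing point, that nonlinear terms rectify zero-mean noise into a nonzero ensemble mean, is easy to demonstrate with a toy quadratic nonlinearity (the model and numbers are illustrative, unrelated to the intermediate coupled model used in the paper):

```python
import random
import statistics

def ensemble_mean_shift(n=20000, sigma=0.5, seed=42):
    """Push zero-mean Gaussian perturbations through a quadratic term.

    Since E[(x + eta)^2] - x^2 = sigma^2, the ensemble mean shifts even
    though the perturbations eta themselves average to zero.
    """
    rng = random.Random(seed)
    x = 1.0
    members = [(x + rng.gauss(0.0, sigma)) ** 2 for _ in range(n)]
    return statistics.fmean(members) - x ** 2
```

For f(x) = x², a zero-mean perturbation of spread σ = 0.5 shifts the ensemble mean by about σ² = 0.25; this is the mechanism by which the hindcasts' nonlinear heating terms convert stochastic perturbations into a systematic correction.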

  16. Physical Modeling of Activation Energy in Organic Semiconductor Devices based on Energy and Momentum Conservations

    PubMed Central

    Mao, Ling-Feng; Ning, H.; Hu, Changjun; Lu, Zhaolin; Wang, Gaofeng

    2016-01-01

    Field-effect mobility in an organic device is determined by the activation energy. A new physical model of the activation energy is proposed by virtue of the energy and momentum conservation equations. The dependencies of the activation energy on the gate voltage and the drain voltage, which were observed in experiments in the previous independent literature, can be well explained using the proposed model. Moreover, the expression in the proposed model, in which all parameters have clear physical meanings, takes the same mathematical form as the well-known Meyer-Neldel relation, which, being a phenomenological model, lacks clear physical meaning for some of its parameters. Thus the proposed model not only describes a physical mechanism but also offers the possibility of designing the next generation of high-performance optoelectronics and integrated flexible circuits by optimizing device physical parameters. PMID:27103586

  17. A unified framework for heat and mass transport at the atomic scale

    NASA Astrophysics Data System (ADS)

    Ponga, Mauricio; Sun, Dingyi

    2018-04-01

    We present a unified framework to simulate heat and mass transport in systems of particles. The proposed framework is based on kinematic mean field theory and uses a phenomenological master equation to compute effective transport rates between particles without the need to evaluate operators. We exploit this advantage and apply the model to simulate transport phenomena at the nanoscale. We demonstrate that, when calibrated to experimentally-measured transport coefficients, the model can accurately predict transient and steady state temperature and concentration profiles even in scenarios where the length of the device is comparable to the mean free path of the carriers. Through several example applications, we demonstrate the validity of our model for all classes of materials, including ones that, until now, would have been outside the domain of computational feasibility.
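A phenomenological master equation of the kind described, with effective pairwise transport rates between particles, can be sketched in a few lines. The chain topology, unit rates, and forward-Euler integration below are illustrative assumptions, not the paper's calibrated model:

```python
def relax_network(T0, rates, dt=0.01, steps=20000):
    """Integrate the master equation dT_i/dt = sum_j k_ij (T_j - T_i)
    with forward Euler; symmetric rates conserve the total exactly."""
    T = list(T0)
    n = len(T)
    for _ in range(steps):
        dT = [sum(rates[i][j] * (T[j] - T[i]) for j in range(n))
              for i in range(n)]
        T = [t + dt * d for t, d in zip(T, dT)]
    return T

# Three particles in a chain with unit exchange rates.
k = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
T_final = relax_network([300.0, 400.0, 500.0], k)
```

Symmetric rates conserve the total of the transported quantity, and the network relaxes to the common mean, the discrete analogue of a steady diffusive profile; in the paper the effective rates would instead be calibrated to measured transport coefficients.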

  18. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    PubMed

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based Three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized into a vector pattern by some technique such as concatenation. However, implicit structural or local contextual information may be lost in this transformation. In keeping with the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation yielded an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM.
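The key departure from vector-based AAMs is working with mode-n unfoldings of the volume instead of one long concatenated vector. A minimal pure-Python sketch of the unfolding step, using the common Kolda-Bader column ordering (the HOSVD itself would then apply an SVD to each unfolding), could be:

```python
def unfold(tensor, mode):
    """Mode-n unfolding (matricization) of a 3D tensor stored as nested
    lists t[i][j][k]: mode-n fibers become the rows' entries, with the
    earlier remaining index varying fastest (Kolda-Bader ordering)."""
    ni, nj, nk = len(tensor), len(tensor[0]), len(tensor[0][0])
    mat = [[] for _ in range((ni, nj, nk)[mode])]
    for k in range(nk):
        for j in range(nj):
            for i in range(ni):
                mat[(i, j, k)[mode]].append(tensor[i][j][k])
    return mat

t = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]     # t[i][j][k], shape 2x2x2
M0 = unfold(t, 0)                            # 2x4 mode-1 unfolding
```

Unlike flattening the whole volume into a single vector, each unfolding preserves one mode's structure, which is what lets HOSVD retain the contextual information the abstract refers to.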

  19. The LUE data model for representation of agents and fields

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2017-04-01

    Traditionally, agent-based and field-based modelling environments use different data models to represent the state of the information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models storing collections of class instances (each instance grouping its properties) execute more slowly than models storing grouped collections of properties. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier to writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. 
The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue

  20. Three-Fluid Magnetohydrodynamic Modeling of the Solar Wind in the Outer Heliosphere

    NASA Technical Reports Server (NTRS)

    Usmanov, Arcadi V.; Goldstein, Melvyn L.; Matthaeus, William H.

    2011-01-01

    We have developed a three-fluid, fully three-dimensional magnetohydrodynamic model of the solar wind plasma in the outer heliosphere as a co-moving system of solar wind protons, electrons, and interstellar pickup protons, with separate energy equations for each species. Our approach takes into account the effects of electron heat conduction and dissipation of Alfvenic turbulence on the spatial evolution of the solar wind plasma and interplanetary magnetic fields. The turbulence transport model is based on the Reynolds decomposition of physical variables into mean and fluctuating components and uses the turbulent phenomenologies that describe the conversion of fluctuation energy into heat due to a turbulent cascade. We solve the coupled set of the three-fluid equations for the mean-field solar wind and the turbulence equations for the turbulence energy, cross helicity, and correlation length. The equations are written in the rotating frame of reference and include heating by turbulent dissipation, energy transfer from interstellar pickup protons to solar wind protons, and solar wind deceleration due to the interaction with the interstellar hydrogen. The numerical solution is constructed by the time relaxation method in the region from 0.3 to 100 AU. Initial results from the novel model are presented.

  1. Spatial two-photon coherence of the entangled field produced by down-conversion using a partially spatially coherent pump beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Anand Kumar; Boyd, Robert W.

    2010-01-15

We study the spatial coherence properties of the entangled two-photon field produced by parametric down-conversion (PDC) when the pump field is, spatially, a partially coherent beam. By explicitly treating the case of a pump beam of the Gaussian Schell-model type, we show that in PDC the spatial coherence properties of the pump field are entirely transferred to the spatial coherence properties of the down-converted two-photon field. As one important consequence of this study, we find that, for two-qubit states based on the position correlations of the two-photon field, the maximum achievable entanglement, as quantified by concurrence, is bounded by the degree of spatial coherence of the pump field. These results could be important in providing a means of controlling the entanglement of down-converted photons by tailoring the degree of coherence of the pump field.

  2. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
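For contrast with the deterministic mean-field approximation described here, a minimal perturbed-observation EnKF analysis step for a directly observed scalar state can be sketched as follows (our illustration; all names are assumptions, not from the paper):

```python
import random

def enkf_analysis(ensemble, y_obs, obs_var, rng):
    """One perturbed-observation EnKF analysis step for a scalar state.
    The Kalman gain is built from the sample variance of the ensemble,
    and each member assimilates an independently perturbed observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(500)]
posterior = enkf_analysis(prior, y_obs=2.0, obs_var=1.0, rng=rng)
```

The DMFEnKF of the paper replaces this Monte Carlo ensemble with a density evolved by a PDE solver and integrated by a quadrature rule, removing the sampling error that the stochastic version carries.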

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz, Gerardo, E-mail: ortizg@indiana.edu; Cobanera, Emilio

We investigate Majorana modes of number-conserving fermionic superfluids from both basic physics principles and concrete model perspectives. After reviewing a criterion for establishing topological superfluidity in interacting systems, based on many-body fermionic parity switches, we reveal the emergence of zero-energy modes anticommuting with fermionic parity. Those many-body Majorana modes are constructed as coherent superpositions of states with different numbers of fermions. While realization of Majorana modes beyond mean field is plausible, we show that the challenge of quantum-controlling them is compounded by particle conservation, and more realistic protocols will have to balance engineering needs with stringent constraints coming from superselection rules. Majorana modes in number-conserving systems are the result of a peculiar interplay between quantum statistics, fermionic parity, and an unusual form of spontaneous symmetry breaking. We test these ideas on the Richardson–Gaudin–Kitaev chain, a number-conserving model solvable by way of the algebraic Bethe ansatz, and equivalent in mean field to a long-range Kitaev chain.

  5. A curious relationship between Potts glass models

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Chiaki

    2015-08-01

A Potts glass model proposed by Nishimori and Stephen [H. Nishimori, M.J. Stephen, Phys. Rev. B 27, 5644 (1983)] is analyzed by means of replica mean-field theory. The model is discrete, possesses a gauge symmetry, and is called the Potts gauge glass model. Comparing the present results with those for the conventional Potts glass model reveals both coincidences and differences between the two. One coincidence is that the properties of the Potts glass phase in this model agree with those of the conventional model at the mean-field level. One difference is that, unlike the conventional p-state Potts glass model, the present system for large p does not become ferromagnetic at low temperature under a concentration of ferromagnetic interactions. These results support numerical investigation of the present model as a route to studying the Potts glass phase in finite dimensions.

  6. Assimilation of Altimeter Data into a Quasigeostrophic Model of the Gulf Stream System. Part 1; Dynamical Considerations

    NASA Technical Reports Server (NTRS)

    Capotondi, Antonietta; Malanotte-Rizzoli, Paola; Holland, William R.

    1995-01-01

    The dynamical consequences of constraining a numerical model with sea surface height data have been investigated. The model used for this study is a quasigeostrophic model of the Gulf Stream region. The data that have been assimilated are maps of sea surface height obtained as the superposition of sea surface height variability deduced from the Geosat altimeter measurements and a mean field constructed from historical hydrographic data. The method used for assimilating the data is the nudging technique. Nudging has been implemented in such a way as to achieve a high degree of convergence of the surface model fields toward the observations. The assimilation of the surface data is thus equivalent to the prescription of a surface pressure boundary condition. The authors analyzed the mechanisms of the model adjustment and the characteristics of the resultant equilibrium state when the surface data are assimilated. Since the surface data are the superposition of a mean component and an eddy component, in order to understand the relative role of these two components in determining the characteristics of the final equilibrium state, two different experiments have been considered: in the first experiment only the climatological mean field is assimilated, while in the second experiment the total surface streamfunction field (mean plus eddies) has been used. It is shown that the model behavior in the presence of the surface data constraint can be conveniently described in terms of baroclinic Fofonoff modes. The prescribed mean component of the surface data acts as a 'surface topography' in this problem. Its presence determines a distortion of the geostrophic contours in the subsurface layers, thus constraining the mean circulation in those layers. The intensity of the mean flow is determined by the inflow/outflow conditions at the open boundaries, as well as by eddy forcing and dissipation.
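The nudging technique described above amounts to adding a relaxation term that drags the model's surface streamfunction toward the observed field. A minimal scalar sketch (our own illustration; the relaxation rate is an assumed value):

```python
def nudge(psi, psi_obs, gamma, dt, tendency=0.0, steps=1000):
    """Integrate d(psi)/dt = tendency - gamma * (psi - psi_obs).
    A large relaxation rate gamma forces the model value to track the
    observations, which is effectively a prescribed surface boundary
    condition in the layered model."""
    for _ in range(steps):
        psi += dt * (tendency - gamma * (psi - psi_obs))
    return psi

surface = nudge(0.0, 1.0, gamma=5.0, dt=0.01)
```

Choosing gamma large, as in the paper's implementation, achieves a high degree of convergence of the surface model field toward the data.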

  7. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter

    PubMed Central

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

This paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, of the whole application. A sensor fusion algorithm is deployed for these reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is more accurate estimation of the dynamic model parameters, especially when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low-cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents the modelling procedure with the finite element method, the design and parameter settings of a sensor fusion algorithm with the Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197
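At the heart of the Unscented Kalman Filter used in this work is the unscented transform, which propagates a small set of sigma points through the nonlinearity instead of linearizing it. A scalar sketch with commonly used default parameters (our illustration, not the paper's implementation):

```python
import math

def unscented_transform(mu, var, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Unscented transform of a scalar Gaussian (mu, var) through f:
    propagate 2n+1 sigma points (n=1) and re-estimate mean and variance
    with the standard mean (wm) and covariance (wc) weights."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    points = [mu, mu + spread, mu - spread]
    wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    wc = list(wm)
    wc[0] += 1 - alpha ** 2 + beta
    ys = [f(p) for p in points]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var

m, v = unscented_transform(0.0, 1.0, lambda x: x ** 2)
```

For x ~ N(0, 1) pushed through x², the transform recovers mean 1 and variance 2, whereas a first-order linearization at the mean would predict zero for both.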

  9. Estimation of Particulate Mass and Manganese Exposure Levels among Welders

    PubMed Central

    Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.

    2011-01-01

Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposure based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating that the final model is robust given the available data.
Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large proportion of variance explained by the final model, together with the positive generalizability results of the cross-validation, increases confidence that estimates derived from this model can be used to estimate welder exposures in the absence of individual measurement data. PMID:20870928
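The modelling approach described, regressing the natural log of reported mean exposures on categorical determinants, can be sketched for a single 0/1 determinant. The data, function name and coefficient values below are hypothetical, not taken from the study:

```python
import math

def fit_log_linear(x, means):
    """OLS of ln(mean exposure) on one binary determinant (e.g. enclosed
    space = 1, open area = 0). Returns intercept, slope and R^2."""
    ln_y = [math.log(v) for v in means]
    n = len(x)
    xb = sum(x) / n
    yb = sum(ln_y) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, ln_y))
    slope = sxy / sxx
    intercept = yb - slope * xb
    ss_res = sum((yi - intercept - slope * xi) ** 2
                 for xi, yi in zip(x, ln_y))
    ss_tot = sum((yi - yb) ** 2 for yi in ln_y)
    return intercept, slope, 1.0 - ss_res / ss_tot

# Hypothetical summary means (mg/m^3): higher in enclosed spaces.
enclosed = [0, 0, 0, 1, 1, 1]
mn_means = [1.0, 1.2, 0.9, 3.0, 3.5, 2.8]
b0, b1, r2 = fit_log_linear(enclosed, mn_means)
```

The paper's final model used welding process and degree of enclosure as categorical predictors and checked robustness by cross-validation; the closed-form OLS above generalizes to that case with dummy variables.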

  10. An Energy-Based Hysteresis Model for Magnetostrictive Transducers

    NASA Technical Reports Server (NTRS)

    Calkins, F. T.; Smith, R. C.; Flatau, A. B.

    1997-01-01

    This paper addresses the modeling of hysteresis in magnetostrictive transducers. This is considered in the context of control applications which require an accurate characterization of the relation between input currents and strains output by the transducer. This relation typically exhibits significant nonlinearities and hysteresis due to inherent properties of magnetostrictive materials. The characterization considered here is based upon the Jiles-Atherton mean field model for ferromagnetic hysteresis in combination with a quadratic moment rotation model for magnetostriction. As demonstrated through comparison with experimental data, the magnetization model very adequately quantifies both major and minor loops under various operating conditions. The combined model can then be used to accurately characterize output strains at moderate drive levels. The advantages to this model lie in the small number (six) of required parameters and the flexibility it exhibits in a variety of operating conditions.

  11. Field optimization method of a dual-axis atomic magnetometer based on frequency-response and dynamics

    NASA Astrophysics Data System (ADS)

    Xing, Li; Quan, Wei; Fan, Wenfeng; Li, Rujie; Jiang, Liwei; Fang, Jiancheng

    2018-05-01

The frequency response and dynamics of a dual-axis spin-exchange-relaxation-free (SERF) atomic magnetometer are investigated by means of transfer function analysis. The frequency response at different bias magnetic fields is tested to demonstrate the effect of the residual magnetic field. The resonance frequency of the alkali atoms and the magnetic linewidth can be obtained simultaneously through our theoretical model. The coefficient of determination of the fitting results is better than 0.995 with 95% confidence bounds. Additionally, step responses are applied to analyze the dynamics of the control system and the effect of imperfections. Finally, a noise-limited magnetic field resolution of 15 fT/√Hz has been achieved for our dual-axis SERF atomic magnetometer through magnetic field optimization.

  12. Species Distribution 2.0: An Accurate Time- and Cost-Effective Method of Prospection Using Street View Imagery

    PubMed Central

    Schwoertzig, Eugénie; Millon, Alexandre

    2016-01-01

Species occurrence data provide crucial information for biodiversity studies in the current context of global environmental changes. Such studies often rely on a limited number of occurrence data collected in the field and on pseudo-absences arbitrarily chosen within the study area, which reduces the value of these studies. To overcome this issue, we propose an alternative method of prospection using geo-located street view imagery (SVI). Following a standardised protocol of virtual prospection using both vertical (aerial photographs) and horizontal (SVI) perceptions, we surveyed 1097 randomly selected cells across Spain (0.1 x 0.1 degree, i.e. 20% of Spain) for the presence of Arundo donax L. (Poaceae). In total we detected A. donax in 345 cells, thus substantially expanding beyond the now two-centuries-old field-derived record, which reported A. donax in only 216 cells. Among the field occurrence cells, 81.1% were confirmed by SVI prospection to be consistent with species presence. In addition, we recorded, by SVI prospection, 752 absences, i.e. cells where A. donax was considered absent. We also compared the outcomes of climatic niche modeling based on SVI data against those based on field data. Using generalized linear models fitted with bioclimatic predictors, we found SVI data to provide far more compelling results in terms of niche modeling than field data as classically used in species distribution modeling (SDM). This original, cost- and time-effective method provides the means to accurately locate highly visible taxa, reinforce absence data, and predict species distribution without long and expensive in situ prospection. At this time, the majority of available SVI data is restricted to human-disturbed environments that have road networks. However, SVI is becoming increasingly available in natural areas, which means the technique has considerable potential to become an important factor in future biodiversity studies. PMID:26751565
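A generalized linear model of presence/absence against a single centred bioclimatic covariate can be sketched as follows (synthetic data and names are ours; the study fitted multiple bioclimatic predictors):

```python
import math

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """Logistic GLM fitted by gradient ascent on the Bernoulli
    log-likelihood: P(presence) = 1 / (1 + exp(-(b0 + b1 * x)))."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # score w.r.t. intercept
            g1 += (yi - p) * xi   # score w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic presences (1) at high values of a centred climate covariate.
xs = [-4.0, -3.0, -2.0, -1.0, 1.0, 2.0, 3.0, 4.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

Fitting the same GLM to SVI-derived presences and absences versus field-derived records is what the comparison in the paper amounts to.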

  13. Application of 3D triangulations of airborne laser scanning data to estimate boreal forest leaf area index

    NASA Astrophysics Data System (ADS)

    Majasalmi, Titta; Korhonen, Lauri; Korpela, Ilkka; Vauhkonen, Jari

    2017-07-01

We propose 3D triangulations of airborne laser scanning (ALS) point clouds as a new approach to derive 3D canopy structures and to estimate forest canopy effective LAI (LAIe). Computational geometry and topological connectivity were employed to filter the triangulations to yield a quasi-optimal relationship with the field-measured LAIe. The optimal filtering parameters were predicted based on ALS height metrics, emulating the production of maps of LAIe and canopy volume for large areas. The LAIe from triangulations was validated against field-measured LAIe and compared with a reference LAIe calculated from ALS data using a logarithmic model based on Beer's law. Canopy transmittance was estimated using the All Echo Cover Index (ACI), and the mean projection of unit foliage area (β) was obtained using no-intercept regression with field-measured LAIe. We investigated the influence of species and season on the triangulated LAIe and demonstrated the relationship between triangulated LAIe and canopy volume. Our data come from 115 forest plots located in the southern boreal forest area of Finland, and for each plot three different ALS datasets were available to apply the triangulations. The triangulation approach was found applicable for both leaf-on and leaf-off datasets after initial calibration. Results showed that the agreement between LAIe from triangulations and field-measured values was best for the highest pulse-density data (RMSE = 0.63, coefficient of determination (R2) = 0.53). Yet, the LAIe calculated using the ACI index agreed better with the field-measured LAIe (RMSE = 0.53 and R2 = 0.70). The best models to predict the optimal alpha value contained the ACI index, which indicates that within-crown transmittance is accounted for by the triangulation approach.
The cover indices may be recommended for retrieving LAIe only, but for applications which require more sophisticated information on canopy shape and volume, such as radiative transfer models, the triangulation approach may be preferred.
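One plausible reading of the Beer's-law reference model above is sketched below; we assume the cover index approximates intercepted radiation, so transmittance T = 1 - ACI and LAIe = -ln(T)/β. The function name and this exact inversion are our assumptions, not the paper's code.

```python
import math

def lai_effective(aci, beta):
    """Invert Beer's law for effective LAI. Assumes the All Echo Cover
    Index approximates interception, so canopy transmittance T = 1 - ACI
    and LAIe = -ln(T) / beta, where beta is the mean projection of unit
    foliage area obtained by no-intercept regression."""
    t = 1.0 - aci
    if not 0.0 < t <= 1.0:
        raise ValueError("ACI must lie in [0, 1)")
    return -math.log(t) / beta

lai_sparse = lai_effective(0.3, beta=0.5)
lai_dense = lai_effective(0.8, beta=0.5)
```

The inversion is exact for this assumed forward model: generating an ACI from a known LAIe via T = exp(-β·LAIe) recovers the same LAIe.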

  14. A Modeling Study of Oceanic Response to Daily and Monthly Surface Forcing

    NASA Technical Reports Server (NTRS)

    Sui, Chung-Hsiung; Li, Xiao-Fan; Rienecker, Michele M.; Lau, William K.-M.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The goal of this study is to investigate the effect of high-frequency surface forcing (wind stresses and heat fluxes) on upper-ocean response. We use the reduced-gravity quasi-isopycnal ocean model by Schopf and Loughe (1995) for this study. Two experiments are performed: one with daily and the other with monthly surface forcing. The two experiments are referred to as DD and MM, respectively. The daily surface wind stress is produced from the SSM/I wind data (Atlas et al. 1991) using the drag coefficient of Large and Pond (1982). The surface latent and sensible heat fluxes are estimated using the atmospheric mixed layer model by Seager et al. (1995) with the time-varying air temperature and specific humidity from the NCEP-NCAR reanalysis (Kalnay et al. 1996). The radiation is based on climatological shortwave radiation from the Earth Radiation Budget Experiment (ERBE) [Harrison et al. 1993] and the daily GEWEX SRB data. The ocean model domain is restricted to the Pacific Ocean with realistic land boundaries. At the southern boundary the model temperature and salinity are relaxed to the Levitus (1994) climatology. The time-mean SST distribution from MM is close to the observed SST climatology while the mean SST field from DD is about 1.5 C cooler. To identify the responsible processes, we examined the mean heat budgets and the heat balance during the first year (when the difference developed) in the two experiments. The analysis reveals that this is contributed by two factors. One is the difference in latent heat flux. The other is the difference in mixing processes. To further evaluate the responsible processes, we repeated the DD experiment by reducing the based vertical diffusion from 1e-4 to 0.5e-5. The resultant SST field becomes quite closer to the observed SST field. SST variability from the two experiments is generally similar, but the equatorial SST differences between the two experiments show interannual variations. 
We are investigating the possible mechanisms responsible for the different responses.

  15. Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu

This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated with state-constrained problems. Various test cases and numerical results are presented.
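The alternating direction method of multipliers on an augmented Lagrangian can be illustrated on a toy scalar problem (our sketch, unrelated to the congestion model's actual operators): minimize ½(x - a)² + w|z| subject to x = z.

```python
def admm_scalar(a=3.0, weight=1.0, rho=1.0, iters=200):
    """ADMM on the augmented Lagrangian of
    min 0.5*(x - a)**2 + weight*|z|  subject to  x = z:
    alternate a quadratic solve in x, a soft-threshold in z, and a
    (scaled) multiplier update in u."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        v = x + u
        z = max(0.0, abs(v) - weight / rho) * (1.0 if v >= 0 else -1.0)
        u += x - z                               # dual (multiplier) update
    return x, z

x_opt, z_opt = admm_scalar()
```

For a = 3 and unit weight the iterates converge to the soft-thresholded solution x = z = 2; the same alternate-then-update-multiplier pattern carries over to the discretized variational problem of the paper.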

  16. Bottom-up modeling of damage in heterogeneous quasi-brittle solids

    NASA Astrophysics Data System (ADS)

    Rinaldi, Antonio

    2013-03-01

The theoretical modeling of multisite cracking in quasi-brittle materials is a complex damage problem, hard to treat with traditional methods of fracture mechanics because of its multiscale nature and the strain localization induced by microcrack interaction. Macroscale "effective" elastic models can be conveniently applied if a suitable Helmholtz free energy function is identified for a given material scenario. Del Piero and Truskinovsky (Continuum Mech Thermodyn 21:141-171, 2009), among other authors, investigated macroscale continuum solutions capable of matching, in a top-down view, the phenomenology of the damage process for quasi-brittle materials regardless of the microstructure. On the contrary, this paper features a physically based solution method that starts from the direct consideration of the microscale properties and, in a bottom-up view, recovers a continuum elastic description. The procedure is illustrated for a simple one-dimensional problem of this type: a bar stretched by an axial displacement, modeled as a 2D random lattice of decohesive spring elements of finite strength. The (microscale) data from simulations are used to identify the "exact" (macro-) damage parameter and to build up the (macro-) Helmholtz function for the equivalent elastic model, bridging to the macroscale approach of Del Piero and Truskinovsky. The elastic approach, coupled with microstructural knowledge, becomes a more powerful tool to reproduce a broad class of macroscopic material responses by changing the convexity-concavity of the Helmholtz energy. The analysis points out that mean-field statistics are appropriate prior to damage localization, but max-field statistics are better suited in the softening regime up to failure, where microstrain fluctuation needs to be incorporated in the continuum model. This observation is of consequence for revising mean-field damage models from the literature and for calibrating Nth-gradient continuum models.

  17. Advanced prior modeling for 3D bright field electron tomography

    NASA Astrophysics Data System (ADS)

    Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.

    2015-03-01

    Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
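The plug-and-play decoupling can be sketched in one dimension: an ADMM-style loop whose prior step is any denoiser plugged in as a black box. Here a 3-tap moving average stands in for 3D non-local means; all names and parameters are illustrative.

```python
def box_denoise(sig):
    """Plug-in prior: 3-tap moving average (a stand-in for NLM or BM3D)."""
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def pnp_restore(y, denoise, rho=1.0, iters=50):
    """ADMM-style plug-and-play loop: the data-fidelity step is a
    closed-form quadratic solve, and the prior step is the denoiser,
    which never has to be written down as a cost function."""
    n = len(y)
    x, v, u = list(y), list(y), [0.0] * n
    for _ in range(iters):
        x = [(y[i] + rho * (v[i] - u[i])) / (1.0 + rho) for i in range(n)]
        v = denoise([x[i] + u[i] for i in range(n)])
        u = [u[i] + x[i] - v[i] for i in range(n)]
    return x

clean = [1.0] * 20
noisy = [c + (0.3 if i % 2 == 0 else -0.3) for i, c in enumerate(clean)]
restored = pnp_restore(noisy, box_denoise)
```

Swapping `box_denoise` for 3D non-local means gives the prior-model role it plays in the paper's MBIR framework.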

  18. Identification of Hot Moments and Hot Spots for Real-Time Adaptive Control of Multi-scale Environmental Sensor Networks

    NASA Astrophysics Data System (ADS)

    Wietsma, T.; Minsker, B. S.

    2012-12-01

Increased sensor throughput combined with decreasing hardware costs has led to a disruptive growth in data volume. This disruption, popularly termed "the data deluge," has placed new demands for cyberinfrastructure and information technology skills among researchers in many academic fields, including the environmental sciences. Adaptive sampling has been well established as an effective means of improving network resource efficiency (energy, bandwidth) without sacrificing sample set quality relative to traditional uniform sampling. However, using adaptive sampling for the explicit purpose of improving resolution over events -- situations displaying intermittent dynamics and unique hydrogeological signatures -- is relatively new. In this paper, we define hot spots and hot moments in terms of sensor signal activity as measured through discrete Fourier analysis. Following this frequency-based approach, we apply the Nyquist-Shannon sampling theorem, a fundamental contribution from signal processing that led to the field of information theory, to the analysis of uni- and multivariate environmental signal data. In the scope of multi-scale environmental sensor networks, we present several sampling control algorithms, derived from the Nyquist-Shannon theorem, that operate at local (field sensor), regional (base station for aggregation of field sensor data), and global (Cloud-based, computationally intensive models) scales. Evaluated over soil moisture data, results indicate significantly greater sample density during precipitation events while reducing overall sample volume. Using these algorithms as indicators rather than control mechanisms, we also discuss opportunities for spatio-temporal modeling as a tool for planning/modifying sensor network deployments.
[Figures: a locally adaptive model based on the Nyquist-Shannon sampling theorem; Pareto frontiers for local, regional, and global models relative to uniform sampling, with objectives (1) overall sampling efficiency and (2) sampling efficiency during hot moments identified using a heuristic approach.]
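The frequency-based definition of activity can be sketched as a control rule: estimate the dominant frequency of the last window by DFT and sample at a safety margin above twice that frequency (the Nyquist-Shannon bound). This is a deliberately simple sketch of our own; a deployable controller would use band energy and thresholds rather than a single peak.

```python
import cmath, math

def dominant_frequency(samples, fs):
    """Largest non-DC DFT magnitude of a uniformly sampled window,
    returned in hertz (direct O(n^2) DFT for clarity)."""
    n = len(samples)
    best_mag, best_k = -1.0, 0
    for k in range(1, n // 2 + 1):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_mag, best_k = abs(s), k
    return best_k * fs / n

def adaptive_rate(samples, fs, margin=2.5):
    """Nyquist-Shannon control rule: sample faster than twice the
    dominant frequency of the last window, with a safety margin."""
    return margin * dominant_frequency(samples, fs)

fs = 100.0
window = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(100)]
rate = adaptive_rate(window, fs)
```

During quiet periods the dominant frequency, and hence the commanded rate, drops, which is the sample-volume saving the paper reports.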

  19. Core surface magnetic field evolution 2000-2010

    NASA Astrophysics Data System (ADS)

    Finlay, C. C.; Jackson, A.; Gillet, N.; Olsen, N.

    2012-05-01

    We present new dedicated core surface field models spanning the decade from 2000.0 to 2010.0. These models, called gufm-sat, are based on CHAMP, Ørsted and SAC-C satellite observations along with annual differences of processed observatory monthly means. A spatial parametrization of spherical harmonics up to degree and order 24 and a temporal parametrization of sixth-order B-splines with 0.25 yr knot spacing is employed. Models were constructed by minimizing an absolute deviation measure of misfit along with measures of spatial and temporal complexity at the core surface. We investigate traditional quadratic or maximum entropy regularization in space, and second or third time derivative regularization in time. Entropy regularization allows the construction of models with approximately constant spectral slope at the core surface, avoiding both the divergence characteristic of the crustal field and the unrealistic rapid decay typical of quadratic regularization at degrees above 12. We describe in detail aspects of the models that are relevant to core dynamics. Secular variation and secular acceleration are found to be of lower amplitude under the Pacific hemisphere where the core field is weaker. Rapid field evolution is observed under the eastern Indian Ocean associated with the growth and drift of an intense low latitude flux patch. We also find that the present axial dipole decay arises from a combination of subtle changes in the southern hemisphere field morphology.

  20. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    NASA Astrophysics Data System (ADS)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

As the platform-based horizontal well production mode has been widely applied in the petroleum industry, building a reliable fine reservoir structure model from horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points for well deviation correction yields inaccurate formation depth information, which makes it hard to build a fine structure model. To solve this problem, a method of fine reservoir structure modeling based on 3D visualized stratigraphic correlation among horizontal wells is proposed. The method increases the accuracy of the estimated depths of the penetration points and can also effectively predict the top and bottom interfaces in the horizontal penetrating section. Moreover, it greatly increases both the number of depth data points available and their accuracy, which achieves the goal of building a reliable fine reservoir structure model from stratigraphic correlation among horizontal wells. Using this method, four 3D fine structure layer models have been successfully built for a specimen shale gas field with the platform-based horizontal well production mode. The shale gas field is located to the east of the Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.

  1. How to Define the Mean Square Amplitude of Solar Wind Fluctuations With Respect to the Local Mean Magnetic Field

    NASA Astrophysics Data System (ADS)

    Podesta, John J.

    2017-12-01

    Over the last decade it has become popular to analyze turbulent solar wind fluctuations with respect to a coordinate system aligned with the local mean magnetic field. This useful analysis technique has provided new information and new insights about the nature of solar wind fluctuations and provided some support for phenomenological theories of MHD turbulence based on the ideas of Goldreich and Sridhar. At the same time it has drawn criticism suggesting that the use of a scale-dependent local mean field is somehow inconsistent or irreconcilable with traditional analysis techniques based on second-order structure functions and power spectra that, for stationary time series, are defined with respect to the constant (scale-independent) ensemble average magnetic field. Here it is shown that for fluctuations with power law spectra, such as those observed in solar wind turbulence, it is possible to define the local mean magnetic field in a special way such that the total mean square amplitude (trace amplitude) of turbulent fluctuations is approximately the same, scale by scale, as that obtained using traditional second-order structure functions or power spectra. This fact should dispel criticism concerning the physical validity or practical usefulness of the local mean magnetic field in these applications.
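    As a toy illustration of the quantities involved (a synthetic random-walk series stands in for real solar wind data; none of this reproduces the paper's analysis), the second-order structure function and a scale-dependent local mean can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for one magnetic field component: a random walk,
# whose power spectrum is a power law (roughly f^-2).
b = np.cumsum(rng.standard_normal(4096))

def structure_function(x, lag):
    """Second-order structure function S2(lag) = <(x(t+lag) - x(t))^2>."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**2)

def local_mean(x, lag):
    """Scale-dependent local mean: boxcar average over a window of length
    `lag`, the time-series analogue of the local mean magnetic field."""
    kernel = np.ones(lag) / lag
    return np.convolve(x, kernel, mode='valid')

# For a random walk S2(lag) grows linearly with lag.
s2_small, s2_large = structure_function(b, 8), structure_function(b, 64)
print(s2_large / s2_small)   # roughly 64/8 = 8 for this spectrum
```

The paper's point is that, for such power-law spectra, a suitably defined scale-dependent local mean yields fluctuation amplitudes that agree scale by scale with the structure-function estimate.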

  2. Model of a thin film optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.

  3. Hamiltonian closures in fluid models for plasmas

    NASA Astrophysics Data System (ADS)

    Tassi, Emanuele

    2017-11-01

    This article reviews recent activity on the Hamiltonian formulation of fluid models for plasmas in the non-dissipative limit, with emphasis on the relations between the fluid closures adopted for the different models and the Hamiltonian structures. The review focuses on results obtained during the last decade, but a few classical results are also described, in order to illustrate connections with the most recent developments. With the hope of making the review accessible not only to specialists in the field, an introduction to the mathematical tools applied in the Hamiltonian formalism for continuum models is provided. Subsequently, we review the Hamiltonian formulation of models based on the magnetohydrodynamics description, including those based on the adiabatic and double adiabatic closure. It is shown how Dirac's theory of constrained Hamiltonian systems can be applied to impose the incompressibility closure on a magnetohydrodynamic model and how an extended version of barotropic magnetohydrodynamics, accounting for two-fluid effects, is amenable to a Hamiltonian formulation. Hamiltonian reduced fluid models, valid in the presence of a strong magnetic field, are also reviewed. In particular, reduced magnetohydrodynamics and models assuming cold ions and different closures for the electron fluid are discussed. Hamiltonian models relaxing the cold-ion assumption are then introduced. These include models where finite Larmor radius effects are added by means of the gyromap technique, and gyrofluid models. Numerical simulations of Hamiltonian reduced fluid models investigating the phenomenon of magnetic reconnection are illustrated. The last part of the review concerns recent results based on the derivation of closures preserving a Hamiltonian structure, based on the Hamiltonian structure of parent kinetic models. 
The identification of such closures for fluid models derived from kinetic systems based on the Vlasov and drift-kinetic equations is presented, and connections with the previously discussed fluid models are pointed out.

  4. Multi-Decadal Records of Stratospheric Composition and Their Relationship to Stratospheric Circulation Change

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Strahan, Susan E.; Oman, Luke D.; Stolarski, Richard S.

    2017-01-01

    Constituent evolution for 1990-2015 simulated using the Global Modeling Initiative chemistry and transport model driven by meteorological fields from the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2) is compared with three sources of observations: ground-based column measurements of HNO3 and HCl from two stations in the Network for the Detection of Atmospheric Composition Change (NDACC, 1990- ongoing), profiles of CH4 from the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite (UARS, 1992-2005), and profiles of N2O from the Microwave Limb Sounder on the Earth Observing System satellite Aura (2005- ongoing). The differences between observed and simulated values are shown to be time dependent, with better agreement after 2000 compared with the prior decade. Furthermore, the differences between observed and simulated HNO3 and HCl columns are shown to be correlated with each other, suggesting that issues with the simulated transport and mixing cause the differences during the 1990s and that these issues are less important during the later years. Because the simulated fields are related to mean age in the lower stratosphere, we use these comparisons to evaluate the time dependence of mean age. The ongoing NDACC column observations provide critical information necessary to substantiate trends in mean age obtained using fields from MERRA-2 or any other reanalysis products.

  5. A transport equation for the scalar dissipation in reacting flows with variable density: First results

    NASA Technical Reports Server (NTRS)

    Mantel, T.

    1993-01-01

    Although the different regimes of premixed combustion are not well defined, most of the recent developments in turbulent combustion modeling take place in the so-called flamelet regime. The goal of these models is to give a realistic expression for the mean reaction rate (w). Several methods can be used to estimate (w). Bray and coworkers (Libby & Bray 1980, Bray 1985, Bray & Libby 1986) express the instantaneous reaction rate by means of a flamelet library and a frequency which describes the local interaction between the laminar flamelets and the turbulent flowfield. Alternatively, the mean reaction rate can be directly connected to the flame surface density (Sigma). This quantity can be obtained from the transport equation of the coherent flame model initially proposed by Marble & Broadwell 1977 and developed elsewhere. The mean reaction rate, (w), can also be estimated from the evolution of an arbitrary scalar field G(x, t) = G(sub O) which represents the flame sheet; G(x, t) is obtained from the G-equation proposed by Williams 1985, Kerstein et al. 1988 and Peters 1993. Yet another possibility was proposed in a recent study by Mantel & Borghi 1991, in which a transport equation for the mean dissipation rate (epsilon(sub c)) of the progress variable c is used to determine (w). In their model, Mantel & Borghi 1991 considered a medium with constant density and constant diffusivity in deriving the transport equation for (epsilon(sub c)). A comparison of different flamelet models made by Duclos et al. 1993 shows the realistic behavior of this model even in the case of constant density. Our objective in this report is to present preliminary results on the study of this equation in the case of variable density and variable diffusivity. Assumptions of constant pressure and a Lewis number equal to unity allow us to simplify the equation significantly. 
A systematic order of magnitude analysis based on adequate scale relations is performed on each term of the equation. As in the case of constant density and constant diffusivity, the effects of stretching of the scalar field by the turbulent strain field, of local curvature, and of chemical reactions are predominant. In this preliminary work, we suggest closure models for certain terms, which will be validated after comparisons with DNS data.

  6. Groundwater availability in the Crouch Branch and McQueen Branch aquifers, Chesterfield County, South Carolina, 1900-2012

    USGS Publications Warehouse

    Campbell, Bruce G.; Landmeyer, James E.

    2014-01-01

    Chesterfield County is located in the northeastern part of South Carolina along the southern border of North Carolina and is primarily underlain by unconsolidated sediments of Late Cretaceous age and younger of the Atlantic Coastal Plain. Approximately 20 percent of Chesterfield County is in the Piedmont Physiographic Province, and this area of the county is not included in this study. These Atlantic Coastal Plain sediments compose two productive aquifers: the Crouch Branch aquifer that is present at land surface across most of the county and the deeper, semi-confined McQueen Branch aquifer. Most of the potable water supplied to residents of Chesterfield County is produced from the Crouch Branch and McQueen Branch aquifers by a well field located near McBee, South Carolina, in the southwestern part of the county. Overall, groundwater availability is good to very good in most of Chesterfield County, especially the area around and to the south of McBee, South Carolina. The eastern part of Chesterfield County has less abundant groundwater resources, but they are generally adequate for domestic purposes. The primary purpose of this study was to determine groundwater-flow rates, flow directions, and changes in water budgets over time for the Crouch Branch and McQueen Branch aquifers in the Chesterfield County area. This goal was accomplished by using the U.S. Geological Survey finite-difference MODFLOW groundwater-flow code to construct and calibrate a groundwater-flow model of the Atlantic Coastal Plain of Chesterfield County. The model was created with a uniform grid size of 300 by 300 feet to facilitate a more accurate simulation of groundwater-surface-water interactions. The model consists of 617 rows from north to south extending about 35 miles and 884 columns from west to east extending about 50 miles, yielding a total area of about 1,750 square miles. 
However, the active part of the modeled area, or the part where groundwater flow is simulated, totaled about 1,117 square miles. Major types of data used as input to the model included groundwater levels, groundwater-use data, and hydrostratigraphic data, along with estimates and measurements of stream base flows made specifically for this study. The groundwater-flow model was calibrated to groundwater-level and stream base-flow conditions from 1900 to 2012 using 39 stress periods. The model was calibrated with an automated parameter-estimation approach using the computer program PEST, and the model used regularized inversion and pilot points. The groundwater-flow model was calibrated using field data that included groundwater levels that had been collected between 1940 and 2012 from 239 wells and base-flow measurements from 44 locations distributed within the study area. To better understand recharge and inter-aquifer interactions, seven wells were equipped with continuous groundwater-level recording equipment during the course of the study, between 2008 and 2012. These water levels were included in the model calibration process. The observed groundwater levels were compared to the simulated ones, and acceptable calibration fits were achieved. Root mean square error for the simulated groundwater levels compared to all observed groundwater levels was 9.3 feet for the Crouch Branch aquifer and 8.6 feet for the McQueen Branch aquifer. The calibrated groundwater-flow model was then used to calculate groundwater budgets for the entire study area and for two sub-areas. The sub-areas are the Alligator Rural Water and Sewer Company well field near McBee, South Carolina, and the Carolina Sandhills National Wildlife Refuge acquisition boundary area. For the overall model area, recharge rates vary from 56 to 1,679 million gallons per day (Mgal/d) with a mean of 737 Mgal/d over the simulation period (1900–2012). 
The simulated water budget for the streams and rivers varies from 653 to 1,127 Mgal/d with a mean of 944 Mgal/d. The simulated “storage-in term” ranges from 0 to 565 Mgal/d with a mean of 276 Mgal/d. The simulated “storage-out term” has a range of 0 to 552 Mgal/d with a mean of 77 Mgal/d. Groundwater budgets for the McBee, South Carolina, area and the Carolina Sandhills National Wildlife Refuge acquisition area had similar results. An analysis of the effects of past and current groundwater withdrawals on base flows in the McBee area indicated a negligible effect of pumping from the Alligator Rural Water and Sewer well field on local stream base flows. Simulated base flows for 2012 for selected streams in and around the McBee area were similar with and without simulated groundwater withdrawals from the well field. Removing all pumping from the model for the entire simulation period (1900–2012) produces a negligible difference in increased base flow for the selected streams. The 2012 flow for Lower Alligator Creek was 5.04 Mgal/d with the wells pumping and 5.08 Mgal/d without the wells pumping; this represents the largest difference in simulated flows for the six streams.

  7. The effect of turbulence strength on meandering field lines and Solar Energetic Particle event extents

    NASA Astrophysics Data System (ADS)

    Laitinen, Timo; Effenberger, Frederic; Kopp, Andreas; Dalla, Silvia

    2018-02-01

    Insights into the processes of Solar Energetic Particle (SEP) propagation are essential for understanding how solar eruptions affect the radiation environment of near-Earth space. SEP propagation is influenced by turbulent magnetic fields in the solar wind, resulting in stochastic transport of the particles from their acceleration site to Earth. While the conventional approach for SEP modelling focuses mainly on the transport of particles along the mean Parker spiral magnetic field, multi-spacecraft observations suggest that the cross-field propagation shapes the SEP fluxes at Earth strongly. However, adding cross-field transport of SEPs as spatial diffusion has been shown to be insufficient in modelling the SEP events without use of unrealistically large cross-field diffusion coefficients. Recently, Laitinen et al. [ApJL 773 (2013b); A&A 591 (2016)] demonstrated that the early-time propagation of energetic particles across the mean field direction in turbulent fields is not diffusive, with the particles propagating along meandering field lines. This early-time transport mode results in fast access of the particles across the mean field direction, in agreement with the SEP observations. In this work, we study the propagation of SEPs within the new transport paradigm, and demonstrate the significance of turbulence strength on the evolution of the SEP radiation environment near Earth. We calculate the transport parameters consistently using a turbulence transport model, parametrised by the SEP parallel scattering mean free path at 1 AU, λ∥*, and show that the parallel and cross-field transport are connected, with conditions resulting in slow parallel transport corresponding to wider events. We find a scaling σφ,max ∝ (1/λ∥*)^(1/4) for the Gaussian fitting of the longitudinal distribution of maximum intensities. The longitudes with highest intensities are shifted towards the west for strong scattering conditions. 
Our results emphasise the importance of understanding both the SEP transport and the interplanetary turbulence conditions for modelling and predicting the SEP radiation environment at Earth.
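    The Gaussian fitting of longitudinal peak-intensity distributions, and the quoted scaling, can be sketched as follows (the profile shape, noise level, and all parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(phi, amp, phi0, sigma):
    """Gaussian model for the longitudinal distribution of peak intensities."""
    return amp * np.exp(-(phi - phi0)**2 / (2.0 * sigma**2))

phi = np.linspace(-180, 180, 73)           # observer longitudes (deg)
truth = gaussian(phi, 1.0, -15.0, 40.0)    # hypothetical event profile
rng = np.random.default_rng(1)
obs = truth * (1 + 0.05 * rng.standard_normal(phi.size))

popt, _ = curve_fit(gaussian, phi, obs, p0=[1.0, 0.0, 30.0])
sigma_fit = abs(popt[2])

# The reported scaling sigma ∝ (1/lambda_par)^(1/4): quadrupling the parallel
# mean free path (weaker scattering) narrows the event by a factor sqrt(2).
ratio = (1.0 / 4.0) ** 0.25
print(round(sigma_fit, 1), round(ratio, 3))
```

The fitted width recovers the assumed 40 degrees, and the ratio shows the sense of the scaling: strong scattering (small λ∥*) gives wider longitudinal event extents.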

  8. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    Two new approaches are proposed and developed for making time and space dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed phase region at the -15 C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. 
However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.
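    A schematic sketch of the two proxies and the blend described above (the field names, blend weights, and calibration constant are placeholders, not the paper's calibrated values):

```python
import numpy as np

# Synthetic stand-ins for WRF output fields on a model grid.
rng = np.random.default_rng(2)
ny, nx = 50, 50
w = np.abs(rng.standard_normal((ny, nx)))          # updraft speed at -15 C (m/s)
q_ice = np.abs(rng.standard_normal((ny, nx)))      # precipitating ice mixing ratio
path_ice = np.abs(rng.standard_normal((ny, nx)))   # column-integrated ice (kg/m^2)

# Method 1: upward ice flux at the -15 C level. Method 2: vertically
# integrated ice. Each proxy is scaled so its domain peak matches an
# assumed observed peak flash-rate density.
obs_peak = 30.0                                    # flashes/(km^2 min), assumed
f1 = w * q_ice
f2 = path_ice
f1 *= obs_peak / f1.max()
f2 *= obs_peak / f2.max()

# Blend: weighted toward method 1 (temporal skill) with a contribution from
# method 2 (areal coverage); the weights here are placeholders.
blend = 0.7 * f1 + 0.3 * f2
print(round(f1.max(), 1), round(blend.max(), 1))
```

The domain-peak calibration step mirrors the paper's approach of matching peak simulated proxy values to peak observed flash-rate densities.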

  9. Aggregated modeling of mean-field-controlled electric water heaters for load management in a grid (Modelisation agregee de chauffe-eau electriques commandes par champ moyen pour la gestion des charges dans un reseau)

    NASA Astrophysics Data System (ADS)

    Losseau, Romain

    The ongoing energy transition will entail important changes in the way we use and manage energy. In this context, smart grids are expected to play a significant part through the use of intelligent storage techniques. Initiated in 2014, the SmartDesc project follows this trend by creating an innovative load management program that exploits the thermal storage capacity of the electric water heaters already present in residential households. The device control algorithms rely on the recent theory of mean field games to achieve a decentralized control of water heater temperatures that produces an aggregate optimal trajectory, designed to smooth the electric demand of a neighborhood. Currently, this theory does not include the power and temperature constraints imposed by the tank heating system or required for the user's safety and comfort. A trajectory violating these constraints would therefore not be feasible and would not deliver the forecast load smoothing. This master's thesis presents a method to detect the non-feasibility of a target trajectory, based on the Kolmogorov equations associated with the controlled electric water heaters, and suggests a way to correct it so as to make it achievable under constraints. First, a model of the water heaters under temperature constraints, based on partial differential equations, is presented. A numerical scheme is then developed to simulate it and applied to the mean field control. The results of the mean field control with and without constraints are compared, and non-feasibilities of the target trajectory are highlighted when violations occur. The last part of the thesis is dedicated to an accelerated version of the mean field computation and to a method of correcting the target trajectory so as to enlarge as much as possible the set of achievable profiles.
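    A minimal single-tank sketch of the constrained dynamics discussed above (the parameter values and the simple first-order thermal model are assumptions for illustration, not the thesis's PDE model):

```python
import numpy as np

# One electric water heater as a first-order thermal model with hard
# comfort/safety temperature constraints and a bounded heating duty;
# this is the kind of unit the mean field control aggregates.
T_MIN, T_MAX = 50.0, 65.0      # comfort/safety band (deg C), assumed
P_RATED = 4.5                  # heating power (kW), assumed
C = 0.3                        # thermal capacity (kWh/deg C), assumed
UA = 0.01                      # standby loss coefficient (kW/deg C), assumed
T_AMB = 20.0                   # ambient temperature (deg C)
dt = 1.0 / 60.0                # one-minute time step (hours)

def step(T, u):
    """Advance the tank temperature one step; u in [0, 1] is the heating duty."""
    u = np.clip(u, 0.0, 1.0)                    # power constraint
    T = T + dt * (u * P_RATED - UA * (T - T_AMB)) / C
    return float(np.clip(T, T_MIN, T_MAX))      # comfort/safety constraint

T = 55.0
for _ in range(120):           # two hours at a constant 20% duty
    T = step(T, 0.2)
print(round(T, 2))
```

A target aggregate trajectory is non-feasible precisely when tracking it would require duties outside [0, 1] or would push temperatures against these clips, which is what the thesis's Kolmogorov-equation test detects.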

  10. DTU candidate field models for IGRF-12 and the CHAOS-5 geomagnetic field model

    NASA Astrophysics Data System (ADS)

    Finlay, Christopher C.; Olsen, Nils; Tøffner-Clausen, Lars

    2015-07-01

    We present DTU's candidate field models for IGRF-12 and the parent field model from which they were derived, CHAOS-5. Ten months of magnetic field observations from ESA's Swarm mission, together with up-to-date ground observatory monthly means, were used to supplement the data sources previously used to construct CHAOS-4. The internal field part of CHAOS-5, from which our IGRF-12 candidate models were extracted, is time-dependent up to spherical harmonic degree 20 and involves sixth-order splines with a 0.5 year knot spacing. In CHAOS-5, compared with CHAOS-4, we update only the low-degree internal field model (degrees 1 to 24) and the associated external field model. The high-degree internal field (degrees 25 to 90) is taken from the same model CHAOS-4h, based on low-altitude CHAMP data, which was used in CHAOS-4. We find that CHAOS-5 is able to consistently fit magnetic field data from six independent low Earth orbit satellites: Ørsted, CHAMP, SAC-C and the three Swarm satellites (A, B and C). It also adequately describes the secular variation measured at ground observatories. CHAOS-5 thus contributes to an initial validation of the quality of the Swarm magnetic data, in particular demonstrating that Huber weighted rms model residuals to Swarm vector field data are lower than those to Ørsted and CHAMP vector data (when either one or two star cameras were operating). CHAOS-5 shows three pulses of secular acceleration at the core surface over the past decade; the 2006 and 2009 pulses have previously been documented, but the 2013 pulse has only recently been identified. The spatial signature of the 2013 pulse at the core surface, under the Atlantic sector where it is strongest, is well correlated with the 2006 pulse, but anti-correlated with the 2009 pulse.
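    The Huber-weighted rms residuals mentioned above can be illustrated with a small sketch (the tuning constant, scale estimate, and synthetic residuals are illustrative assumptions, not the CHAOS-5 processing choices):

```python
import numpy as np

def huber_weighted_rms(residuals, c=1.5):
    """Huber-weighted rms: unit weight inside c robust standard deviations,
    down-weight w = c/|z| outside, so a few outliers do not dominate."""
    r = np.asarray(residuals, dtype=float)
    scale = np.median(np.abs(r)) / 0.6745      # robust sigma estimate
    z = r / scale
    w = np.where(np.abs(z) <= c, 1.0, c / np.abs(z))
    return np.sqrt(np.sum(w * r**2) / np.sum(w))

rng = np.random.default_rng(3)
clean = rng.standard_normal(1000) * 2.0        # well-behaved residuals (say, nT)
outliers = np.concatenate([clean, [50.0, -60.0, 80.0]])

rms_plain = np.sqrt(np.mean(outliers**2))
rms_huber = huber_weighted_rms(outliers)
print(round(rms_plain, 2), round(rms_huber, 2))
```

The Huber-weighted statistic stays near the clean noise level while the plain rms is inflated by the three outliers, which is why robust misfit measures are used when comparing satellite vector data quality.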

  11. Schrödinger Approach to Mean Field Games

    NASA Astrophysics Data System (ADS)

    Swiecicki, Igor; Gobron, Thierry; Ullmo, Denis

    2016-03-01

    Mean field games (MFG) provide a theoretical frame to model socioeconomic systems. In this Letter, we study a particular class of MFG that shows strong analogies with the nonlinear Schrödinger and Gross-Pitaevskii equations introduced in physics to describe a variety of physical phenomena. Using this bridge, many results and techniques developed along the years in the latter context can be transferred to the former, which provides both a new domain of application for the nonlinear Schrödinger equation and a new and fruitful approach in the study of mean field games. Utilizing this approach, we analyze in detail a population dynamics model in which the "players" are under a strong incentive to coordinate themselves.

  12. The Use of Mixed Effects Models for Obtaining Low-Cost Ecosystem Carbon Stock Estimates in Mangroves of the Asia-Pacific

    NASA Astrophysics Data System (ADS)

    Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.

    2016-12-01

    Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effects models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm^3 (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects, and thus the soil C model may be most valuable for sites in which field measurements of soil C exist.
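    As a hedged sketch of the model form described above (all coefficients, site counts, and noise levels below are synthetic, and the per-site residual means are a crude least-squares stand-in for a proper mixed-model fit):

```python
import numpy as np

# Synthetic data in the shape of the paper's biomass-C model: fixed effects
# in basal area (BA) and latitude x BA, plus a site-level random intercept.
rng = np.random.default_rng(4)
n_sites, per_site = 8, 25
site = np.repeat(np.arange(n_sites), per_site)
ba = rng.uniform(5, 40, site.size)                 # basal area (m^2/ha)
lat = np.repeat(rng.uniform(-10, 20, n_sites), per_site)
site_effect = np.repeat(rng.normal(0, 8, n_sites), per_site)
c_stock = 20 + 4.0 * ba - 0.05 * lat * ba + site_effect \
          + rng.normal(0, 5, site.size)            # biomass C (Mg/ha)

# Fixed effects by ordinary least squares.
X = np.column_stack([np.ones_like(ba), ba, lat * ba])
beta, *_ = np.linalg.lstsq(X, c_stock, rcond=None)
resid = c_stock - X @ beta

# Crude stand-in for the random intercepts: per-site mean residuals.
u_hat = np.array([resid[site == s].mean() for s in range(n_sites)])
rmse = np.sqrt(np.mean((resid - u_hat[site])**2))
print(round(beta[1], 2), round(rmse, 2))
```

The within-site RMSE after removing the site means illustrates the paper's finding that site-level random effects absorb a substantial share of the variance.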

  13. Minimum Electric Field Exposure for Seizure Induction with Electroconvulsive Therapy and Magnetic Seizure Therapy.

    PubMed

    Lee, Won H; Lisanby, Sarah H; Laine, Andrew F; Peterchev, Angel V

    2017-05-01

    Lowering and individualizing the current amplitude in electroconvulsive therapy (ECT) has been proposed as a means to produce stimulation closer to the neural activation threshold and more focal seizure induction, which could potentially reduce cognitive side effects. However, the effect of current amplitude on the electric field (E-field) in the brain has not been previously linked to the current amplitude threshold for seizure induction. We coupled MRI-based E-field models with amplitude titrations of motor threshold (MT) and seizure threshold (ST) in four nonhuman primates (NHPs) to determine the strength, distribution, and focality of stimulation in the brain for four ECT electrode configurations (bilateral, bifrontal, right-unilateral, and frontomedial) and magnetic seizure therapy (MST) with cap coil on vertex. At the amplitude-titrated ST, the stimulated brain subvolume (23-63%) was significantly less than for conventional ECT with high, fixed current (94-99%). The focality of amplitude-titrated right-unilateral ECT (25%) was comparable to cap coil MST (23%), demonstrating that ECT with a low current amplitude and focal electrode placement can induce seizures with E-field as focal as MST, although these electrode and coil configurations affect specific brain regions differently. Individualizing the current amplitude reduced interindividual variation in the stimulation focality by 40-53% for ECT and 26% for MST, supporting amplitude individualization as a means of dosing especially for ECT. There was an overall significant correlation between the measured amplitude-titrated ST and the prediction of the E-field models, supporting a potential role of these models in dosing of ECT and MST. These findings may guide the development of seizure therapy dosing paradigms with improved risk/benefit ratio.

  14. Static quadrupolar susceptibility for a Blume-Emery-Griffiths model based on the mean-field approximation

    NASA Astrophysics Data System (ADS)

    Pawlak, A.; Gülpınar, G.; Erdem, R.; Ağartıoğlu, M.

    2015-12-01

    The expressions for the dipolar and quadrupolar susceptibilities are obtained within the mean-field approximation of the Blume-Emery-Griffiths model. The temperature and crystal field dependences of the susceptibilities are investigated for two different phase diagram topologies, which occur for K/J = 3 and K/J = 5. Their behavior near the second- and first-order transition points, as well as near multicritical points such as the tricritical, triple, and critical end points, is presented. It is found that, in addition to the jumps connected with the phase transitions, there are broad peaks in the quadrupolar susceptibility. These broad peaks lie on a prolongation of the first-order line from a triple point to a critical point ending the line of first-order transitions between two distinct paramagnetic phases, and it is argued that they are a reminiscence of the very strong quadrupolar fluctuations at the critical point. The results reveal that near ferromagnetic-paramagnetic phase transitions the quadrupolar susceptibility generally shows a jump, whereas near the phase transition between two distinct paramagnetic phases it is edge-like.

  15. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
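    A sketch of the statistical core of such a scheme: estimate a density by KDE and use it for an MMSE (conditional-mean) prediction. The paper's kernel-trick bandwidth estimator is replaced here by Silverman's rule, and the data are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
pixels = rng.normal(128.0, 12.0, 500)          # stand-in samples for a coding block

# KDE with a standard bandwidth rule (Silverman), standing in for the
# paper's kernel-trick bandwidth estimation.
kde = gaussian_kde(pixels, bw_method='silverman')
grid = np.linspace(60, 200, 561)
pdf = kde(grid)

# MMSE prediction of the block value: the mean under the estimated density.
mmse = np.sum(grid * pdf) / np.sum(pdf)
print(round(mmse, 1))
```

For a unimodal density the MMSE estimate lands near the sample mean; the value of the KDE machinery is that it needs no parametric assumption about the block statistics.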

  16. Monthly mean forecast experiments with the GISS model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Atlas, R. M.; Kuo, E.

    1976-01-01

    The GISS general circulation model was used to compute global monthly mean forecasts for January 1973, 1974, and 1975 from initial conditions on the first day of each month and constant sea surface temperatures. Forecasts were evaluated in terms of global and hemispheric energetics, zonally averaged meridional and vertical profiles, forecast error statistics, and monthly mean synoptic fields. Although it generated a realistic mean meridional structure, the model did not adequately reproduce the observed interannual variations in the large scale monthly mean energetics and zonally averaged circulation. The monthly mean sea level pressure field was not predicted satisfactorily, but annual changes in the Icelandic low were simulated. The impact of temporal sea surface temperature variations on the forecasts was investigated by comparing two parallel forecasts for January 1974, one using climatological ocean temperatures and the other observed daily ocean temperatures. The use of daily updated sea surface temperatures produced no discernible beneficial effect.

  17. Tangent map intermittency as an approximate analysis of intermittency in a high dimensional fully stochastic dynamical system: The Tangled Nature model.

    PubMed

    Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto

    2016-12-01

    It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency, and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) model of evolutionary ecology is an ideal candidate for such a study: a significant model, since it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinction, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. This mean-field map, being a local approximation, is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model on macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced by this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of types of species and total population in the TaNa model. The model combines fully deterministic dynamics with instantaneous random parameter jumps at stochastically drawn times. 
In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low-dimensional one appears to be illuminating.
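
    The tangent-map mechanism invoked above can be illustrated with a minimal standalone sketch (a textbook Pomeau-Manneville type-I map of our choosing, not the TaNa mean-field map itself): near tangency the orbit creeps slowly through a narrow channel, producing long laminar phases separated by chaotic bursts.

```python
# Pomeau-Manneville-type map: x_{n+1} = (x_n + u * x_n**z) mod 1.
# Near x = 0 the map is close to tangency, so iterates linger there
# (laminar phases) before being reinjected by the mod operation (bursts).

def pm_map(x, u=1.0, z=2.0):
    return (x + u * x ** z) % 1.0

def laminar_lengths(x0=0.1, n=20000, threshold=0.2):
    """Lengths of consecutive runs with x below the laminar threshold."""
    lengths, run, x = [], 0, x0
    for _ in range(n):
        x = pm_map(x)
        if x < threshold:
            run += 1
        elif run > 0:
            lengths.append(run)
            run = 0
    return lengths

runs = laminar_lengths()   # a broad distribution of laminar-phase lengths
```

    The broad spread of `runs` is the fingerprint of type-I intermittency that the abstract compares against the punctuated dynamics of the full model.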

  18. MEAN-FIELD SOLAR DYNAMO MODELS WITH A STRONG MERIDIONAL FLOW AT THE BOTTOM OF THE CONVECTION ZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipin, V. V.; Kosovichev, A. G.

    2011-09-01

    This paper presents a study of kinematic axisymmetric mean-field dynamo models for the case of meridional circulation with a deep-seated stagnation point and a strong return flow at the bottom of the convection zone. This kind of circulation follows from mean-field models of the angular momentum balance in the solar convection zone. The dynamo models include turbulent sources of the large-scale poloidal magnetic field production due to kinetic helicity and a combined effect due to the Coriolis force and large-scale electric current. In these models the toroidal magnetic field, which is responsible for sunspot production, is concentrated at the bottom of the convection zone and is transported to low-latitude regions by a meridional flow. The meridional component of the poloidal field is also concentrated at the bottom of the convection zone, while the radial component is concentrated in near-polar regions. We show that it is possible for this type of meridional circulation to construct kinematic dynamo models that resemble in some aspects the sunspot magnetic activity cycle. However, in the near-equatorial regions the phase relation between the toroidal and poloidal components disagrees with observations. We also show that the period of the magnetic cycle may not always monotonically decrease with the increase of the meridional flow speed. Thus, for further progress it is important to determine the structure of the meridional circulation, which is one of the critical properties, from helioseismology observations.

  19. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    NASA Astrophysics Data System (ADS)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. 
    The sensitivity of the synthetic fields to the input spectra is assessed by using truncated spectra or model spectra as the input. Analyses show that most of the SGS statistics agree well with those from MTLM fields with DNS spectra as the input. For the mean SGS energy dissipation, some significant deviation is observed. However, it is shown that the deviation can be parametrized by the input energy spectrum, which demonstrates the robustness of the MTLM procedure.
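
    The filtering approach referred to above can be sketched in one dimension. The toy below is our construction, not the MTLM code: a top-hat filter is applied to a synthetic velocity signal, and the SGS "stress" is formed as the difference between the filtered square and the squared filtered field. For a filter with non-negative weights this quantity is pointwise non-negative.

```python
import numpy as np

def box_filter(u, w):
    # Top-hat (moving-average) filter on a periodic 1-D domain:
    # the average of w shifted copies of the signal.
    return sum(np.roll(u, s) for s in range(-(w // 2), w - w // 2)) / w

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * rng.standard_normal(256)   # stand-in velocity field

w = 8
u_bar = box_filter(u, w)
uu_bar = box_filter(u * u, w)
tau = uu_bar - u_bar * u_bar   # 1-D analogue of the SGS stress
```

    In an LES context the same construction, applied componentwise in 3-D, yields the SGS stress tensor and scalar flux whose statistics the paper compares against DNS.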

  20. Reconciling Structural and Thermodynamic Predictions Using All-Atom and Coarse-Grain Force Fields: The Case of Charged Oligo-Arginine Translocation into DMPC Bilayers

    PubMed Central

    2015-01-01

    Using the translocation of short, charged cationic oligo-arginine peptides (mono-, di-, and triarginine) from bulk aqueous solution into model DMPC bilayers, we explore the question of the similarity of thermodynamic and structural predictions obtained from molecular dynamics simulations using all-atom and Martini coarse-grain force fields. Specifically, we estimate potentials of mean force associated with translocation using standard all-atom (CHARMM36 lipid) and polarizable and nonpolarizable Martini force fields, as well as a series of modified Martini-based parameter sets. We find that we are able to reproduce qualitative features of potentials of mean force of single amino acid side chain analogues into model bilayers. In particular, modifications of peptide–water and peptide–membrane interactions allow prediction of free energy minima at the bilayer–water interface as obtained with all-atom force fields. In the case of oligo-arginine peptides, the modified parameter sets predict interfacial free energy minima as well as free energy barriers in almost quantitative agreement with all-atom force field based simulations. Interfacial free energy minima predicted by a modified coarse-grained parameter set are −2.51, −4.28, and −5.42 kcal/mol for mono-, di-, and triarginine; corresponding values from all-atom simulations are −0.83, −3.33, and −3.29 kcal/mol, respectively. We found that a stronger interaction between oligo-arginine and the membrane components and a weaker interaction between oligo-arginine and water are crucial for producing such minima in PMFs using the polarizable CG model. The differences between bulk aqueous and bilayer center states predicted by the modified coarse-grain force field are 11.71, 14.14, and 16.53 kcal/mol, and those from the all-atom model are 6.94, 8.64, and 12.80 kcal/mol; the two sets are of the same order of magnitude. 
    Our simulations also demonstrate a remarkable similarity in the structural aspects of the ensemble of configurations generated using the all-atom and coarse-grain force fields. Both resolutions show that oligo-arginine peptides adopt preferential orientations as they translocate into the bilayer. The guiding theme centers on charged groups maintaining coordination with polar and charged bilayer components as well as local water. We also observe similar behaviors related to membrane deformations. PMID:25290376

  1. Reconciling structural and thermodynamic predictions using all-atom and coarse-grain force fields: the case of charged oligo-arginine translocation into DMPC bilayers.

    PubMed

    Hu, Yuan; Sinha, Sudipta Kumar; Patel, Sandeep

    2014-10-16

    Using the translocation of short, charged cationic oligo-arginine peptides (mono-, di-, and triarginine) from bulk aqueous solution into model DMPC bilayers, we explore the question of the similarity of thermodynamic and structural predictions obtained from molecular dynamics simulations using all-atom and Martini coarse-grain force fields. Specifically, we estimate potentials of mean force associated with translocation using standard all-atom (CHARMM36 lipid) and polarizable and nonpolarizable Martini force fields, as well as a series of modified Martini-based parameter sets. We find that we are able to reproduce qualitative features of potentials of mean force of single amino acid side chain analogues into model bilayers. In particular, modifications of peptide-water and peptide-membrane interactions allow prediction of free energy minima at the bilayer-water interface as obtained with all-atom force fields. In the case of oligo-arginine peptides, the modified parameter sets predict interfacial free energy minima as well as free energy barriers in almost quantitative agreement with all-atom force field based simulations. Interfacial free energy minima predicted by a modified coarse-grained parameter set are -2.51, -4.28, and -5.42 kcal/mol for mono-, di-, and triarginine; corresponding values from all-atom simulations are -0.83, -3.33, and -3.29 kcal/mol, respectively. We found that a stronger interaction between oligo-arginine and the membrane components and a weaker interaction between oligo-arginine and water are crucial for producing such minima in PMFs using the polarizable CG model. The differences between bulk aqueous and bilayer center states predicted by the modified coarse-grain force field are 11.71, 14.14, and 16.53 kcal/mol, and those from the all-atom model are 6.94, 8.64, and 12.80 kcal/mol; the two sets are of the same order of magnitude. 
    Our simulations also demonstrate a remarkable similarity in the structural aspects of the ensemble of configurations generated using the all-atom and coarse-grain force fields. Both resolutions show that oligo-arginine peptides adopt preferential orientations as they translocate into the bilayer. The guiding theme centers on charged groups maintaining coordination with polar and charged bilayer components as well as local water. We also observe similar behaviors related to membrane deformations.
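
    The records above report potentials of mean force in kcal/mol. As a minimal sketch of how a PMF is recovered from sampled configurations — plain Boltzmann inversion of a histogram, not the biased-sampling machinery actually needed for bilayer translocation — the example below (entirely ours) draws samples from a known harmonic well and inverts them:

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Samples from a harmonic "well" F(z) = 0.5*k*z^2: the Boltzmann
# distribution is then Gaussian with variance kT/k.
k_spring = 2.0
rng = np.random.default_rng(2)
z = rng.normal(0.0, np.sqrt(kT / k_spring), 200000)

# Boltzmann inversion: F(z) = -kT * ln p(z) + const.
hist, edges = np.histogram(z, bins=60, range=(-1.5, 1.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
pmf = -kT * np.log(hist[mask])
pmf -= pmf.min()   # anchor the free energy minimum at zero
```

    The recovered curve is quadratic with its minimum at the well center; in the papers' setting the reaction coordinate is the peptide's depth in the bilayer, and the minima and barriers quoted above play the role of the well here.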

  2. Effects of anisotropies in turbulent magnetic diffusion in mean-field solar dynamo models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipin, V. V.; Kosovichev, A. G.

    2014-04-10

    We study how anisotropies of turbulent diffusion affect the evolution of large-scale magnetic fields and the dynamo process on the Sun. The effect of anisotropy is calculated in a mean-field magnetohydrodynamics framework assuming that triple correlations provide relaxation to the turbulent electromotive force (the so-called 'minimal τ-approximation'). We examine two types of mean-field dynamo models: the well-known benchmark flux-transport model and a distributed-dynamo model with a subsurface rotational shear layer. For both models, we investigate effects of the double- and triple-cell meridional circulation, recently suggested by helioseismology and numerical simulations. To characterize the anisotropy effects, we introduce a parameter of anisotropy as the ratio of the radial and horizontal intensities of turbulent mixing. It is found that the anisotropy affects the distribution of magnetic fields inside the convection zone. The concentration of the magnetic flux near the bottom and top boundaries of the convection zone is greater when the anisotropy is stronger. It is shown that the critical dynamo number and the dynamo period approach constant values for large values of the anisotropy parameter. The anisotropy reduces the overlap of toroidal magnetic fields generated in subsequent dynamo cycles, in the time-latitude 'butterfly' diagram. If we assume that sunspots are formed in the vicinity of the subsurface shear layer, then the distributed dynamo model with the anisotropic diffusivity satisfies the observational constraints from helioseismology and is consistent with the value of effective turbulent diffusion estimated from the dynamics of surface magnetic fields.

  3. Loop expansion around the Bethe approximation through the M-layer construction

    NASA Astrophysics Data System (ADS)

    Altieri, Ada; Chiara Angelini, Maria; Lucibello, Carlo; Parisi, Giorgio; Ricci-Tersenghi, Federico; Rizzo, Tommaso

    2017-11-01

    For every physical model defined on a generic graph or factor graph, the Bethe M-layer construction allows building a different model for which the Bethe approximation is exact in the large M limit, and coincides with the original model for M = 1. The 1/M perturbative series is then expressed by a diagrammatic loop expansion in terms of so-called fat diagrams. Our motivation is to study some important second-order phase transitions that do exist on the Bethe lattice, but are either qualitatively different or absent in the corresponding fully connected case. In this case, the standard approach based on a perturbative expansion around the naive mean field theory (essentially a fully connected model) fails. On physical grounds, we expect that when the construction is applied to a lattice in finite dimension there is a small region of the external parameters, close to the Bethe critical point, where strong deviations from mean-field behavior will be observed. In this region, the 1/M expansion for the corrections diverges, and can be the starting point for determining the correct non-mean-field critical exponents using renormalization group arguments. In the end, we will show that the critical series for the generic observable can be expressed as a sum of Feynman diagrams with the same numerical prefactors of field theories. However, the contribution of a given diagram is not evaluated by associating Gaussian propagators to its lines, as in field theories: one has to consider the graph as a portion of the original lattice, replacing the internal lines with appropriate one-dimensional chains, and attaching to the internal points the appropriate number of infinite-size Bethe trees to restore the correct local connectivity of the original model. The actual contribution of each (fat) diagram is the so-called line-connected observable, which also includes contributions from sub-diagrams with appropriate prefactors. 
    In order to compute the corrections near the critical point, Feynman diagrams (with their symmetry factors) can be read directly from the appropriate field-theoretical literature; the computation of momentum integrals is also quite similar; the extra work consists of computing the line-connected observable of the associated fat diagram in the limit of all lines becoming infinitely long.

  4. The research on construction and application of machining process knowledge base

    NASA Astrophysics Data System (ADS)

    Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai

    2018-03-01

    In order to apply knowledge to machining process design, and from the perspective of knowledge use in computer-aided process planning (CAPP), a hierarchical knowledge classification structure is established according to the characteristics of the mechanical engineering field. Machining process knowledge is expressed in structured form by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to this representation of machining process knowledge. This paper gives the definition and classification of machining process knowledge, the knowledge model, and the application flow of knowledge-based process design, and carries out the main steps of the machine tool selection decision as an application of the knowledge base.
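
    The combination of production rules and object-oriented representation described above can be sketched as follows. This is a hypothetical miniature with invented feature classes and rules, not the paper's knowledge base:

```python
# Object-oriented facts (machining features) plus production rules
# (IF condition THEN conclusion), evaluated by simple forward chaining.

from dataclasses import dataclass

@dataclass
class Feature:               # object-oriented knowledge: a machining feature
    kind: str                # e.g. "hole", "plane"
    diameter_mm: float = 0.0
    tolerance_um: float = 100.0

def rule_drill(f):
    if f.kind == "hole" and f.tolerance_um >= 50:
        return "drilling"

def rule_ream(f):
    if f.kind == "hole" and f.tolerance_um < 50:
        return "drilling + reaming"

def rule_mill(f):
    if f.kind == "plane":
        return "face milling"

RULES = [rule_drill, rule_ream, rule_mill]

def infer_process(feature):
    """Scan the rule base; the first matching rule fires."""
    for rule in RULES:
        result = rule(feature)
        if result is not None:
            return result
    return "manual planning required"

process = infer_process(Feature("hole", diameter_mm=10.0, tolerance_um=20.0))
# -> "drilling + reaming"
```

    A real CAPP knowledge base would add conflict resolution among rules and link conclusions to machine tool and cutting-parameter selection, as the application flow in the paper describes.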

  5. Predicting carbon dioxide and energy fluxes across global FLUXNET sites with regression algorithms

    DOE PAGES

    Tramontana, Gianluca; Jung, Martin; Schwalm, Christopher R.; ...

    2016-07-29

    Spatio-temporal fields of land–atmosphere fluxes derived from data-driven models can complement simulations by process-based land surface models. While a number of strategies for empirical models with eddy-covariance flux data have been applied, a systematic intercomparison of these methods has been missing so far. In this study, we performed a cross-validation experiment for predicting carbon dioxide, latent heat, sensible heat and net radiation fluxes across different ecosystem types with 11 machine learning (ML) methods from four different classes (kernel methods, neural networks, tree methods, and regression splines). We applied two complementary setups: (1) 8-day average fluxes based on remotely sensed data and (2) daily mean fluxes based on meteorological data and a mean seasonal cycle of remotely sensed variables. The patterns of predictions from different ML and experimental setups were highly consistent. There were systematic differences in performance among the fluxes, with the following ascending order: net ecosystem exchange (R² < 0.5), ecosystem respiration (R² > 0.6), gross primary production (R² > 0.7), latent heat (R² > 0.7), sensible heat (R² > 0.7), and net radiation (R² > 0.8). The ML methods predicted the across-site variability and the mean seasonal cycle of the observed fluxes very well (R² > 0.7), while the 8-day deviations from the mean seasonal cycle were not well predicted (R² < 0.5). Fluxes were better predicted at forested and temperate climate sites than at sites in extreme climates or less represented by training data (e.g., the tropics). Finally, the evaluated large ensemble of ML-based models will be the basis of new global flux products.

  7. Coagulation kinetics beyond mean field theory using an optimised Poisson representation.

    PubMed

    Burnett, James; Ford, Ian J

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
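
    The mean-field rate equation mentioned above can be written down and integrated directly for a size-independent coagulation rate. The sketch below (our illustration, not the paper's Poisson-representation method) checks a forward-Euler integration against the closed-form solution; as the abstract notes, this mean-field description degrades when mean populations are small, since the true pairwise rate is proportional to n(n-1)/2 rather than n²/2.

```python
# Mean-field population rate equation for size-independent coagulation:
#   dn/dt = -k * n^2 / 2,  with exact solution n(t) = n0 / (1 + k*n0*t/2).

def mean_field_coagulation(n0, k, t_end, dt=1e-4):
    # Forward-Euler integration of the mean-field rate equation
    n, t = float(n0), 0.0
    while t < t_end:
        n -= 0.5 * k * n * n * dt
        t += dt
    return n

n0, k, t_end = 1000.0, 0.01, 1.0
numeric = mean_field_coagulation(n0, k, t_end)
exact = n0 / (1 + 0.5 * k * n0 * t_end)   # closed-form mean-field solution
```

    The Poisson-representation approach of the paper replaces this single deterministic trajectory with noisy, complex pseudo-populations whose average recovers the physical mean even at small n.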

  8. Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.

    PubMed

    Manjón, José V; Tohka, Jussi; Robles, Montserrat

    2010-11-01

    This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.
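
    The non-local means idea favoured in the comparison can be sketched in one dimension (a toy of ours; the actual method operates on 3-D MRI patches): each sample is replaced by an average over all samples, weighted by patch similarity, so sharp boundaries between "tissues" are preserved better than with a purely local smoother.

```python
import numpy as np

def nlm_1d(y, patch=3, h=0.5):
    """Toy 1-D non-local means: each sample becomes a weighted average
    of all samples, with weights given by patch similarity."""
    n = len(y)
    pad = np.pad(y, patch, mode='reflect')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)
        out[i] = (w * y).sum() / w.sum()
    return out

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 1.0, 0.0], 50)       # piecewise-constant "tissue" signal
noisy = clean + 0.2 * rng.standard_normal(150)
denoised = nlm_1d(noisy)

mse_noisy = ((noisy - clean) ** 2).mean()
mse_denoised = ((denoised - clean) ** 2).mean()
```

    Because similar patches occur all along each plateau, the averaging pool is large within a tissue class and nearly empty across classes, which is what keeps the edges sharp.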

  9. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppression operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
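
    The speed tuning of the spatio-temporal Gabor filters described above can be illustrated with a reduced 1-D-space-plus-time sketch (our simplification, not the paper's 3-D filter family): the kernel is a drifting grating under a Gaussian envelope, and it responds far more strongly to a stimulus moving at its preferred speed than to one moving the opposite way.

```python
import numpy as np

def st_gabor(xs, ts, speed, wavelength=4.0, sigma_x=2.0, sigma_t=2.0):
    """Space-time Gabor kernel tuned to a preferred speed:
    a grating drifting at `speed`, oriented in the x-t plane."""
    X, T = np.meshgrid(xs, ts, indexing='ij')
    envelope = np.exp(-X ** 2 / (2 * sigma_x ** 2) - T ** 2 / (2 * sigma_t ** 2))
    carrier = np.cos(2 * np.pi * (X - speed * T) / wavelength)
    return envelope * carrier

xs = np.arange(-8, 9, dtype=float)
ts = np.arange(-8, 9, dtype=float)
g = st_gabor(xs, ts, speed=1.0)

def drifting_grating(speed, wavelength=4.0):
    X, T = np.meshgrid(xs, ts, indexing='ij')
    return np.cos(2 * np.pi * (X - speed * T) / wavelength)

# Response = correlation of the filter with the stimulus: large for the
# preferred speed, near zero for the anti-preferred direction.
r_pref = abs((g * drifting_grating(1.0)).sum())
r_null = abs((g * drifting_grating(-1.0)).sum())
```

    In the full model a bank of such filters, spanning orientations and speeds in 2-D space plus time, feeds the surround suppression and attention stages.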

  10. Rotational symmetry breaking toward a string-valence bond solid phase in frustrated J1 -J2 transverse field Ising model

    NASA Astrophysics Data System (ADS)

    Sadrzadeh, M.; Langari, A.

    2018-06-01

    We study the effect of quantum fluctuations by means of a transverse magnetic field (Γ) on the highly degenerate ground state of the antiferromagnetic J1-J2 Ising model on the square lattice, in the limit J2/J1 = 0.5. We show that harmonic quantum fluctuations based on single spin flips cannot lift this degeneracy; however, anharmonic quantum fluctuations based on multi-spin cluster-flip excitations lift the degeneracy toward a unique ground state with string-valence bond solid (VBS) nature. A cluster operator formalism has been implemented to incorporate anharmonic quantum fluctuations. We show that cluster-type excitations of the model not only lower the excitation energy compared with a single spin flip but also lift the extensive degeneracy in favor of a string-VBS state, which breaks lattice rotational symmetry with only twofold degeneracy. The tendency toward the broken-symmetry state is justified by numerical exact diagonalization. Moreover, we introduce a map to find the relation between the present model on the checkerboard and square lattices.
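
    The exact-diagonalization check mentioned above can be illustrated on a much smaller problem. The sketch below is our toy, a 4-site open 1-D transverse-field Ising chain rather than the 2-D J1-J2 model of the paper: the Hamiltonian is assembled from Pauli matrices and diagonalized, recovering the classical ground-state energy -(N-1)J at Γ = 0, with any Γ > 0 lowering it.

```python
import numpy as np

# H = -J sum_i s^z_i s^z_{i+1} - Gamma sum_i s^x_i   (open chain)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, i, n):
    """Operator `op` acting on site i of an n-site chain (Kronecker products)."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

def tfim_hamiltonian(n, J=1.0, gamma=0.5):
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= J * site_op(sz, i, n) @ site_op(sz, i + 1, n)
    for i in range(n):
        H -= gamma * site_op(sx, i, n)
    return H

e0 = np.linalg.eigvalsh(tfim_hamiltonian(4, J=1.0, gamma=0.0))[0]
# gamma = 0 is the classical limit: e0 = -(n-1)*J = -3 for n = 4.
e_half = np.linalg.eigvalsh(tfim_hamiltonian(4, J=1.0, gamma=0.5))[0]
```

    The paper's computation is the 2-D analogue of this check, on clusters large enough to distinguish the competing degenerate states.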

  11. Critical frontier of the triangular Ising antiferromagnet in a field

    NASA Astrophysics Data System (ADS)

    Qian, Xiaofeng; Wegewijs, Maarten; Blöte, Henk W.

    2004-03-01

    We study the critical line of the triangular Ising antiferromagnet in an external magnetic field by means of a finite-size analysis of results obtained by transfer-matrix and Monte Carlo techniques. We compare the shape of the critical line with predictions of two different theoretical scenarios. Both scenarios, while plausible, involve assumptions. The first scenario is based on the generalization of the model to a vertex model, and the assumption that the exact analytic form of the critical manifold of this vertex model is determined by the zeroes of an O(2) gauge-invariant polynomial in the vertex weights. However, it is not possible to fit the coefficients of such polynomials of orders up to 10, such as to reproduce the numerical data for the critical points. The second theoretical prediction is based on the assumption that a renormalization mapping exists of the Ising model on the Coulomb gas, and analysis of the resulting renormalization equations. It leads to a shape of the critical line that is inconsistent with the first prediction, but consistent with the numerical data.

  12. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data is extracted and all land areas are labeled by using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background at the edge of the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
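
    Of the three features, local entropy is the simplest to sketch (our illustration, not the paper's implementation): a cluttered, sea-like region yields a broad local grey-level histogram and hence high entropy, while a homogeneous region yields near-zero entropy.

```python
import numpy as np

def local_entropy(img, win=5, bins=16):
    """Shannon entropy of the grey-level histogram in a win x win
    neighbourhood around each pixel (edges handled by reflection)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

rng = np.random.default_rng(5)
flat = np.full((32, 32), 0.5)           # calm, homogeneous region
textured = rng.uniform(0, 1, (32, 32))  # cluttered, sea-like region
ent_flat = local_entropy(flat)
ent_tex = local_entropy(textured)
```

    In the algorithm this map would be stacked with the local texture and local gradient mean into the 3D feature vector that the multi-Gaussian sea background model describes.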

  13. Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis

    PubMed Central

    Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.

    2014-01-01

    Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381

  14. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in breeding new crop varieties; for example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Detailed root architectural parameters are therefore important. However, classical field sampling of roots only provides aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, using sensitivity analysis and inverse parameter estimation. This methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes, coring, trenching, and rhizotubes, were applied virtually and the aggregated information was computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with other parameters show highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
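The Morris one-at-a-time (OAT) screening used above computes, for each parameter, elementary effects (f(x + Δe_i) − f(x))/Δ along random trajectories, and summarizes them by their mean absolute value and standard deviation. A minimal sketch follows; the trajectory design is simplified (one base point, one perturbation per parameter) compared with full Morris trajectories, and the function and bounds shown are illustrative:

```python
import numpy as np

def morris_sample(f, bounds, n_traj=20, delta=0.25, seed=0):
    """Simplified Morris OAT screening: returns mu* (mean absolute
    elementary effect) and sigma for each parameter.
    `f` maps a parameter vector to a scalar model output;
    `bounds` is a (k, 2) array of lower/upper parameter bounds."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    effects = np.empty((n_traj, k))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=k)   # base point in unit cube
        base = f(lo + (hi - lo) * x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                      # perturb one parameter
            effects[t, i] = (f(lo + (hi - lo) * xp) - base) / delta
    return np.abs(effects).mean(axis=0), effects.std(axis=0)
```

A large mu* flags a sensitive parameter; a large sigma relative to mu* flags nonlinear or interaction effects, as reported in the abstract.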

  15. Assessment of mean-field microkinetic models for CO methanation on stepped metal surfaces using accelerated kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten

    2017-10-01

    First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
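As a minimal illustration of the kMC machinery discussed above (not the authors' accelerated algorithm), the following sketch performs one rejection-free kMC step and a crude rescaling of quasi-equilibrated fast rates, which is the basic idea behind temporal acceleration schemes; the function names and the uniform scaling factor are our assumptions:

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free kMC step: pick a process with probability
    proportional to its rate, and draw an exponential waiting time."""
    rates = np.asarray(rates, dtype=float)
    r_tot = rates.sum()
    cum = np.cumsum(rates)
    idx = int(np.searchsorted(cum, rng.uniform(0, r_tot)))
    dt = -np.log(1.0 - rng.uniform()) / r_tot
    return idx, dt

def scale_quasi_equilibrated(rates, fast, factor):
    """Crude temporal acceleration in the spirit of the algorithms the
    abstract discusses: rates of processes flagged as quasi-equilibrated
    (e.g. fast diffusion) are scaled down by a common factor, leaving
    the slow, rate-limiting events untouched."""
    rates = np.asarray(rates, dtype=float).copy()
    rates[fast] /= factor
    return rates
```

Real acceleration algorithms detect quasi-equilibration on the fly and must verify that the scaling does not bias the slow dynamics, which is exactly the failure mode the abstract probes.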

  16. Development of an RF-EMF Exposure Surrogate for Epidemiologic Research.

    PubMed

    Roser, Katharina; Schoeni, Anna; Bürgi, Alfred; Röösli, Martin

    2015-05-22

    Exposure assessment is a crucial part of studying the potential effects of RF-EMF. Using data from the HERMES study on adolescents, we developed an integrative exposure surrogate combining near-field and far-field RF-EMF exposure in a single brain and whole-body exposure measure. Contributions from far-field sources were modelled by propagation modelling and multivariable regression modelling using personal measurements. Contributions from near-field sources were assessed from both questionnaires and mobile phone operator records. Mean cumulative brain and whole-body doses were 1559.7 mJ/kg and 339.9 mJ/kg per day, respectively. 98.4% of the brain dose originated from near-field sources, mainly from GSM mobile phone calls (93.1%) and from DECT phone calls (4.8%). Main contributors to the whole-body dose were GSM mobile phone calls (69.0%), use of computers, laptops and tablets connected to WLAN (12.2%) and data traffic on the mobile phone via WLAN (6.5%). Exposure from mobile phone base stations contributed 1.8% to the whole-body dose, while uplink exposure from other people's mobile phones contributed 3.6%. In conclusion, the proposed approach is considered useful for combining near-field and far-field exposure into an integrative exposure surrogate for exposure assessment in epidemiologic studies. However, substantial uncertainties remain about exposure contributions from various near-field and far-field sources.

  18. MO-FG-CAMPUS-TeP3-01: A Model of Baseline Shift to Improve Robustness of Proton Therapy Treatments of Moving Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Barragan Montero, A; Di Perri, D

    Purpose: The shift in the mean position of a moving tumor, also known as "baseline shift", has been modeled in order to automatically generate uncertainty scenarios for the assessment and robust optimization of proton therapy treatments in lung cancer. Methods: An average CT scan and a Mid-Position CT scan (MidPCT) of the patient at planning time are first generated from 4D-CT data. The mean position of the tumor along the breathing cycle is represented by the GTV contour in the MidPCT. Several studies have reported both systematic and random variations of the mean tumor position from fraction to fraction. Our model can simulate this baseline shift by generating a local deformation field that moves the tumor on all phases of the 4D-CT without creating any non-physical artifact. The deformation field comprises normal and tangential components with respect to the lung wall in order to allow the tumor to slip within the lung instead of deforming the lung surface. The deformation field is eventually smoothed in order to enforce its continuity. Two 4D-CT series acquired at a 1-week interval were used to validate the model. Results: Based on the first 4D-CT set, the model was able to generate a third 4D-CT that reproduced the 5.8 mm baseline shift measured in the second 4D-CT. The water equivalent thickness (WET) of the voxels was computed for the 3 average CTs. The root mean square deviation of the WET in the GTV is 0.34 mm between week 1 and week 2, and 0.08 mm between the simulated data and week 2. Conclusion: Our model can be used to automatically generate uncertainty scenarios for robustness analysis of a proton therapy plan. The generated scenarios can also feed a TPS equipped with a robust optimizer. Kevin Souris, Ana Barragan, and Dario Di Perri are financially supported by Televie Grants from F.R.S.-FNRS.

  19. The Poisson-Helmholtz-Boltzmann model.

    PubMed

    Bohinc, K; Shrestha, A; May, S

    2011-10-01

    We present a mean-field model of a one-component electrolyte solution where the mobile ions interact not only via Coulomb interactions but also through a repulsive non-electrostatic Yukawa potential. Our choice of the Yukawa potential represents a simple model for solvent-mediated interactions between ions. We employ a local formulation of the mean-field free energy through the use of two auxiliary potentials, an electrostatic and a non-electrostatic potential. Functional minimization of the mean-field free energy leads to two coupled local differential equations, the Poisson-Boltzmann equation and the Helmholtz-Boltzmann equation. Their boundary conditions account for the sources of both the electrostatic and non-electrostatic interactions on the surface of all macroions that reside in the solution. We analyze a specific example, two like-charged planar surfaces with their mobile counterions forming the electrolyte solution. For this system we calculate the pressure between the two surfaces, and we analyze its dependence on the strength of the Yukawa potential and on the non-electrostatic interactions of the mobile ions with the planar macroion surfaces. In addition, we demonstrate that our mean-field model is consistent with the contact theorem, and we outline its generalization to arbitrary interaction potentials through the use of a Laplace transformation. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2011

  20. Topological bifurcations in a model society of reasonable contrarians

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Rechtman, Raúl

    2013-12-01

    People are often divided into conformists and contrarians, the former tending to align to the majority opinion in their neighborhood and the latter tending to disagree with that majority. In practice, however, the contrarian tendency is rarely followed when there is an overwhelming majority with a given opinion, which denotes a social norm. Such reasonable contrarian behavior is often considered a mark of independent thought and can be a useful strategy in financial markets. We present the opinion dynamics of a society of reasonable contrarian agents. The model is a cellular automaton of Ising type, with antiferromagnetic pair interactions modeling contrarianism and plaquette terms modeling social norms. We introduce the entropy of the collective variable as a way of comparing deterministic (mean-field) and probabilistic (simulations) bifurcation diagrams. In the mean-field approximation the model exhibits bifurcations and a chaotic phase, interpreted as coherent oscillations of the whole society. However, in a one-dimensional spatial arrangement one observes incoherent oscillations and a constant average. In simulations on Watts-Strogatz networks with a small-world effect the mean-field behavior is recovered, with a bifurcation diagram that resembles the mean-field one but where the rewiring probability is used as the control parameter. Similar bifurcation diagrams are found for scale-free networks, and we are able to compute an effective connectivity for such networks.
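A toy one-dimensional version of a reasonable-contrarian update can be sketched as follows. The precise rules here (neighborhood of three, a social-norm threshold q above which agents conform, deterministic synchronous updates) are assumptions for illustration, not the paper's exact cellular automaton:

```python
import numpy as np

def step(opinions, q=0.8):
    """One synchronous update of a ring of reasonable contrarians.
    Each agent looks at the fraction of +1 opinions among itself and
    its two neighbours; it opposes the local majority (contrarian)
    unless that majority is overwhelming (fraction >= q or <= 1 - q),
    in which case it conforms to the social norm."""
    frac_up = (np.roll(opinions, 1) + opinions + np.roll(opinions, -1) + 3) / 6
    contrarian = np.where(frac_up > 0.5, -1, 1)   # oppose the majority
    conform = np.where(frac_up > 0.5, 1, -1)      # follow the majority
    norm = (frac_up >= q) | (frac_up <= 1 - q)    # overwhelming majority?
    return np.where(norm, conform, contrarian)
```

With rules of this type a unanimous society is a fixed point (the norm dominates), while mixed configurations are driven by the contrarian term, which is the mechanism behind the oscillations discussed in the abstract.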

  2. Combining symmetry collective states with coupled-cluster theory: Lessons from the Agassi model Hamiltonian

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Dukelsky, Jorge; Scuseria, Gustavo E.

    2017-06-01

    The failure of single-reference coupled-cluster theory for strongly correlated many-body systems is flagged at the mean-field level by the spontaneous breaking of one or more physical symmetries of the Hamiltonian. Restoring the symmetry of the mean-field determinant by projection reveals that coupled-cluster theory fails because it factorizes high-order excitation amplitudes incorrectly. However, symmetry-projected mean-field wave functions do not account sufficiently for dynamic (or weak) correlation. Here we pursue a merger of symmetry projection and coupled-cluster theory, following previous work along these lines that utilized the simple Lipkin model system as a test bed [J. Chem. Phys. 146, 054110 (2017), 10.1063/1.4974989]. We generalize the concept of a symmetry-projected mean-field wave function to the concept of a symmetry-projected state, in which the factorization of high-order excitation amplitudes in terms of low-order ones is guided by symmetry projection and is not exponential, and combine them with coupled-cluster theory in order to model the ground state of the Agassi Hamiltonian. This model has two separate channels of correlation and two separate physical symmetries which are broken under strong correlation. We show how the combination of symmetry collective states and coupled-cluster theory is effective in obtaining correlation energies and order parameters of the Agassi model throughout its phase diagram.

  3. Modelling of induced electric fields based on incompletely known magnetic fields

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro

    2017-08-01

    Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.

  4. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is significant for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoon or tornado. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF model. In LES, when the grid is suddenly refined at an interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be predicted correctly. In the case of a meteorological model, the velocity fluctuations tend to vary diffusively, lacking high-frequency components because of numerical filtering effects. In order to predict the peak value of the wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.

  5. Local gravity field modeling using spherical radial basis functions and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahbuby, Hany; Safari, Abdolreza; Foroughi, Ismael

    2017-05-01

    Spherical Radial Basis Functions (SRBFs) can express the local gravity field model of the Earth if they are parameterized optimally on or below the Bjerhammar sphere. This parameterization is generally defined as the shape of the base functions, their number, center locations, bandwidths, and scale coefficients. The number/location and bandwidths of the base functions are the most important parameters for accurately representing the gravity field; once they are determined, the scale coefficients can then be computed accordingly. In this study, the point-mass kernel, as the simplest shape of SRBFs, is chosen to evaluate the synthesized free-air gravity anomalies over the rough area in Auvergne and GNSS/Leveling points (synthetic height anomalies) are used to validate the results. A two-step automatic approach is proposed to determine the optimum distribution of the base functions. First, the location of the base functions and their bandwidths are found using the genetic algorithm; second, the conjugate gradient least squares method is employed to estimate the scale coefficients. The proposed methodology shows promising results. On the one hand, when using the genetic algorithm, the base functions do not need to be set to a regular grid and they can move according to the roughness of topography. In this way, the models meet the desired accuracy with a low number of base functions. On the other hand, the conjugate gradient method removes the bias between derived quasigeoid heights from the model and from the GNSS/leveling points; this means there is no need for a corrector surface. The numerical test on the area of interest revealed an RMS of 0.48 mGal for the differences between predicted and observed gravity anomalies, and a corresponding 9 cm for the differences in GNSS/leveling points.
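The second step of the two-step approach above, estimating scale coefficients for fixed base-function centres, reduces to a linear least-squares problem. A small sketch with the point-mass kernel follows; the centre positions are arbitrary here (in the paper they come from the genetic algorithm), the geometry is simplified to Cartesian 1/distance kernels, and NumPy's least-squares solver stands in for the conjugate gradient method:

```python
import numpy as np

def point_mass_kernel(obs, src):
    """Point-mass SRBF design matrix: 1/distance between observation
    points and source (base-function) points, both (n, 3) Cartesian."""
    d = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
    return 1.0 / d

def fit_coefficients(obs, src, values):
    """With base-function centres `src` fixed, estimate the scale
    coefficients from observed field `values` by least squares."""
    A = point_mass_kernel(obs, src)
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef
```

The genetic-algorithm step would wrap this fit in an outer loop that moves the centres (and bandwidths) to minimize the residual, which is why the base functions can concentrate where the topography is rough.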

  6. Quantum Critical Higgs

    NASA Astrophysics Data System (ADS)

    Bellazzini, Brando; Csáki, Csaba; Hubisz, Jay; Lee, Seung J.; Serra, Javi; Terning, John

    2016-10-01

    The appearance of the light Higgs boson at the LHC is difficult to explain, particularly in light of naturalness arguments in quantum field theory. However, light scalars can appear in condensed matter systems when parameters (like the amount of doping) are tuned to a critical point. At zero temperature these quantum critical points are directly analogous to the finely tuned standard model. In this paper, we explore a class of models with a Higgs near a quantum critical point that exhibits non-mean-field behavior. We discuss the parametrization of the effects of a Higgs emerging from such a critical point in terms of form factors, and present two simple realistic scenarios based on either generalized free fields or a 5D dual in anti-de Sitter space. For both of these models, we consider the processes g g →Z Z and g g →h h , which can be used to gain information about the Higgs scaling dimension and IR transition scale from the experimental data.

  7. Dynamic phase transitions of the Blume-Emery-Griffiths model under an oscillating external magnetic field by the path probability method

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes and exhibit a dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss these dynamic phase diagrams with those obtained within Glauber-type stochastic dynamics based on mean-field theory.

  8. The velocity field of clusters of galaxies within 100 megaparsecs. II - Northern clusters

    NASA Technical Reports Server (NTRS)

    Mould, J. R.; Akeson, R. L.; Bothun, G. D.; Han, M.; Huchra, J. P.; Roth, J.; Schommer, R. A.

    1993-01-01

    Distances and peculiar velocities for galaxies in eight clusters and groups have been determined by means of the near-infrared Tully-Fisher relation. With the possible exception of a group halfway between us and the Hercules Cluster, we observe peculiar velocities of the same order as the measuring errors of about 400 km/s. The present sample is drawn from the northern Galactic hemisphere and delineates a quiet region in the Hubble flow. This contrasts with the large-scale flows seen in the Hydra-Centaurus and Perseus-Pisces regions. We compare the observed peculiar velocities with predictions based upon the gravity field inferred from the IRAS redshift survey. The differences between the observed and predicted peculiar motions are generally small, except near dense structures, where the observed motions exceed the predictions by significant amounts. Kinematic models of the velocity field are also compared with the data. We cannot distinguish between parameterized models with a great attractor or models with a bulk flow.
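The peculiar velocities quoted above are, by definition, the difference between the observed recession velocity and the smooth Hubble-flow velocity at the galaxy's (Tully-Fisher) distance. A one-line sketch, with an arbitrary illustrative value of H0:

```python
def peculiar_velocity(cz, distance_mpc, h0=75.0):
    """Radial peculiar velocity in km/s: observed recession velocity cz
    minus the Hubble-flow velocity H0 * d. The distance comes from a
    distance indicator such as the near-infrared Tully-Fisher relation;
    the default H0 here is an arbitrary illustrative choice."""
    return cz - h0 * distance_mpc
```

With measuring errors of about 400 km/s per cluster, only peculiar velocities well above this level, such as those in the Hydra-Centaurus region, are individually significant.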

  9. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
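A toy one-dimensional version of such a hybrid scheme can be sketched as follows. Integer particle counts diffuse stochastically on the left subdomain, a discretized diffusion PDE evolves on the right, and mass is exchanged at the interface; a fractional buffer releases whole particles so the total mass is exactly conserved. The details (compartment-based jumps, explicit Euler update, buffer mechanism) are our simplifications, not the paper's scheme:

```python
import numpy as np

def hybrid_step(counts, u, buf, d, rng):
    """One step of a toy 1-D hybrid diffusion scheme: stochastic
    integer `counts` on the left, deterministic field `u` on the
    right, coupled at the interface. `buf` accumulates fractional
    deterministic outflow and releases whole particles."""
    n = len(counts)
    new = np.zeros_like(counts)
    for i in range(n):
        # Each particle jumps left/right with probability d, else stays.
        left, right, stay = rng.multinomial(counts[i], [d, d, 1 - 2 * d])
        new[i] += stay
        new[max(i - 1, 0)] += left        # reflecting outer boundary
        if i < n - 1:
            new[i + 1] += right
        else:
            u[0] += right                 # stochastic -> PDE transfer
    # Explicit Euler update of the discretized PDE (reflecting far end).
    lap = np.empty_like(u)
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
    out = d * u[0]                        # PDE -> stochastic flux
    u = u + d * lap
    u[0] -= out
    buf += out
    k = int(buf)                          # release whole particles
    buf -= k
    new[-1] += k
    return new, u, buf
```

Because every transfer is accounted for on both sides of the interface, the sum counts + u + buf is invariant, mirroring the particle-conservation property emphasized in the abstract.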

  12. An energetics-based honeybee nectar-foraging model used to assess the potential for landscape-level pesticide exposure dilution

    PubMed Central

    Focks, Andreas; Belgers, Dick; van der Steen, Jozef J.M.; Boesten, Jos J.T.I.; Roessink, Ivo

    2016-01-01

    Estimating the exposure of honeybees to pesticides on a landscape scale requires models of their spatial foraging behaviour. For this purpose, we developed a mechanistic, energetics-based model for a single day of nectar foraging in complex landscape mosaics. Net energetic efficiency determined resource patch choice. In one version of the model a single optimal patch was selected each hour. In another version, recruitment of foragers was simulated and several patches could be exploited simultaneously. Resource availability changed during the day due to depletion and/or intrinsic properties of the resource (anthesis). The model accounted for the impact of patch distance and size, resource depletion and replenishment, competition with other nectar foragers, and seasonal and diurnal patterns in the availability of nectar-providing crops and wild flowers. From the model we derived simple rules for resource patch selection; e.g., for landscapes with mass-flowering crops only, net energetic efficiency would be proportional to the energetic content of the nectar divided by the distance to the hive. We also determined maximum distances at which resources like oilseed rape and clover were still energetically attractive. We used the model to assess the potential for pesticide exposure dilution in landscapes of different composition and complexity. Dilution means a lower concentration in nectar arriving at the hive compared to the concentration in nectar at a treated field, and can result from foraging effort being diverted away from treated fields. Applying the model to all possible hive locations over a large area, distributions of dilution factors were obtained and characterised by their 90th-percentile value. For an area for which detailed spatial data on crops and off-field semi-natural habitats were available, we tested three landscape management scenarios that were expected to lead to exposure dilution, providing alternative resources to the target crop (oilseed rape) in the form of (i) other untreated crop fields, (ii) flower strips of different widths at field edges (off-crop in-field resources), and (iii) resources on off-field (semi-natural) habitats. For both model versions, significant dilution occurred only when alternative resource patches were equally or more attractive than oilseed rape, nearby and numerous, and only in the case of flower strips and off-field habitats. On an area basis, flower strips were more than an order of magnitude more effective than off-field habitats, the main reason being that flower strips had an optimal location. The two model versions differed in the predicted number of resource patches exploited over the day, but mainly in landscapes with numerous small resource patches. In landscapes consisting of a few large resource patches (crop fields) both versions predicted the use of a small number of patches. PMID:27602273
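The patch-selection rule stated in the abstract for mass-flowering-crop landscapes (net energetic efficiency proportional to nectar energy content divided by distance to the hive) is simple enough to state directly in code; the data structure and example numbers below are illustrative only:

```python
def best_patch(patches):
    """Pick the patch maximizing nectar energy content per unit
    distance to the hive, the rule the model yields for landscapes
    with mass-flowering crops only. `patches` maps a patch name to
    (energy_per_load, distance_to_hive), in any consistent units."""
    return max(patches, key=lambda p: patches[p][0] / patches[p][1])
```

For instance, a less energy-rich resource can still win if it is close enough to the hive, which is why nearby flower strips dilute exposure so effectively.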

  13. Scaling with System Size of the Lyapunov Exponents for the Hamiltonian Mean Field Model

    NASA Astrophysics Data System (ADS)

    Manos, Thanos; Ruffo, Stefano

    2011-12-01

    The Hamiltonian Mean Field model is a prototype for systems with long-range interactions. It describes the motion of N particles moving on a ring, coupled with an infinite-range potential. The model has a second-order phase transition at the energy density U_c = 3/4 and its dynamics is exactly described by the Vlasov equation in the N → ∞ limit. Its chaotic properties have been investigated in the past, but the determination of the scaling with N of the Lyapunov Spectrum (LS) of the model remains a challenging open problem. Here we show that the N^(-1/3) scaling of the Maximal Lyapunov Exponent (MLE), found in previous numerical and analytical studies, extends to the full LS; scaling is "precocious" for the LS, meaning that it becomes manifest for a much smaller number of particles than the one needed to check the scaling for the MLE. Besides that, the N^(-1/3) scaling appears to be valid not only for U > U_c, as suggested by theoretical approaches based on a random matrix approximation, but also below a threshold energy U_t ≈ 0.2. Using a recently proposed method (GALI) devised to rapidly check the chaotic or regular nature of an orbit, we find that U_t is also the energy at which a sharp transition from weak to strong chaos is present in the phase space of the model. Around this energy the phase of the vector order parameter of the model becomes strongly time dependent, inducing a significant untrapping of particles from a nonlinear resonance.
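The HMF dynamics referred to above reduce to dθ_i/dt = p_i and dp_i/dt = M_y cos θ_i − M_x sin θ_i, where (M_x, M_y) is the mean magnetization. A sketch of a velocity-Verlet integration of these equations is given below; it illustrates the dynamics only and is not the paper's Lyapunov-spectrum computation. N, dt, and the initial conditions are arbitrary choices.

```python
import math, random

# Sketch of the HMF equations of motion, integrated with velocity Verlet.
# Energy density U = H/N should be (approximately) conserved.

def mean_field(theta):
    n = len(theta)
    return (sum(math.cos(t) for t in theta) / n, sum(math.sin(t) for t in theta) / n)

def forces(theta):
    mx, my = mean_field(theta)
    return [my * math.cos(t) - mx * math.sin(t) for t in theta]

def energy_density(theta, p):
    mx, my = mean_field(theta)
    kinetic = 0.5 * sum(pi * pi for pi in p)
    potential = 0.5 * len(theta) * (1.0 - mx * mx - my * my)
    return (kinetic + potential) / len(theta)

def run_hmf(n=50, dt=0.02, steps=1000, seed=1):
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    p = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    u0 = energy_density(theta, p)
    f = forces(theta)
    for _ in range(steps):  # velocity-Verlet (leapfrog) steps
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
        theta = [t + dt * pi for t, pi in zip(theta, p)]
        f = forces(theta)
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    return u0, energy_density(theta, p)

u0, u1 = run_hmf()
print(round(u0, 4), round(u1, 4))
```

The symplectic integrator keeps the energy density essentially constant, which is the basic sanity check before any Lyapunov analysis.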

  14. Meridional Circulation Dynamics from 3D Magnetohydrodynamic Global Simulations of Solar Convection

    NASA Astrophysics Data System (ADS)

    Passos, Dário; Charbonneau, Paul; Miesch, Mark

    2015-02-01

    The form of solar meridional circulation is a very important ingredient for mean field flux transport dynamo models. However, a shroud of mystery still surrounds this large-scale flow, given that its measurement using current helioseismic techniques is challenging. In this work, we use results from three-dimensional global simulations of solar convection to infer the dynamical behavior of the established meridional circulation. We make a direct comparison between the meridional circulation that arises in these simulations and the latest observations. Based on our results, we argue that there should be an equatorward flow at the base of the convection zone at mid-latitudes, below the current maximum depth helioseismic measurements can probe (0.75 R⊙). We also provide physical arguments to justify this behavior. The simulations indicate that the meridional circulation undergoes substantial changes in morphology as the magnetic cycle unfolds. We close by discussing the importance of these dynamical changes for current methods of observation, which involve long averaging periods of helioseismic data. Also noteworthy is the fact that these topological changes indicate a rich interaction between magnetic fields and plasma flows, which challenges the ubiquitous kinematic approach used in the vast majority of mean field dynamo simulations.

  15. WE-G-BRD-09: Novel MRI Compatible Electron Accelerator for MRI-Linac Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, B; Keall, P; Gierman, S

    Purpose: MRI guided radiotherapy is a rapidly growing field; however current linacs are not designed to operate in MRI fringe fields. As such, current MRI-Linac systems require magnetic shielding, impairing MR image quality and system flexibility. Here, we present a bespoke electron accelerator concept with robust operation in in-line magnetic fields. Methods: For in-line MRI-Linac systems, electron gun performance is the major constraint on accelerator performance. To overcome this, we propose placing a cathode directly within the first accelerating cavity. Such a configuration is used extensively in high energy particle physics, but not previously for radiotherapy. Benchmarked computational modelling (CST, Darmstadt, Germany) was employed to design and assess a 5.5 cell side coupled accelerator with a temperature limited thermionic cathode in the first accelerating cell. This simulation was coupled to magnetic fields from a 1 T MRI model to assess robustness in magnetic fields for Source to Isocenter Distances between 1 and 2 meters. Performance was compared to a conventional electron gun based system in the same magnetic field. Results: A temperature limited cathode (work function 1.8 eV, temperature 1245 K, emission constant 60 A/(cm²·K²)) will emit a mean current density of 24 mA/mm² (Richardson's law). We modeled a circular cathode with radius 2 mm and mean current 300 mA. Capture efficiency of the device was 43%, resulting in a target current of 130 mA. The electron beam had a FWHM of 0.2 mm, and a mean energy of 5.9 MeV (interquartile spread of 0.1 MeV). Such an electron beam is suitable for radiotherapy, comparing favourably to conventional systems. This model was robust to operation in the MRI fringe field, with a maximum current loss of 6% compared to 85% for the conventional system. Conclusion: The bespoke electron accelerator is robust to operation in in-line magnetic fields. This will enable MRI-Linacs with no accelerator magnetic shielding, and minimise painstaking optimisation of the MRI fringe field. This work was supported by US (NIH) and Australian (NHMRC & Cancer Institute NSW) government research funding. In addition, I would like to thank Cancer Institute NSW and the Ingham Institute for scholarship support.
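Richardson's law, cited above for the cathode design, gives the emitted current density as J = A·T²·exp(−W/kT). A quick evaluation with the quoted cathode parameters is sketched below; note the result is extremely sensitive to the work function and temperature, and effects not modeled here (geometry, space charge, the exact emission constant units) mean it should be read as an order-of-magnitude check, not a reproduction of the abstract's 24 mA/mm² figure.

```python
import math

# Richardson's law for thermionic emission, J = A * T^2 * exp(-W / kT).
# Cathode numbers taken from the abstract; treat the output as an
# order-of-magnitude sketch only.
K_BOLTZMANN_EV = 8.617e-5  # eV/K

def richardson_current_density(a_const, temp_k, work_function_ev):
    """Emitted current density in A/cm^2 for a_const in A/(cm^2 K^2)."""
    return a_const * temp_k ** 2 * math.exp(-work_function_ev / (K_BOLTZMANN_EV * temp_k))

j = richardson_current_density(a_const=60.0, temp_k=1245.0, work_function_ev=1.8)
print(f"{j:.1f} A/cm^2")
```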

  16. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution, based on a perturbation technique combined with the discrete ordinate method, is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
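The geometric point above can be illustrated with a single homogeneous shell: the true slant path of the direct beam through a shell of thickness h above a sphere of radius R diverges from the plane-parallel estimate h/cos(θ) as the zenith angle grows, and the plane-parallel form breaks down entirely at 90 deg. This one-shell sketch is only the geometry, not the paper's perturbation/discrete-ordinate solution; R and h are illustrative.

```python
import math

# Slant path through a single shell of thickness h above radius R,
# versus the plane-parallel path h / cos(zenith).

def spherical_path(radius, thickness, zenith_deg):
    z = math.radians(zenith_deg)
    return math.sqrt((radius + thickness) ** 2 - (radius * math.sin(z)) ** 2) - radius * math.cos(z)

def plane_parallel_path(thickness, zenith_deg):
    return thickness / math.cos(math.radians(zenith_deg))

R, h = 6371.0, 50.0  # km; Earth radius and an illustrative stratospheric shell
for zen in (0.0, 60.0, 85.0):
    print(zen, round(spherical_path(R, h, zen), 1), round(plane_parallel_path(h, zen), 1))
```

At the zenith the two agree exactly (path = h); near 90 deg the plane-parallel secant overestimates the path, and beyond 90 deg it is undefined while the spherical chord remains finite.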

  17. A Secular Variation Model for IGRF-12 Based on Swarm Data and Inverse Geodynamo Modelling

    NASA Astrophysics Data System (ADS)

    Fournier, A.; Aubert, J.; Erwan, T.

    2014-12-01

    We propose a secular variation candidate model for the 12th generation of the International Geomagnetic Reference Field, spanning the years 2015-2020. The novelty of our approach lies in the initialization of a 5-yr long integration of a numerical model of Earth's dynamo by means of inverse geodynamo modelling, as introduced by Aubert (GJI, 2014). This inverse technique combines the information coming from the observations (in the form of an instantaneous estimate of the Gauss coefficients for the magnetic field and its secular variation) with that coming from the multivariate statistics of a free run of a numerical model of the geodynamo. The Gauss coefficients and their error covariance properties are determined from Swarm data along the lines detailed by Thébault et al. (EPS, 2010). The numerical model of the geodynamo is the so-called Coupled Earth Dynamo model (Aubert et al., Nature, 2013), whose variability possesses a strong level of similarity with that of the geomagnetic field. We illustrate and assess the potential of this methodology by applying it to recent time intervals, with an initialization based on CHAMP data, and conclude by presenting our SV candidate, whose initialization is based on the first year of Swarm data. This work is supported by the French "Agence Nationale de la Recherche" under the grant ANR-11-BS56-011 (http://avsgeomag.ipgp.fr) and by the CNES. References: Aubert, J., Geophys. J. Int. 197, 1321-1334, 2014, doi: 10.1093/gji/ggu064. Aubert, J., Finlay, C., Fournier, A., Nature 502, 219-223, 2013, doi: 10.1038/nature12574. Thébault, E., A. Chulliat, S. Maus, G. Hulot, B. Langlais, A. Chambodut and M. Menvielle, Earth Planets Space, Vol. 62 (No. 10), pp. 753-763, 2010.

  18. AVHRR-Based Polar Pathfinder Products: Evaluation, Enhancement, and Transition to MODIS

    NASA Technical Reports Server (NTRS)

    Fowler, Charles; Maslanik, James; Stone, Robert; Stroeve, Julienne; Emery, William

    2001-01-01

    The AVHRR-Based Polar Pathfinder (APP) products include calibrated AVHRR channel data, surface temperatures, albedo, satellite scan and solar geometries, and a cloud mask composited into twice-per-day images, and daily averaged fields of sea ice motion, for regions poleward of 50 deg. latitude. Our goals under this grant, in general, are four-fold: 1. To quantify the APP accuracy and sources of error by comparing Pathfinder products with field measurements. 2. To determine the consistency of mean fields and trends in comparison with longer time series of available station data and forecast model output. 3. To investigate the consistency of the products between the different AVHRR instruments over the 1982-present period of the NOAA program. 4. To compare an annual cycle of the AVHRR Pathfinder products with MODIS to establish a baseline for extending Pathfinder-type products into the new ESE period. Year One tasks include intercomparisons of the Pathfinder products with field measurements, testing of algorithm assumptions, collection of field data, and further validation and possible improvement of the multi-sensor ice motion fields. Achievements for these tasks are summarized below.

  19. Model to interpret pulsed-field-gradient NMR data including memory and superdispersion effects.

    PubMed

    Néel, Marie-Christine; Bauer, Daniela; Fleury, Marc

    2014-06-01

    We propose a versatile model specifically designed for the quantitative interpretation of NMR velocimetry data. We use the concept of mobile or immobile tracer particles applied in dispersion theory in its Lagrangian form, adding two mechanisms: (i) independent random arrests of finite average duration, representing intermittent periods spent in very low velocity zones in the mean flow direction, and (ii) the possibility of unexpectedly long (but rare) displacements, simulating the occurrence of very high velocities in the porous medium. Based on mathematical properties related to subordinated Lévy processes, we give analytical expressions for the signals recorded in pulsed-field-gradient NMR experiments. We illustrate how to use the model for quantifying dispersion from NMR data recorded for water flowing through a homogeneous grain pack column in single- and two-phase flow conditions.
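The two mechanisms described above can be caricatured as a Lagrangian random walk: mean advection, interrupted by exponentially distributed (finite-mean) arrests, plus occasional heavy-tailed (Pareto) jumps. This is only a simulation-style caricature of the ingredients; the paper itself works analytically with subordinated Lévy processes, and every parameter below is an illustrative assumption.

```python
import random

# Caricature of mobile/immobile tracer transport: (i) exponential arrests
# (clock advances, position frozen) and (ii) rare Pareto-distributed jumps
# superposed on mean advection. Parameters are illustrative, not fitted.

def simulate_tracer(t_end, v_mean=1.0, arrest_rate=0.5, mean_arrest=0.4,
                    jump_rate=0.1, alpha=1.5, rng=None):
    rng = rng or random.Random(0)
    t, x, dt = 0.0, 0.0, 0.01
    while t < t_end:
        x += v_mean * dt                            # mean advection
        if rng.random() < arrest_rate * dt:         # enter a low-velocity zone
            t += rng.expovariate(1.0 / mean_arrest)  # time passes, x frozen
        if rng.random() < jump_rate * dt:           # rare long displacement
            x += rng.paretovariate(alpha)
        t += dt
    return x

positions = [simulate_tracer(10.0, rng=random.Random(seed)) for seed in range(200)]
mean_x = sum(positions) / len(positions)
print(round(mean_x, 2))
```

The arrests retard the mean displacement below the purely advective value, while the heavy-tailed jumps add the "superdispersive" forward tail.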

  20. Mean-field scaling of the superfluid to Mott insulator transition in a 2D optical superlattice.

    NASA Astrophysics Data System (ADS)

    Okano, Masayuki; Thomas, Claire; Barter, Thomas; Leung, Tsz-Him; Jo, Gyu-Boong; Guzman, Jennie; Kimchi, Itamar; Vishwanath, Ashvin; Stamper-Kurn, Dan

    2017-04-01

    Quantum gases within optical lattices provide a nearly ideal experimental representation of the Bose-Hubbard model. The mean-field treatment of this model predicts properties of non-zero temperature lattice-trapped gases to be insensitive to the specific lattice geometry once system energies are scaled by the lattice coordination number z. We examine an ultracold Bose gas of rubidium atoms prepared within a two-dimensional lattice whose geometry can be tuned between two configurations, triangular and kagome, for which z varies from six to four, respectively. Measurements of the coherent fraction of the gas thereby provide a quantitative test of the mean-field scaling prediction. We observe the suppression of superfluidity upon decreasing z, and find our results to be consistent with the predicted mean-field scaling. These optical lattice systems can offer a way to study paradigmatic solid-state phenomena in highly controlled crystal structures. This work was supported by the NSF and by the Army Research Office with funding from the DARPA OLE program.

  1. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johanna H Oxstrand; Katya L Le Blanc

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes.
Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures, as well as which features should not be incorporated. The model also provides a means to identify what new features, not present in paper-based procedures, need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.

  2. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
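The core idea above — event times as linear polynomials over the reals in a symbolic parameter, so that which event fires first depends on the parameter and trajectories branch — can be sketched with a one-variable feasibility check. This is a simplified illustration of the comparison step only, not the DEVS formalism itself; the function name and the closed/open interval handling are assumptions for the sketch.

```python
# Event times as linear polynomials a + b*x in a symbolic parameter x >= 0.
# Which event fires first depends on x; comparing two symbolic times reduces
# to finding the feasible sub-interval of x, and an empty interval prunes
# that branch of the trajectory tree.

def fires_first_interval(t1, t2, x_max=float("inf")):
    """Sub-interval of [0, x_max] where t1 = a1 + b1*x < t2 = a2 + b2*x, or None."""
    (a1, b1), (a2, b2) = t1, t2
    da, db = a1 - a2, b1 - b2          # t1 - t2 = da + db*x
    if db == 0:
        return (0.0, x_max) if da < 0 else None
    root = -da / db
    if db > 0:                          # difference increasing: holds below the root
        return (0.0, min(root, x_max)) if root > 0 else None
    lo = max(root, 0.0)                 # difference decreasing: holds above the root
    return (lo, x_max) if lo < x_max else None

# Event A at time 3 + 2x, event B at time 5 + x: A fires first only while x < 2.
print(fires_first_interval((3.0, 2.0), (5.0, 1.0)))  # -> (0.0, 2.0)
```

For x < 2 the simulator follows the "A first" branch and for x > 2 the "B first" branch, which is exactly the branching-trajectory behavior the abstract describes.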

  3. Upscaling of dilution and mixing using a trajectory based Spatial Markov random walk model in a periodic flow domain

    NASA Astrophysics Data System (ADS)

    Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo

    2017-05-01

    The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories, rather than travel times. By applying the proposed SMM to a simple benchmark problem, we demonstrate that it can predict mean effective transport when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not depend solely on mean concentrations but are strongly impacted by pore-scale concentration fluctuations. We use information from particle trajectories to downscale and reconstruct approximate pore-scale concentration fields, from which mixing and dilution measures are then calculated. Predictions of the SMM agree very favorably with measurements from fully resolved simulations.
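The defining feature of an SMM is that a particle's travel property in the next spatial step is correlated with the current one through a Markov transition matrix. A two-class sketch is given below: a particle crosses successive cells of fixed length, and its velocity class in each cell is drawn conditioned on the previous class. The transition matrix, velocities, and cell count are illustrative; the paper's variant is trajectory-based and far richer.

```python
import random

# Minimal Spatial Markov Model: velocity class in the next cell depends on
# the class in the current cell, capturing the spatial correlation that
# distinguishes the SMM from an uncorrelated random walk. Two classes only.

P = {  # transition probabilities between velocity classes (illustrative)
    "fast": {"fast": 0.8, "slow": 0.2},
    "slow": {"fast": 0.3, "slow": 0.7},
}
VEL = {"fast": 2.0, "slow": 0.5}

def arrival_time(n_cells, dx=1.0, rng=None):
    """Time for one particle to cross n_cells cells of length dx."""
    rng = rng or random.Random(0)
    state, t = "fast", 0.0
    for _ in range(n_cells):
        t += dx / VEL[state]
        state = "fast" if rng.random() < P[state]["fast"] else "slow"
    return t

times = [arrival_time(100, rng=random.Random(s)) for s in range(500)]
print(round(sum(times) / len(times), 1))
```

With these numbers the stationary class probabilities are 0.6 (fast) and 0.4 (slow), so the mean time per unit cell is 0.6/2.0 + 0.4·2.0 = 1.1, i.e. about 110 over 100 cells.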

  4. Toward polarizable AMOEBA thermodynamics at fixed charge efficiency using a dual force field approach: application to organic crystals.

    PubMed

    Nessler, Ian J; Litman, Jacob M; Schnieders, Michael J

    2016-11-09

    First principles prediction of the structure, thermodynamics and solubility of organic molecular crystals, which play a central role in chemical, material, pharmaceutical and engineering sciences, challenges both potential energy functions and sampling methodologies. Here we calculate absolute crystal deposition thermodynamics using a novel dual force field approach whose goal is to maintain the accuracy of advanced multipole force fields (e.g. the polarizable AMOEBA model) while performing more than 95% of the sampling in an inexpensive fixed charge (FC) force field (e.g. OPLS-AA). Absolute crystal sublimation/deposition phase transition free energies were determined using an alchemical path that grows the crystalline state from a vapor reference state based on sampling with the OPLS-AA force field, followed by dual force field thermodynamic corrections to change between FC and AMOEBA resolutions at both end states (we denote the three-step path as AMOEBA/FC). Importantly, whereas the phase transition requires on the order of 200 ns of sampling per compound, only 5 ns of sampling was needed for the dual force field thermodynamic corrections to reach a mean statistical uncertainty of 0.05 kcal mol⁻¹. For five organic compounds, the mean unsigned error between direct use of AMOEBA and the AMOEBA/FC dual force field path was only 0.2 kcal mol⁻¹ and not statistically significant. Compared to experimental deposition thermodynamics, the mean unsigned error for AMOEBA/FC (1.4 kcal mol⁻¹) was more than a factor of two smaller than uncorrected OPLS-AA (3.2 kcal mol⁻¹). Overall, the dual force field thermodynamic corrections reduced condensed phase sampling in the expensive force field by a factor of 40, and may prove useful for protein stability or binding thermodynamics in the future.

  5. Progress in modeling solidification in molten salt coolants

    NASA Astrophysics Data System (ADS)

    Tano, Mauricio; Rubiolo, Pablo; Doche, Olivier

    2017-10-01

    Molten salts have been proposed as heat carrier media in nuclear and concentrating solar power plants. Due to their high melting temperatures, solidification of the salts is expected to occur during routine and accidental scenarios. Furthermore, passive safety systems based on the solidification of these salts are being studied. The following article presents new developments in the modeling of eutectic molten salts by means of a multiphase, multicomponent phase-field model. In addition, an application of this methodology to the eutectic solidification process of the ternary system LiF-KF-NaF is presented. The model predictions are compared with a newly developed semi-analytical solution for directional eutectic solidification at a stable growth rate. A good qualitative agreement is obtained between the two approaches. The results obtained with the phase-field model are then used to calculate the homogenized properties of the solid phase distribution. These properties can then be included in a mixture macroscale model, more suitable for industrial applications.
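The essence of the phase-field approach mentioned above is an order parameter that varies smoothly across a diffuse solid-liquid interface. A minimal 1-D Allen-Cahn sketch is given below, with φ = 1 for solid and φ = 0 for liquid evolving by φ_t = ε²φ_xx − W′(φ) for the double well W = φ²(1−φ)². All parameters are illustrative; the paper's model is multiphase and multicomponent with coupled species transport.

```python
# 1-D Allen-Cahn sketch: a sharp solid-liquid front relaxes to a diffuse,
# stationary interface. Explicit finite differences; end points held fixed.

def allen_cahn_step(phi, dx=0.1, dt=0.001, eps2=0.01):
    new = phi[:]  # boundary values phi[0], phi[-1] act as fixed conditions
    for i in range(1, len(phi) - 1):
        lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / dx ** 2
        dw = 2.0 * phi[i] * (1.0 - phi[i]) * (1.0 - 2.0 * phi[i])  # W'(phi)
        new[i] = phi[i] + dt * (eps2 * lap - dw)
    return new

phi = [1.0] * 20 + [0.0] * 20  # solid on the left, liquid on the right
for _ in range(2000):
    phi = allen_cahn_step(phi)
print(round(phi[19], 2), round(phi[20], 2))  # diffuse values either side of the midpoint
```

Because the two wells of W have equal depth, the interface equilibrates in place rather than propagating; adding a driving-force term is what would model actual solidification fronts.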

  6. Modeling and calculation of turbulent lifted diffusion flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J.P.H.; Lamers, A.P.G.G.

    1994-01-01

    Liftoff heights of turbulent diffusion flames have been modeled using the laminar diffusion flamelet concept of Peters and Williams. The strain rate of the smallest eddies is used as the stretch-describing parameter, instead of the more common scalar dissipation rate. The h(U) curve, which is the mean liftoff height as a function of fuel exit velocity, can be accurately predicted, while this was impossible with the scalar dissipation rate. Liftoff calculations performed in the flames as well as in the equivalent isothermal jets, using a standard k-ε turbulence model, yield approximately the same correct slope for the h(U) curve, while the offset has to be reproduced by choosing an appropriate coefficient in the strain rate model. For the flame calculations a model for the pdf of the fluctuating flame base is proposed. The results are insensitive to its width. The temperature field is qualitatively different from the field calculated by Bradley et al., who used a premixed flamelet model for diffusion flames.

  7. Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area

    USGS Publications Warehouse

    Wang, A.; Moore, J.C.; Cui, Xingquan; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D.M.; McGuire, A.D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.

    2016-01-01

    We perform a land-surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land-surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, and air and surface frost indexes). There is good agreement (99 to 135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1 to 128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification, vegetation types and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0 °C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and surface frost index, analysis of simulated soil temperature profiles reveals substantial biases.
The current generation of land-surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in future permafrost distribution can be made for the Tibetan Plateau.
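Two of the air-temperature-based diagnoses mentioned above can be sketched directly from monthly mean air temperatures: a mean-annual-air-temperature criterion, and a frost index F = √DDF / (√DDF + √DDT) built from freezing and thawing degree-days, with permafrost diagnosed where F ≥ 0.5. The exact thresholds used in the intercomparison and the example temperature series below are illustrative assumptions.

```python
import math

# Frost-index and MAAT diagnoses from monthly mean air temperatures (deg C).
# DDF/DDT are freezing/thawing degree-days; F >= 0.5 means DDF >= DDT.

DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def maat(monthly_t):
    """Mean annual air temperature, weighted by month length."""
    return sum(t * d for t, d in zip(monthly_t, DAYS)) / sum(DAYS)

def frost_number(monthly_t):
    ddf = sum(-t * d for t, d in zip(monthly_t, DAYS) if t < 0)  # freezing degree-days
    ddt = sum(t * d for t, d in zip(monthly_t, DAYS) if t > 0)   # thawing degree-days
    return math.sqrt(ddf) / (math.sqrt(ddf) + math.sqrt(ddt))

# A hypothetical cold plateau site: strongly negative winters, short mild summer.
cold_site = [-18, -16, -10, -4, 1, 6, 9, 8, 3, -4, -12, -16]
print(round(maat(cold_site), 1), round(frost_number(cold_site), 2))
```

For this series DDF far exceeds DDT, so both diagnoses flag permafrost; sites near F = 0.5 are exactly where the diagnostic methods disagree most.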

  8. Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area

    NASA Astrophysics Data System (ADS)

    Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.

    2016-02-01

    We perform a land-surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land-surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, and air and surface frost indexes). There is good agreement (99 to 135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1 to 128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification, vegetation types and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0 °C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and surface frost index, analysis of simulated soil temperature profiles reveals substantial biases.
The current generation of land-surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in future permafrost distribution can be made for the Tibetan Plateau.

  9. Modeling of the charge-state separation at ITEP experimental facility for material science based on a Bernas ion source.

    PubMed

    Barminova, H Y; Saratovskyh, M S

    2016-02-01

    An experiment automation system is under development for the ITEP experimental facility for material science, which is based on a Bernas ion source. The program CAMFT is to be incorporated into the experiment automation system. CAMFT is designed to simulate intense charged-particle bunch motion in external magnetic fields of arbitrary geometry by means of an accurate solution of the particle equations of motion. The program allows consideration of bunch intensities up to 10^10 ppb. Preliminary calculations were performed on the ITEP supercomputer. Results of the simulation of the beam pre-acceleration and subsequent turn in the magnetic field are presented for different initial conditions.
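CAMFT's actual integration scheme is not described in the abstract; below is a hedged sketch of one standard way to solve the particle equation of motion in a magnetic field, the Boris rotation (shown here for a pure, uniform field, no electric field). The particle parameters are illustrative.

```python
import math

# Boris rotation for a charged particle in a uniform magnetic field.
# qm is the charge-to-mass ratio; the rotation conserves speed exactly.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def boris_push(pos, vel, b_field, qm, dt):
    """Advance position and velocity by one time step dt."""
    t = [0.5 * qm * dt * b for b in b_field]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    v_prime = [vel[i] + cross(vel, t)[i] for i in range(3)]
    v_new = [vel[i] + cross(v_prime, s)[i] for i in range(3)]
    return [pos[i] + v_new[i] * dt for i in range(3)], v_new

# Proton-like test particle gyrating about a field along z (illustrative values).
pos, vel = [0.0, 0.0, 0.0], [1.0e5, 0.0, 0.0]   # m, m/s
qm, b = 9.58e7, [0.0, 0.0, 0.1]                  # C/kg, tesla
speed0 = math.sqrt(sum(v * v for v in vel))
for _ in range(1000):
    pos, vel = boris_push(pos, vel, b, qm, dt=1e-9)
speed1 = math.sqrt(sum(v * v for v in vel))       # unchanged by the rotation
```

The rotation formulation keeps |v| constant in a pure magnetic field, which is the property that makes Boris-type pushers the workhorse for long beam-dynamics integrations.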

  10. Modeling of the charge-state separation at ITEP experimental facility for material science based on a Bernas ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barminova, H. Y., E-mail: barminova@bk.ru; Saratovskyh, M. S.

    2016-02-15

    An experiment automation system is under development for the ITEP experimental facility for material science, which is based on a Bernas ion source. The program CAMFT is to be incorporated into the experiment automation system. CAMFT is designed to simulate intense charged-particle bunch motion in external magnetic fields of arbitrary geometry by means of an accurate solution of the particle equations of motion. The program allows consideration of bunch intensities up to 10^10 ppb. Preliminary calculations were performed on the ITEP supercomputer. Results of the simulation of the beam pre-acceleration and subsequent turn in the magnetic field are presented for different initial conditions.

  11. Momentum-resolved spectroscopy of a Fermi liquid

    PubMed Central

    Doggen, Elmer V. H.; Kinnunen, Jami J.

    2015-01-01

    We consider a recent momentum-resolved radio-frequency spectroscopy experiment, in which Fermi liquid properties of a strongly interacting atomic Fermi gas were studied. Here we show that by extending the Brueckner-Goldstone model, we can formulate a theory that goes beyond basic mean-field theories and that can be used for studying spectroscopies of dilute atomic gases in the strongly interacting regime. The model hosts well-defined quasiparticles and works across a wide range of temperatures and interaction strengths. The theory provides excellent qualitative agreement with the experiment. Comparing the predictions of the present theory with the mean-field Bardeen-Cooper-Schrieffer theory yields insights into the role of pair correlations, Tan's contact, and the Hartree mean-field energy shift. PMID:25941948

  12. 3-D residual eddy current field characterisation: applied to diffusion weighted magnetic resonance imaging.

    PubMed

    O'Brien, Kieran; Daducci, Alessandro; Kickler, Nils; Lazeyras, Francois; Gruetter, Rolf; Feiweier, Thorsten; Krueger, Gunnar

    2013-08-01

    Clinical use of Stejskal-Tanner diffusion-weighted images is hampered by the geometric distortions that result from the large residual 3-D eddy current field induced. In this work, we aimed to predict, using linear response theory, the residual 3-D eddy current field required for geometric distortion correction, based on phantom eddy current field measurements. The predicted 3-D eddy current field induced by the diffusion-weighting gradients was able to reduce the root mean square error of the residual eddy current field to ~1 Hz. The model's performance was tested on diffusion-weighted images of four normal volunteers; following distortion correction, the Stejskal-Tanner diffusion-weighted images were found to be of comparable quality to those corrected with image-registration-based methods (FSL) at low b-values. Unlike registration techniques, the correction was not hindered by low SNR at high b-values, and resulted in improved image quality relative to FSL. Characterization of the 3-D eddy current field with linear response theory enables the prediction of the 3-D eddy current field required to correct eddy-current-induced geometric distortions for a wide range of clinical and high b-value protocols.

  13. Heterogeneous voter models

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Gibert, N.; Redner, S.

    2010-07-01

    We introduce the heterogeneous voter model (HVM), in which each agent has its own intrinsic rate to change state, reflective of the heterogeneity of real people, and the partisan voter model (PVM), in which each agent has an innate and fixed preference for one of two possible opinion states. For the HVM, the time until consensus is reached is much longer than in the classic voter model. For the PVM in the mean-field limit, a population evolves to a preference-based state, where each agent tends to be aligned with its internal preference. For finite populations, discrete fluctuations ultimately lead to consensus being reached in a time that scales exponentially with population size.
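    As a point of comparison for the HVM's slowed consensus, the classic homogeneous voter model on a complete graph can be simulated in a few lines; the function name and parameters below are ours, a minimal sketch rather than the authors' implementation.

```python
import random

def voter_consensus_time(n, seed=0):
    """Classic voter model on a complete graph: at each step a randomly
    chosen agent copies the opinion of another randomly chosen agent.
    Returns the number of update steps until consensus is reached."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while 0 < sum(state) < n:          # loop until all agents agree
        i, j = rng.randrange(n), rng.randrange(n)
        state[i] = state[j]
        steps += 1
    return steps

# Averaging over realizations shows the consensus time growing with n; this is
# the baseline against which the HVM's much longer times are compared.
mean_small = sum(voter_consensus_time(10, seed=s) for s in range(20)) / 20
mean_large = sum(voter_consensus_time(50, seed=s) for s in range(20)) / 20
```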

  14. Strong and Flexible: Developing a Three-Tiered Curriculum for the Regional Central America Field Epidemiology Training Program

    PubMed Central

    Traicoff, Denise A.; Suarez-Rangel, Gloria; Espinosa-Wilkins, Yescenia; Lopez, Augusto; Diaz, Anaite; Caceres, Victor

    2017-01-01

    Field Epidemiology Training Programs (FETPs) are recognized worldwide as an effective means to strengthen countries’ capacity in epidemiology, surveillance, and outbreak response. FETPs are field-based, with minimum classroom time and maximum time in the field, providing public health services while participants achieve competency. The Central America FETP (CAFETP) uses a three-level pyramid model: basic, intermediate, and advanced. In 2006, a multidisciplinary team used a methodical process based on adult learning practices to construct a competency-based curriculum for the CAFETP. The curriculum was designed based on the tasks related to disease surveillance and field epidemiology that public health officers would conduct at multiple levels in the system. The team used a design process that engaged subject matter experts and considered the unique perspective of each country. The designers worked backwards from the competencies to define field activities, evaluation methods, and classroom components. The 2006 pyramid curriculum has been accredited for a master’s of science in field epidemiology by the Universidad del Valle de Guatemala and has been adapted by programs around the world. The team found the time and effort spent to familiarize subject matter experts with key adult learning principles was worthwhile because it provided a common framework to approach curriculum design. Early results of the redesigned curriculum indicate that the CAFETP supports consistent quality while allowing for specific country needs. PMID:28702503

  15. Monthly mean large-scale analyses of upper-tropospheric humidity and wind field divergence derived from three geostationary satellites

    NASA Technical Reports Server (NTRS)

    Schmetz, Johannes; Menzel, W. Paul; Velden, Christopher; Wu, Xiangqian; Vandeberg, Leo; Nieman, Steve; Hayden, Christopher; Holmlund, Kenneth; Geijo, Carlos

    1995-01-01

    This paper describes the results from a collaborative study between the European Space Operations Center, the European Organization for the Exploitation of Meteorological Satellites, the National Oceanic and Atmospheric Administration, and the Cooperative Institute for Meteorological Satellite Studies investigating the relationship between satellite-derived monthly mean fields of wind and humidity in the upper troposphere for March 1994. Three geostationary meteorological satellites (GOES-7, Meteosat-3, and Meteosat-5) are used to cover an area from roughly 160 deg W to 50 deg E. The wind fields are derived from tracking features in successive images of upper-tropospheric water vapor (WV) as depicted in the 6.5-micron absorption band. The upper-tropospheric relative humidity (UTH) is inferred from measured water vapor radiances with a physical retrieval scheme based on radiative forward calculations. Quantitative information on large-scale circulation patterns in the upper troposphere is possible with the dense spatial coverage of the WV wind vectors. The monthly mean wind field is used to estimate the large-scale divergence; values range between about -5 x 10(exp -6)/s and 5 x 10(exp -6)/s when averaged over a scale length of about 1000-2000 km. The spatial patterns of the UTH field and the divergence of the wind field closely resemble one another, suggesting that UTH patterns are principally determined by the large-scale circulation. Since the upper-tropospheric humidity absorbs upwelling radiation from lower-tropospheric levels and therefore contributes significantly to the atmospheric greenhouse effect, this work implies that studies on the climate relevance of water vapor should include three-dimensional modeling of the atmospheric dynamics. The fields of UTH and WV winds are useful parameters for a climate-monitoring system based on satellite data.
The results from this 1-month analysis suggest the desirability of further GOES and Meteosat studies to characterize the changes in the upper-tropospheric moisture sources and sinks over the past decade.

  16. Effect of lost charged particles on the breakdown characteristics of the gaseous electrical discharge in non-uniform axial electric field

    NASA Astrophysics Data System (ADS)

    Noori, H.; Ranjbar, A. H.

    2017-10-01

    The secondary emission coefficient is a valuable parameter for numerical modeling of the discharge process in gaseous insulation. A theoretical model has been developed to consider the effects of the radial electric field, the non-uniformity of the axial electric field, and the radial diffusion of charged particles on the secondary emission coefficient. In the model, a modified breakdown criterion is employed to determine the effective secondary electron emission, γeff. Using the geometry factor gi, introduced based on the effect of radial diffusion of charged particles on the fraction of ions that arrive at the cathode, the geometry-independent term of γeff (Δi) was obtained as a function of the energy of the ions incident on the cathode. The results show that Δi is approximately a unique function of the ion energy for the ratios d/R = 39, 50, 77, 115, and 200. This means that the mechanisms considered in the model are responsible for the deviation from Paschen's law.

  17. Unbinding slave spins in the Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Guerci, Daniele; Fabrizio, Michele

    2017-11-01

    We show that a generic single-orbital Anderson impurity model, lacking, for instance, any kind of particle-hole symmetry, can be exactly mapped without any constraint onto a resonant level model coupled to two Ising variables, which reduce to one if the hybridization is particle-hole symmetric. The mean-field solution of this model is found to be stable to unphysical spontaneous magnetization of the impurity, unlike the saddle-point solution in the standard slave-boson representation. Remarkably, the mean-field estimate of the Wilson ratio approaches the exact value RW=2 in the Kondo regime.

  18. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for fully coupled model systems.

  19. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM), where EM estimates the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3 the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
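    The pattern-extraction half of this pipeline (PCA within a cluster) can be sketched with a plain SVD; the synthetic 52-point fields and the "focal defect" pattern below are hypothetical stand-ins for FDT data, and in the full pipeline the cluster memberships would come from an EM-fitted Gaussian mixture.

```python
import numpy as np

def principal_patterns(fields, n_axes=2):
    """SVD-based PCA: extract the dominant defect patterns (principal axes)
    from sensitivity vectors, one row per eye (52 test points per eye).
    Returns (axes, explained_variance_ratio)."""
    X = np.asarray(fields, dtype=float)
    Xc = X - X.mean(axis=0)                     # centre each of the 52 points
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)
    return Vt[:n_axes], var_ratio[:n_axes]

# Synthetic illustration: 100 "eyes", each a 52-point field carrying one
# dominant (hypothetical) focal-loss pattern plus measurement noise.
rng = np.random.default_rng(0)
pattern = np.zeros(52)
pattern[:10] = -5.0                             # focal sensitivity loss
fields = rng.normal(30.0, 1.0, (100, 52)) + rng.normal(0.0, 2.0, (100, 1)) * pattern
axes, ratio = principal_patterns(fields, n_axes=2)
```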

  20. Depiction of pneumothoraces in a large animal model using x-ray dark-field radiography.

    PubMed

    Hellbach, Katharina; Baehr, Andrea; De Marco, Fabio; Willer, Konstantin; Gromann, Lukas B; Herzen, Julia; Dmochewitz, Michaela; Auweter, Sigrid; Fingerle, Alexander A; Noël, Peter B; Rummeny, Ernst J; Yaroshenko, Andre; Maack, Hanns-Ingo; Pralow, Thomas; van der Heijden, Hendrik; Wieberneit, Nataly; Proksa, Roland; Koehler, Thomas; Rindt, Karsten; Schroeter, Tobias J; Mohr, Juergen; Bamberg, Fabian; Ertl-Wagner, Birgit; Pfeiffer, Franz; Reiser, Maximilian F

    2018-02-08

    The aim of this study was to assess the diagnostic value of x-ray dark-field radiography for detecting pneumothoraces in a pig model. Eight pigs were imaged with an experimental grating-based large-animal dark-field scanner before and after induction of a unilateral pneumothorax. Image contrast-to-noise ratios between lung tissue and the air-filled pleural cavity were quantified for transmission and dark-field radiograms. The projected area in the object plane of the inflated lung was measured in dark-field images to quantify the collapse of lung parenchyma due to a pneumothorax. Means and standard deviations for lung sizes and signal intensities from dark-field and transmission images were tested for statistical significance using Student's two-tailed t-test for paired samples. The contrast-to-noise ratio between the air-filled pleural space of lateral pneumothoraces and lung tissue was significantly higher in the dark-field (3.65 ± 0.9) than in the transmission images (1.13 ± 1.1; p = 0.002). In the case of dorsally located pneumothoraces, a significant decrease (-20.5%; p < 0.0001) in the projected area of inflated lung parenchyma was found after a pneumothorax was induced. The detection of pneumothoraces was therefore facilitated in x-ray dark-field radiography compared to transmission imaging in a large animal model.

  1. Filling-enforced nonsymmorphic Kondo semimetals in two dimensions

    NASA Astrophysics Data System (ADS)

    Pixley, J. H.; Lee, SungBin; Brandom, B.; Parameswaran, S. A.

    2017-08-01

    We study the competition between Kondo screening and frustrated magnetism on the nonsymmorphic Shastry-Sutherland Kondo lattice at a filling of two conduction electrons per unit cell. This model is known to host a set of gapless partially Kondo screened phases intermediate between the Kondo-destroyed paramagnet and the heavy Fermi liquid. Based on crystal symmetries, we argue that (i) both the paramagnet and the heavy Fermi liquid are semimetals protected by a glide symmetry; and (ii) partial Kondo screening breaks the symmetry, removing this protection and allowing the partially Kondo screened phase to be deformed into a Kondo insulator via a Lifshitz transition. We confirm these results using large-N mean-field theory and then use nonperturbative arguments to derive a generalized Luttinger sum rule constraining the phase structure of two-dimensional nonsymmorphic Kondo lattices beyond the mean-field limit.

  2. A mean-field theory for self-propelled particles interacting by velocity alignment mechanisms

    NASA Astrophysics Data System (ADS)

    Peruani, F.; Deutsch, A.; Bär, M.

    2008-04-01

    A mean-field approach (MFA) is proposed for the analysis of orientational order in a two-dimensional system of stochastic self-propelled particles interacting by a local velocity alignment mechanism. The treatment is applied to the cases of ferromagnetic (F) and liquid-crystal (LC) alignment. In both cases, the MFA yields a second-order phase transition at a critical noise strength and a scaling exponent of 1/2 for the respective order parameters. We find that the critical noise amplitude ηc at which orientational order emerges in the LC case is smaller than in the F-alignment case, i.e., ηc(LC) < ηc(F). A comparison with simulations of individual-based models with F- and LC-alignment, respectively, shows that the predictions about the critical behavior and the qualitative relation between the respective critical noise amplitudes are correct.
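    The qualitative content of the F-alignment prediction, orientational order collapsing as the noise amplitude grows, can be caricatured in the all-to-all (mean-field) limit. The update rule, parameters, and function name below are our illustrative simplification, not the model analyzed in the paper.

```python
import numpy as np

def order_parameter(noise, n=2000, steps=200, seed=1):
    """Mean-field ferromagnetic alignment caricature: every particle relaxes
    toward the instantaneous mean direction, perturbed by uniform angular
    noise of amplitude `noise` (in units of pi). Returns the final polar
    order parameter |<exp(i*theta)>|."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        mean_dir = np.angle(np.mean(np.exp(1j * theta)))
        theta = mean_dir + noise * rng.uniform(-np.pi, np.pi, n)
    return abs(np.mean(np.exp(1j * theta)))
```

    At weak noise the order parameter sits near 1; at noise amplitude 1 the directions are fully randomized and the order parameter drops to O(1/sqrt(n)).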

  3. Mean-field theory of differential rotation in density stratified turbulent convection

    NASA Astrophysics Data System (ADS)

    Rogachevskii, I.

    2018-04-01

    A mean-field theory of differential rotation in density stratified turbulent convection has been developed. This theory is based on the combined effects of the turbulent heat flux and the anisotropy of turbulent convection on the Reynolds stress. A coupled system of dynamical budget equations, consisting of the equations for the Reynolds stress, the entropy fluctuations, and the turbulent heat flux, has been solved. To close this system of equations, the spectral approach, which is valid for large Reynolds and Péclet numbers, has been applied. The adopted model of the background turbulent convection takes into account an increase of the turbulence anisotropy and a decrease of the turbulent correlation time with the rotation rate. This theory yields the radial profile of the differential rotation, which is in agreement with that of the solar differential rotation.

  4. Evolution of vector magnetic fields and the August 27 1990 X-3 flare

    NASA Technical Reports Server (NTRS)

    Wang, Haimin

    1992-01-01

    Vector magnetic fields in an active region of the sun are studied by means of continuous observations of magnetic-field evolution emphasizing magnetic shear build-up. The vector magnetograms are shown to measure magnetic fields correctly based on concurrent observations and a comparison of the transverse field with the H alpha fibril structure. The morphology and velocity pattern are examined, and these data and the shear build-up suggest that the active region's two major footprints are separated by a region with flows, new flux emergence, and several neutral lines. The magnetic shear appears to be caused by the collision and shear motion of two poles of opposite polarities. The transverse field is shown to turn from potential to sheared during the process of flux cancellation, and this effect can be incorporated into existing models of magnetic flux cancellation.

  5. An interacting spin-flip model for one-dimensional proton conduction

    NASA Astrophysics Data System (ADS)

    Chou, Tom

    2002-05-01

    A discrete asymmetric exclusion process (ASEP) is developed to model proton conduction along one-dimensional water wires. Each lattice site represents a water molecule that can be in only one of three states: protonated, left-pointing, or right-pointing. Only a right- (left-) pointing water can accept a proton from its left (right). Results of asymptotic mean-field analysis and Monte Carlo simulations for the three-species, open boundary exclusion model are presented and compared. The mean-field results for the steady-state proton current suggest a number of regimes analogous to the low and maximal current phases found in the single-species ASEP (Derrida B 1998 Phys. Rep. 301 65-83). We find that the mean-field results are accurate (compared with lattice Monte Carlo simulations) only in certain regimes. Refinements and extensions, including more elaborate forces and pore defects, are also discussed.
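    The single-species ASEP that the three-species model is compared against admits a compact Monte Carlo sketch; the boundary rates, system size, and function name below are illustrative choices of ours.

```python
import random

def asep_current(n_sites=50, alpha=0.7, beta=0.7, steps=400000, seed=2):
    """Single-species open-boundary ASEP: particles are injected at the left
    boundary with probability alpha, hop one site to the right if the target
    site is empty, and exit at the right boundary with probability beta.
    Returns the steady-state current (exits per unit time, one time unit =
    n_sites + 1 update attempts), measured over the second half of the run."""
    rng = random.Random(seed)
    site = [0] * n_sites
    exits = 0
    for step in range(steps):
        p = rng.randrange(n_sites + 1)
        if p == 0:                                    # injection attempt
            if not site[0] and rng.random() < alpha:
                site[0] = 1
        elif p == n_sites:                            # extraction attempt
            if site[-1] and rng.random() < beta:
                site[-1] = 0
                if step >= steps // 2:                # skip burn-in
                    exits += 1
        elif site[p - 1] and not site[p]:             # bulk hop to the right
            site[p - 1], site[p] = 0, 1
    time_units = (steps // 2) / (n_sites + 1)
    return exits / time_units
```

    With alpha = beta = 0.7 the chain sits in the maximal-current phase, where the exact steady-state current is 1/4; lowering alpha to 0.1 puts it in the low-density phase with current alpha(1 - alpha).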

  6. A new simple dynamo model for solar activity cycle

    NASA Astrophysics Data System (ADS)

    Yokoi, Nobumitsu; Schmitt, Dieter

    2015-04-01

    The solar magnetic activity cycle has been investigated in an elaborated manner with several types of dynamo models [1]. In most current mean-field approaches, the inhomogeneity of the large-scale flow is treated as an essential ingredient in the mean magnetic field equation, whereas it is completely neglected in the turbulence equation. In this work, a new simple model for the solar activity cycle is proposed. The present model differs from previous ones mainly in two points. First, in addition to the helicity coefficient α, we consider a term related to the cross helicity, which represents the effect of the inhomogeneous mean flow, in the turbulent electromotive force [2, 3]. Second, this transport coefficient (γ) is not treated as an adjustable parameter; instead, the evolution equation for γ is solved simultaneously. The basic scenario for the solar activity cycle in this approach is as follows: the toroidal field is induced by the toroidal rotation, mediated by the turbulent cross helicity. Then, due to the α or helicity effect, the poloidal field is generated from the toroidal field. The poloidal field induced by the α effect produces a turbulent cross helicity whose sign is opposite to the original one (negative cross-helicity production). The cross helicity with this opposite sign induces a reversed toroidal field. Results of the eigenvalue analysis of the model equations are shown, which confirm the above scenario. References [1] Charbonneau, Living Rev. Solar Phys. 7, 3 (2010). [2] Yoshizawa, A. Phys. Fluids B 2, 1589 (1990). [3] Yokoi, N. Geophys. Astrophys. Fluid Dyn. 107, 114 (2013).

  7. Mean-field hierarchical equations for some A+BC catalytic reaction models

    NASA Astrophysics Data System (ADS)

    Cortés, Joaquín; Puschmann, Heinrich; Valencia, Eliana

    1998-10-01

    A mean-field study of the (A+BC→AC+1/2B2) system is developed from hierarchical equations, considering mechanisms that include dissociation, reaction with finite rates, desorption, and diffusion of the adsorbed species. The phase diagrams are compared to Monte Carlo simulations.

  8. Computation of misalignment and primary mirror astigmatism figure error of two-mirror telescopes

    NASA Astrophysics Data System (ADS)

    Gu, Zhiyuan; Wang, Yang; Ju, Guohao; Yan, Changxiang

    2018-01-01

    Active optics typically relies on computation models based on numerical methods to correct misalignments and figure errors. These methods can hardly lead to any insight into the aberration field dependencies that arise in the presence of misalignments. An analytical alignment model based on third-order nodal aberration theory is presented for this problem, which can be utilized to compute the primary mirror astigmatic figure error and misalignments for two-mirror telescopes. Alignment simulations are conducted for an R-C telescope based on this analytical alignment model. It is shown that in the absence of wavefront measurement errors, wavefront measurements at only two field points are enough, and the correction process can be completed with a single alignment action. In the presence of wavefront measurement errors, increasing the number of field points for wavefront measurements can enhance the robustness of the alignment model. Monte Carlo simulation shows that, when -2 mm ≤ linear misalignment ≤ 2 mm, -0.1 deg ≤ angular misalignment ≤ 0.1 deg, and -0.2 λ ≤ astigmatism figure error (expressed as fringe Zernike coefficients C5/C6, λ = 632.8 nm) ≤ 0.2 λ, the misaligned systems can be corrected close to the nominal state in the absence of wavefront testing errors. In addition, the root mean square deviation of the RMS wavefront error of all the misaligned samples after correction is linearly related to the wavefront testing error.

  9. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
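    The core of such a simulation study, sampling a synthetic heterogeneous field repeatedly and watching the error of the estimated mean shrink with the number of collectors, can be sketched as follows; the lognormal field and all parameter choices are hypothetical, not the fitted models of the paper.

```python
import numpy as np

def relative_error_of_mean(field, n_collectors, n_trials=2000, seed=3):
    """Monte Carlo estimate of the relative error (std of the sample mean
    divided by the true mean) when `n_collectors` randomly placed collectors
    sample a simulated throughfall field."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(field, n_collectors, replace=False).mean()
             for _ in range(n_trials)]
    return np.std(means) / field.mean()

# Hypothetical skewed throughfall field (heterogeneous canopy, small event)
rng = np.random.default_rng(0)
field = rng.lognormal(mean=1.0, sigma=0.8, size=5000)
err_10 = relative_error_of_mean(field, 10)
err_100 = relative_error_of_mean(field, 100)
```

    The ~1/sqrt(n) decay of the error is what drives the paper's conclusion that impractically many small funnels are needed for tight error limits.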

  10. Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices

    NASA Astrophysics Data System (ADS)

    Wei, H. L.

    2016-12-01

    A massive amount of data is produced and stored in the research areas of space weather and space climate. However, the value of a vast majority of the data acquired every day may not be effectively or efficiently exploited when we try to forecast solar wind parameters and geomagnetic field indices from these recorded measurements, largely due to the challenges of dealing with big data, which are characterized by the 4V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement of quick processing of the data), and veracity (the trustworthiness and usability of the data). In order to obtain more reliable and accurate predictive models for geomagnetic field indices, models should be developed from a big data analytics perspective (or at least benefit from such a perspective). This study proposes a few data-based modelling frameworks which aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependence of the Dst and Kp indices on a few solar wind parameters and magnetic field indices, namely, solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
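    A minimal system-identification example in this spirit: fit a first-order ARX model Dst[t] = a*Dst[t-1] + b*VBs[t-1] + e[t] by least squares. The data, coefficients, and model order are synthetic stand-ins of ours, not the paper's actual Dst/Kp models, which would involve richer (e.g. NARMAX-type) structure.

```python
import numpy as np

# Generate a synthetic index obeying a known ARX law (hypothetical values)
rng = np.random.default_rng(0)
T, a_true, b_true = 2000, 0.9, -0.5
vbs = rng.normal(0.0, 1.0, T)              # stand-in for the VBs driver
dst = np.zeros(T)
for t in range(1, T):
    dst[t] = a_true * dst[t - 1] + b_true * vbs[t - 1] + rng.normal(0.0, 0.1)

# Least-squares identification of the ARX coefficients from the "data"
X = np.column_stack([dst[:-1], vbs[:-1]])
coef, *_ = np.linalg.lstsq(X, dst[1:], rcond=None)
```

    The recovered coefficients closely match (a_true, b_true); real identification work adds nonlinear terms, moving-average noise models, and structure selection.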

  11. Mean-field methods in evolutionary duplication-innovation-loss models for the genome-level repertoire of protein domains.

    PubMed

    Angelini, A; Amato, A; Bianconi, G; Bassetti, B; Cosentino Lagomarsino, M

    2010-02-01

    We present a combined mean-field and simulation approach to different models describing the dynamics of classes formed by elements that can appear, disappear, or copy themselves. These models, related to a paradigm duplication-innovation model known as Chinese restaurant process, are devised to reproduce the scaling behavior observed in the genome-wide repertoire of protein domains of all known species. In view of these data, we discuss the qualitative and quantitative differences of the alternative model formulations, focusing in particular on the roles of element loss and of the specificity of empirical domain classes.

  12. Mean-field methods in evolutionary duplication-innovation-loss models for the genome-level repertoire of protein domains

    NASA Astrophysics Data System (ADS)

    Angelini, A.; Amato, A.; Bianconi, G.; Bassetti, B.; Cosentino Lagomarsino, M.

    2010-02-01

    We present a combined mean-field and simulation approach to different models describing the dynamics of classes formed by elements that can appear, disappear, or copy themselves. These models, related to a paradigm duplication-innovation model known as Chinese restaurant process, are devised to reproduce the scaling behavior observed in the genome-wide repertoire of protein domains of all known species. In view of these data, we discuss the qualitative and quantitative differences of the alternative model formulations, focusing in particular on the roles of element loss and of the specificity of empirical domain classes.

  13. Analysis of source regions and meteorological factors for the variability of spring PM10 concentrations in Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Lee, Jangho; Kim, Kwang-Yul

    2018-02-01

    CSEOF analysis is applied to the springtime (March, April, May) daily PM10 concentrations measured at 23 Ministry of Environment stations in Seoul, Korea for the period 2003-2012. Six meteorological variables at 12 pressure levels are also acquired from the ERA-Interim reanalysis datasets. CSEOF analysis is conducted for each meteorological variable over East Asia. Regression analysis is conducted in CSEOF space between the PM10 concentrations and individual meteorological variables to identify the associated atmospheric conditions for each CSEOF mode. By adding the regressed loading vectors to the mean meteorological fields, the daily atmospheric conditions are obtained for the first five CSEOF modes. Then, the HYSPLIT model is run with the atmospheric conditions for each CSEOF mode in order to back trace the air parcels and dust reaching Seoul. The K-means clustering algorithm is applied to identify major source regions for each CSEOF mode of the PM10 concentrations in Seoul. The three main source regions identified based on the mean fields are: (1) the northern Taklamakan Desert (NTD), (2) the Gobi Desert (GD), and (3) the East China industrial area (ECI). The main source regions for the mean meteorological fields are consistent with those of a previous study; 41% of the source locations are located in GD, followed by ECI (37%) and NTD (21%). Back trajectory calculations based on CSEOF analysis of meteorological variables identify distinct source characteristics associated with each CSEOF mode and greatly facilitate the interpretation of the PM10 variability in Seoul in terms of transport routes and meteorological conditions, including the source area.
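    The clustering step can be sketched with plain Lloyd's k-means on synthetic back-trajectory endpoints; the (lon, lat) coordinates below are illustrative points loosely placed near NTD, GD, and ECI, not the study's trajectories.

```python
import numpy as np

def kmeans(points, centers, n_iter=50):
    """Plain Lloyd's k-means, used here to group back-trajectory endpoints
    into candidate source regions. `centers` holds the initial k x 2
    cluster centres."""
    centers = centers.astype(float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest-centre assignment
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Hypothetical endpoint clouds around three source regions (lon, lat)
rng = np.random.default_rng(0)
sources = np.array([[82.0, 39.0], [103.0, 43.0], [117.0, 33.0]])
points = np.concatenate([s + rng.normal(0.0, 1.5, (200, 2)) for s in sources])
centers, labels = kmeans(points, points[[0, 200, 400]])
```

    With well-separated clouds the recovered centres land on the three seeded source regions; in practice the choice of k and the initialization deserve care.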

  14. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    The equivalent field is frequently used for central-axis depth-dose calculations of rectangular and irregularly shaped photon beams. As most of the proposed models for calculating the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDDs measured for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
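    The abstract does not spell out the authors' exact formula, but the best-known physics-based rule of this kind is the area-to-perimeter (Sterling) approximation, given here as a hedged illustration rather than the paper's method:

```python
def equivalent_square_side(a_cm, b_cm):
    """Area-to-perimeter (Sterling) rule: a rectangular a x b photon field
    is matched to the square field with the same area-to-perimeter ratio,
    giving a side s = 4*A/P = 2ab/(a + b)."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)
```

    For a 5 x 20 cm field this gives an 8 x 8 cm equivalent square; a square field is, consistently, its own equivalent.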

  15. An efficient formulation of Krylov's prediction model for train induced vibrations based on the dynamic reciprocity theorem.

    PubMed

    Degrande, G; Lombaert, G

    2001-09-01

    In Krylov's analytical prediction model, the free field vibration response during the passage of a train is written as the superposition of the effects of all sleeper forces, using Lamb's approximate solution for the Green's function of a halfspace. When this formulation is extended with the Green's functions of a layered soil, considerable computational effort is required if these Green's functions are needed in a wide range of source-receiver distances and frequencies. It is demonstrated in this paper how the free field response can alternatively be computed using the dynamic reciprocity theorem applied to moving loads. The formulation is based on the response of the soil due to the moving load distribution for a single axle load. The equations are written in the wavenumber-frequency domain, accounting for the invariance of the geometry in the direction of the track. The approach allows for a very efficient calculation of the free field vibration response, distinguishing the quasistatic contribution from the effect of the sleeper passage frequency and its higher harmonics. The methodology is validated by means of in situ vibration measurements during the passage of a Thalys high-speed train on the track between Brussels and Paris. It is shown that the model has good predictive capabilities in the near field at low and high frequencies, but underestimates the response in the mid-frequency band.

  16. Nonpoint Source Solute Transport Normal to Aquifer Bedding in Heterogeneous, Markov Chain Random Fields

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Harter, T.; Sivakumar, B.

    2005-12-01

    Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high-conductivity (coarse-textured) facies in the aquifer medium, and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found to not be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined.
We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.
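    The transition-probability Markov chain construction described above can be made concrete with a minimal one-dimensional sketch. This is our own toy construction, not the authors' code: diagonal transition probabilities follow from assumed mean facies lengths (p_ii = 1 - dz/L_i), and off-diagonal mass is split by assumed background proportions.

    ```python
    import numpy as np

    # Minimal 1-D sketch of a transition-probability Markov chain for facies,
    # in the spirit of the model described above (our construction). Diagonal
    # entries follow from mean vertical facies lengths: p_ii = 1 - dz / L_i;
    # off-diagonal mass is split by the background volume proportions.

    rng = np.random.default_rng(0)
    dz = 0.5                      # vertical step (m), assumed
    L = np.array([2.0, 1.0])      # mean vertical lengths: coarse, fine (m), assumed
    p = np.array([0.3, 0.7])      # facies volume proportions, assumed

    T = np.zeros((2, 2))
    for i in range(2):
        T[i, i] = 1.0 - dz / L[i]
        for j in range(2):
            if j != i:
                # off-diagonal mass weighted by the other facies' proportion
                T[i, j] = (dz / L[i]) * p[j] / (p.sum() - p[i])

    assert np.allclose(T.sum(axis=1), 1.0)  # each row is a valid distribution

    # simulate one vertical column of facies indices
    state, column = 0, []
    for _ in range(200):
        column.append(state)
        state = rng.choice(2, p=T[state])
    ```

    With more than two facies, the same weighting generalizes and encodes the juxtapositional preferences the abstract mentions.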

  17. Nonpoint source solute transport normal to aquifer bedding in heterogeneous, Markov chain random fields

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Harter, Thomas; Sivakumar, Bellie

    2006-06-01

    Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. 
For the parameter range examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
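    The lognormality check and the third-moment (skewness) diagnostic mentioned above are standard sample statistics; a hedged sketch with a synthetic travel-time sample (ours, not the study's particle-tracking output) is:

    ```python
    import numpy as np

    # Hedged sketch: testing lognormality and skewness of a travel-time sample,
    # as one would for particle-tracking output. The sample is synthetic.

    rng = np.random.default_rng(1)
    t = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)  # synthetic travel times

    def skewness(x):
        """Standardized third central moment of a sample."""
        x = np.asarray(x, dtype=float)
        m, s = x.mean(), x.std()
        return ((x - m) ** 3).mean() / s ** 3

    print(skewness(t))          # raw travel times: strongly positively skewed
    print(skewness(np.log(t)))  # log travel times: near zero if lognormal
    ```

    For a truly lognormal sample the skewness of the log-transformed times is near zero; the study's point is that real travel-time samples fail this test.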

  18. Exchange bias training relaxation in spin glass/ferromagnet bilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Xiaodan; Du, An; Rui, Wenbin

    2016-04-25

    A canonical spin glass (SG) FeAu layer is fabricated to couple to a soft ferromagnet (FM) FeNi layer. Below the SG freezing temperature, exchange bias (EB) and training are observed. Training in SG/FM bilayers is insensitive to the cooling field and may suppress the EB or change the sign of the EB field from negative to positive at specific temperatures, deviating from the simple power law or single exponential function derived for antiferromagnet-based systems. In view of the SG nature, we employ a double decay model to distinguish the contributions from the SG bulk and the SG/FM interface to training. Dynamical properties during training under different cooling fields and at different temperatures are discussed, and the nonzero shifting coefficient in the time index, a signature of slowing-down decay in SG-based systems, is interpreted by means of a modified Monte Carlo Metropolis algorithm.
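    The abstract does not give the functional form of the double decay model. As an illustrative sketch (our parameterization, not the authors' fit), one can combine a power-law interface term with a shifting coefficient in the time index and an exponential bulk term:

    ```python
    import numpy as np

    # Illustrative "double decay" training law: one term for the SG/FM
    # interface with a shifting coefficient n0 in the time index, one
    # exponential term for the SG bulk. All symbols and values below are
    # our assumptions, not the authors' fitted parameters.

    def heb_training(n, h_inf, a_int, a_bulk, gamma, tau, n0):
        """Exchange-bias field after n field cycles (arbitrary units)."""
        n = np.asarray(n, dtype=float)
        return h_inf + a_int * (n + n0) ** (-gamma) + a_bulk * np.exp(-n / tau)

    n = np.arange(1, 21)
    h = heb_training(n, h_inf=-50.0, a_int=-30.0, a_bulk=-20.0,
                     gamma=0.5, tau=3.0, n0=0.5)
    # training: |H_EB| relaxes monotonically toward |h_inf|
    ```

    Fitting such a law to measured loop-shift data would separate the fast bulk relaxation from the slower interfacial one, which is the role the double decay model plays in the study.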

  19. AVHRR-Based Polar Pathfinder Products: Evaluation, Enhancement and Transition to MODIS

    NASA Technical Reports Server (NTRS)

    Fowler, Charles; Maslanik, James; Stone, Robert; Stroeve, Julienne; Emery, William

    2001-01-01

    The Advanced Very High Resolution Radiometer (AVHRR)-Based Polar Pathfinder (APP) products include calibrated AVHRR channel data, surface temperatures, albedo, satellite scan and solar geometries, and a cloud mask, all composited into twice-per-day images, and daily averaged fields of sea ice motion, for regions poleward of 50° latitude. Our general goals under this grant were to: (1) quantify the APP accuracy and sources of error by comparing Pathfinder products with field measurements; (2) determine the consistency of mean fields and trends in comparison with longer time series of available station data and forecast model output; (3) investigate the consistency of the products between the different AVHRR instruments over the 1982-present period of the NOAA program; and (4) compare an annual cycle of the APP products with MODIS to establish a baseline for extending Pathfinder-type products into the new ESE period.

  20. The 5'×5' global geoid model GGM2016

    NASA Astrophysics Data System (ADS)

    Shen, WenBin; Han, Jiancheng

    2016-04-01

    We provide an updated 5'×5' global geoid model GGM2016, which is determined based on the shallow layer method (Shen 2006). We choose an inner surface S below the EGM2008 geoid; the layer bounded by the inner surface S and the Earth's geographical surface E is referred to as the shallow layer. The Earth's geographical surface E is determined by the digital topographic model DTM2006.0 combined with the DNSC2008 mean sea surface. We determine the 3D shallow layer model (SLM) using the refined crust density model CRUST1.0-5min, an improved 5'×5' density model of CRUST1.0 that takes into account corrections for the areas covered by ice sheets and the land-ocean crossing regions. Based on the SLM and the gravity field EGM2008 defined outside the Earth's geographical surface E, we determine the gravity field EGM2008S defined in the region outside the inner surface S, extending the gravity field's definition domain from the domain outside E to the domain outside S. Based on the geodetic equation W(P)=W0, where W0 is the geopotential constant on the geoid, we determine a 5'×5' global geoid model GGM2016, which provides both 5'×5' grid values and spherical harmonic coefficient expressions. Comparisons show that GGM2016 fits the globally available GPS/leveling points better than the EGM2008 geoid. This study is supported by National 973 Project China (grant Nos. 2013CB733301 and 2013CB733305) and NSFC (grant Nos. 41174011, 41210006, 41429401, 41128003, 41021061).

  1. Impact of network topology on self-organized criticality

    NASA Astrophysics Data System (ADS)

    Hoffmann, Heiko

    2018-02-01

    The general mechanisms behind self-organized criticality (SOC) are still unknown. Several microscopic and mean-field theory approaches have been suggested, but they do not explain the dependence of the exponents on the underlying network topology of the SOC system. Here, we first report the phenomenon that in the Bak-Tang-Wiesenfeld (BTW) model, sites inside an avalanche area largely return to their original state after the passing of an avalanche, forming, effectively, critically arranged clusters of sites. Then, we hypothesize that SOC relies on the formation process of these clusters, and present a model of such formation. For low-dimensional networks, we show theoretically and in simulation that the exponent of the cluster-size distribution is proportional to the ratio of the fractal dimension of the cluster boundary to the dimensionality of the network. For the BTW model, in our simulations, the exponent of the avalanche-area distribution approximately matched our prediction based on this ratio for two-dimensional networks, but deviated for higher dimensions. We hypothesize a transition from cluster formation to the mean-field theory process with increasing dimensionality. This work sheds light on the mechanisms behind SOC, particularly the impact of the network topology.
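    The BTW sandpile underlying the discussion above is simple to simulate; a toy sketch on a small two-dimensional open-boundary grid (our grid size and grain count, not the authors' setup) is:

    ```python
    import numpy as np

    # Tiny Bak-Tang-Wiesenfeld sandpile on a 2-D open-boundary grid, to make
    # the avalanche statistics discussed above concrete. A toy sketch only;
    # grid size and grain count are arbitrary.

    rng = np.random.default_rng(2)
    N = 16
    z = np.zeros((N, N), dtype=int)   # grain heights
    sizes = []                        # avalanche sizes (number of topplings)

    for _ in range(2000):
        i, j = rng.integers(N, size=2)
        z[i, j] += 1                  # drop one grain at a random site
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if len(unstable) == 0:
                break
            for (a, b) in unstable:   # topple all unstable sites
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db
                    if 0 <= x < N and 0 <= y < N:
                        z[x, y] += 1  # grains falling off the edge are lost
        sizes.append(size)
    ```

    Measuring the avalanche-area distribution of such runs (at larger N) is what the abstract's exponent comparison refers to.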

  2. Measurement of terms and parameters in turbulent models

    NASA Technical Reports Server (NTRS)

    Sandborn, Virgil A.

    1989-01-01

    Experimental measurements of the mean and turbulent velocity field in a water-flow turn-around duct are documented. The small-radius-of-curvature duct experiments were made over a range of Reynolds numbers (based on a duct height of 10 cm) from 70,000 to 500,000. For this particular channel, the flow is dominated by inertia forces. Use of the local bulk velocity to non-dimensionalize the local velocity was found to limit Reynolds number effects to the regions very close to the wall. Only secondary effects on the flow field were observed when the inlet or exit boundary conditions were altered. The flow over the central two-thirds of the channel was two-dimensional. Mean tangential and radial velocities, streamlines, pressure distributions, surface shear stress, tangential, radial and lateral turbulent velocities, and Reynolds turbulent shear values are tabulated in other reports. It is evident from the experimental study that a complex numerical modeling technique must be developed to predict the flow in the turn-around duct. The model must be able to predict relaminarization along the inner convex wall. It must also allow for the major increase in turbulence produced by the outer concave wall.

  3. Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method

    NASA Astrophysics Data System (ADS)

    Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria

    2016-03-01

    The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments, whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and the grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce in the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models that properly describe self-absorption should be considered at above-atmospheric pressures.

  4. National geodetic satellite program, part 2

    NASA Technical Reports Server (NTRS)

    Schmid, H.

    1977-01-01

    Satellite geodesy and the creation of worldwide geodetic reference systems are discussed. The geometric description of the surface and the analytical description of the gravity field of the Earth by means of worldwide reference systems, with the aid of satellite geodesy, are presented. A triangulation method based on photogrammetric principles is described in detail. Results are derived in the form of three-dimensional models. These mathematical models represent the frame of reference into which one can fit the existing geodetic results from the various local datums, as well as future measurements.

  5. Ideal glass transitions in thin films: An energy landscape perspective

    NASA Astrophysics Data System (ADS)

    Truskett, Thomas M.; Ganesan, Venkat

    2003-07-01

    We introduce a mean-field model for the potential energy landscape of a thin fluid film confined between parallel substrates. The model predicts how the number of accessible basins on the energy landscape and, consequently, the film's ideal glass transition temperature depend on bulk pressure, film thickness, and the strength of the fluid-fluid and fluid-substrate interactions. The predictions are in qualitative agreement with the experimental trends for the kinetic glass transition temperature of thin films, suggesting the utility of landscape-based approaches for studying the behavior of confined fluids.

  6. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
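    The information-form fusion at the heart of such a filter is compact: each sub-model contributes an information matrix (inverse covariance) and a state estimate, and the fused estimate weights each model by its information. A hedged numerical sketch with toy values follows; the two-state example is ours, not the geomagnetic model itself.

    ```python
    import numpy as np

    # Sketch of information-form fusion, as used by a recursive information
    # filter: Lambda = sum(Lambda_i), x = Lambda^-1 * sum(Lambda_i @ x_i).
    # Values below are toy numbers, not geomagnetic coefficients.

    def fuse(estimates, covariances):
        """Combine independent estimates by summing information matrices."""
        lambdas = [np.linalg.inv(P) for P in covariances]
        lam_total = sum(lambdas)
        weighted = sum(L @ x for L, x in zip(lambdas, estimates))
        x_fused = np.linalg.solve(lam_total, weighted)
        return x_fused, np.linalg.inv(lam_total)

    x1, P1 = np.array([1.0, 0.0]), np.eye(2) * 1.0
    x2, P2 = np.array([3.0, 2.0]), np.eye(2) * 4.0   # less certain model
    x, P = fuse([x1, x2], [P1, P2])
    # fused state lies between the two, closer to the better-determined model
    ```

    Summing information matrices rather than manipulating covariances is what makes combining many epoch models numerically convenient.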

  7. European larch phenology in the Alps: can we grasp the role of ecological factors by combining field observations and inverse modelling?

    NASA Astrophysics Data System (ADS)

    Migliavacca, M.; Cremonese, E.; Colombo, R.; Busetto, L.; Galvagno, M.; Ganis, L.; Meroni, M.; Pari, E.; Rossini, M.; Siniscalco, C.; Morra di Cella, U.

    2008-09-01

    Vegetation phenology is strongly influenced by climatic factors. Climate changes may cause phenological variations, especially in the Alps, which are considered to be extremely vulnerable to global warming. The main goal of our study is to analyze European larch (Larix decidua Mill.) phenology in alpine environments and the role of the ecological factors involved, using an integrated approach based on accurate field observations and modelling techniques. We present 2 years of field-collected larch phenological data, obtained following a specifically designed observation protocol. We observed that both spring and autumn larch phenology is strongly influenced by altitude. We propose an approach for the optimization of a spring warming model (SW) and the growing season index model (GSI) consisting of a model inversion technique, based on simulated look-up tables (LUTs), that provides robust parameter estimates. The optimized models showed excellent agreement between modelled and observed data: the SW model predicts the beginning of the growing season (BGS) with a mean RMSE of 4 days, while GSI gives a prediction of the growing season length (LGS) with an RMSE of 5 days. Moreover, we showed that the original GSI parameters led to consistent errors, while the optimized ones significantly increased model accuracy. Finally, we used GSI to investigate interactions of ecological factors during springtime development and autumn senescence. We found that temperature is the most effective factor during spring recovery, while photoperiod plays an important role during autumn senescence; photoperiod shows a contrasting effect with altitude, its influence decreasing with increasing altitude.
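    A growing season index of the kind optimized above is, in its usual form, a product of ramp functions of climatic drivers. The sketch below follows that general recipe with illustrative thresholds; these are not the optimized parameters of the study, and the vapour pressure deficit term of the full GSI is omitted for brevity.

    ```python
    import numpy as np

    # Minimal sketch of a growing season index (GSI): the product of 0-1 ramp
    # functions of minimum temperature and photoperiod. Threshold values are
    # illustrative assumptions, not the study's optimized parameters.

    def ramp(x, lo, hi):
        """Linear 0-1 ramp between lo and hi."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def gsi(tmin_c, photoperiod_h):
        i_tmin = ramp(tmin_c, -2.0, 5.0)           # cold limitation
        i_photo = ramp(photoperiod_h, 10.0, 11.0)  # daylength limitation
        return i_tmin * i_photo

    print(gsi(-5.0, 12.0))  # deep cold suppresses the index entirely
    print(gsi(10.0, 12.0))  # warm, long days: no limitation
    ```

    Model inversion then consists of searching the ramp thresholds (e.g. via a look-up table of simulated index trajectories) for the values that best reproduce observed phenological dates.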

  8. European larch phenology in the Alps: can we grasp the role of ecological factors by combining field observations and inverse modelling?

    PubMed

    Migliavacca, M; Cremonese, E; Colombo, R; Busetto, L; Galvagno, M; Ganis, L; Meroni, M; Pari, E; Rossini, M; Siniscalco, C; Morra di Cella, U

    2008-09-01

    Vegetation phenology is strongly influenced by climatic factors. Climate changes may cause phenological variations, especially in the Alps, which are considered to be extremely vulnerable to global warming. The main goal of our study is to analyze European larch (Larix decidua Mill.) phenology in alpine environments and the role of the ecological factors involved, using an integrated approach based on accurate field observations and modelling techniques. We present 2 years of field-collected larch phenological data, obtained following a specifically designed observation protocol. We observed that both spring and autumn larch phenology is strongly influenced by altitude. We propose an approach for the optimization of a spring warming model (SW) and the growing season index model (GSI) consisting of a model inversion technique, based on simulated look-up tables (LUTs), that provides robust parameter estimates. The optimized models showed excellent agreement between modelled and observed data: the SW model predicts the beginning of the growing season (BGS) with a mean RMSE of 4 days, while GSI gives a prediction of the growing season length (LGS) with an RMSE of 5 days. Moreover, we showed that the original GSI parameters led to consistent errors, while the optimized ones significantly increased model accuracy. Finally, we used GSI to investigate interactions of ecological factors during springtime development and autumn senescence. We found that temperature is the most effective factor during spring recovery, while photoperiod plays an important role during autumn senescence; photoperiod shows a contrasting effect with altitude, its influence decreasing with increasing altitude.

  9. The shape of the F-region irregularities which produce satellite scintillations: Evidence for axial asymmetry.

    NASA Technical Reports Server (NTRS)

    Moorcroft, D. R.; Arima, K. S.

    1972-01-01

    Correlation analysis of three-station observations of satellite amplitude scintillations, recorded at London, Canada during the summer of 1968, has been interpreted to give information on the height, size, and shape of the ionospheric irregularities. The irregularities had a mean height of 390 km, and when interpreted in terms of the usual axially-symmetric, field-aligned model, had a mean axial ratio of 6.5 and a mean dimension transverse to the magnetic field of 0.7 km. None of these parameters showed any systematic trend with geomagnetic latitude. The data for one of the passes analyzed were inconsistent with axial symmetry, and when examined in terms of a more general model, 3 of 9 passes showed evidence of irregularities which were elongated both along and transverse to the earth's magnetic field, the elongation transverse to the field tending to lie in a north-south direction.

  10. TREDI: A self consistent three-dimensional integration scheme for RF-gun dynamics based on the Lienard-Wiechert potentials formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giannessi, Luca; Quattromini, Marcello

    1997-06-01

    We describe the model for the simulation of charged beam dynamics in radiofrequency injectors used in the three dimensional code TREDI, where the inclusion of space charge fields is obtained by means of the Lienard-Wiechert retarded potentials. The problem of charge screening is analyzed in covariant form and some general recipes for charge assignment and noise reduction are given.

  11. K-Means Clustering to Study How Student Reasoning Lines Can Be Modified by a Learning Activity Based on Feynman's Unifying Approach

    ERIC Educational Resources Information Center

    Battaglia, Onofrio Rosario; Di Paola, Benedetto; Fazio, Claudio

    2017-01-01

    Research in Science Education has shown that often students need to learn how to identify differences and similarities between descriptive and explicative models. The development and use of explicative skills in the field of thermal science has always been a difficult objective to reach. A way to develop analogical reasoning is to use in Science…
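    Since the record above is truncated, here is a hedged, generic sketch of the k-means step such a study would apply to coded student response vectors (the binary coding and the synthetic data are our assumptions, not the paper's protocol).

    ```python
    import numpy as np

    # Generic k-means sketch, of the kind applied to coded student answer
    # vectors (Euclidean distance on binary codes). Data here are synthetic.

    def kmeans(X, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # distances of every point to every center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    X = np.vstack([np.zeros((10, 4)), np.ones((10, 4))])  # two obvious groups
    labels, centers = kmeans(X, 2)
    ```

    On well-separated response patterns the two clusters recover the two reasoning profiles the coding was designed to distinguish.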

  12. Quantification of correlations in quantum many-particle systems.

    PubMed

    Byczuk, Krzysztof; Kuneš, Jan; Hofstetter, Walter; Vollhardt, Dieter

    2012-02-24

    We introduce a well-defined and unbiased measure of the strength of correlations in quantum many-particle systems which is based on the relative von Neumann entropy computed from the density operator of correlated and uncorrelated states. The usefulness of this general concept is demonstrated by quantifying correlations of interacting electrons in the Hubbard model and in a series of transition-metal oxides using dynamical mean-field theory.
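    The relative von Neumann entropy S(ρ‖ρ₀) = Tr[ρ(ln ρ − ln ρ₀)] named above is straightforward to evaluate numerically; a sketch with 2×2 example states (our illustrative states, not the Hubbard-model density operators of the paper) is:

    ```python
    import numpy as np

    # Sketch of the relative von Neumann entropy S(rho || rho0) =
    # Tr[rho (ln rho - ln rho0)] between a state rho and an uncorrelated
    # reference rho0. The 2x2 example states are illustrative only.

    def logm_psd(rho, eps=1e-12):
        """Matrix logarithm of a positive semidefinite density matrix."""
        w, v = np.linalg.eigh(rho)
        w = np.clip(w, eps, None)  # guard against zero eigenvalues
        return v @ np.diag(np.log(w)) @ v.conj().T

    def relative_entropy(rho, rho0):
        return float(np.trace(rho @ (logm_psd(rho) - logm_psd(rho0))).real)

    rho = np.array([[0.9, 0.0], [0.0, 0.1]])  # a strongly polarized state
    rho0 = np.eye(2) / 2.0                     # maximally mixed reference

    print(relative_entropy(rho, rho0))   # positive for distinct states
    print(relative_entropy(rho0, rho0))  # zero for identical states
    ```

    The measure vanishes exactly when the state equals its uncorrelated reference, which is what makes it a well-defined correlation strength.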

  13. The flow across a street canyon of variable width—Part 2:. Scalar dispersion from a street level line source

    NASA Astrophysics Data System (ADS)

    Simoëns, Serge; Wallace, James M.

    As described in Part 1 [Simoëns et al., 2007. The flow across a street canyon of variable width—Part 1: kinematic description. Atmospheric Environment 41, 9002-9017], measurements have been made of the velocity field around and within the canyon formed by two obstacles placed on the wall of a turbulent boundary layer. Here in Part 2, measurements of the scalar dispersion of smoke released from a two-dimensional slot in the wall, perpendicular to the mean flow and located parallel to and midway between these two square obstacles, are presented. The Reynolds number of the boundary layer at the slot location without the obstacles in place was Rθ≈980. Statistical properties of the concentration field and the scalar fluxes in the streamwise plane are reported here for canyon openings that were chosen based on characteristics of the kinematic description. These opening widths, expressed as multiples of the obstacle height, are 1 h, 4 h and 8 h. The mean concentration field revealed that much of the scalar is trapped on the leeward side of the upstream obstacle before some of it escapes the canyon and is entrained over the roof of the upstream obstacle. It is then spread downstream by the turbulence in the wake of this obstacle. Surprisingly, the root mean square (rms) concentration field reveals that high concentration fluctuations exist in a zone where the velocity field turbulence is very low. Measured streamwise scalar fluxes were found to be negative above the obstacles, whereas they are mainly positive between the obstacles. The measured wall-normal scalar fluxes show the inverse behavior. Within the canyon, the scalar fluxes are greatest in the region between the large primary vortex, evident in the kinematic field, and the secondary vortex located in the corner of the leeward side of the upstream obstacle. In the flow above the obstacle roofs, the wake of the upstream obstacle seems to dominate the scalar transport. 
    Between the obstacles, in and above the canyon, the existence of intermittent and intense events appears to prevent the modelling of these fluxes with a simple mean concentration gradient model.

  14. Global maps of the magnetic thickness and magnetization of the Earth's lithosphere

    NASA Astrophysics Data System (ADS)

    Vervelidou, Foteini; Thébault, Erwan

    2015-10-01

    We have constructed global maps of the large-scale magnetic thickness and magnetization of Earth's lithosphere. Deriving such large-scale maps based on lithospheric magnetic field measurements faces the challenge of the masking effect of the core field. In this study, the maps were obtained through analyses in the spectral domain by means of a new regional spatial power spectrum based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism. A series of regional spectral analyses were conducted covering the entire Earth. The R-SCHA surface power spectrum for each region was estimated using the NGDC-720 spherical harmonic (SH) model of the lithospheric magnetic field, which is based on satellite, aeromagnetic, and marine measurements. These observational regional spectra were fitted to a recently proposed statistical expression of the power spectrum of Earth's lithospheric magnetic field, whose free parameters include the thickness and magnetization of the magnetic sources. The resulting global magnetic thickness map is compared to other crustal and magnetic thickness maps based upon different geophysical data. We conclude that the large-scale magnetic thickness of the lithosphere is on average confined to a layer that does not exceed the Moho.

  15. The vertical variability of hyporheic fluxes inferred from riverbed temperature data

    NASA Astrophysics Data System (ADS)

    Cranswick, Roger H.; Cook, Peter G.; Shanafield, Margaret; Lamontagne, Sebastien

    2014-05-01

    We present detailed profiles of vertical water flux from the surface to 1.2 m beneath the Haughton River in the tropical northeast of Australia. A 1-D numerical model is used to estimate vertical flux based on raw temperature time series observations from within downwelling, upwelling, neutral, and convergent sections of the hyporheic zone. A Monte Carlo analysis is used to derive error bounds for the fluxes based on temperature measurement error and uncertainty in effective thermal diffusivity. Vertical fluxes ranged from 5.7 m d-1 (downward) to -0.2 m d-1 (upward) with the lowest relative errors for values between 0.3 and 6 m d-1. Our 1-D approach provides a useful alternative to 1-D analytical and other solutions because it does not incorporate errors associated with simplified boundary conditions or assumptions of purely vertical flow, hydraulic parameter values, or hydraulic conditions. To validate the ability of this 1-D approach to represent the vertical fluxes of 2-D flow fields, we compare our model with two simple 2-D flow fields using a commercial numerical model. These comparisons showed that: (1) the 1-D vertical flux was equivalent to the mean vertical component of flux irrespective of a changing horizontal flux; and (2) the subsurface temperature data inherently has a "spatial footprint" when the vertical flux profiles vary spatially. Thus, the mean vertical flux within a 2-D flow field can be estimated accurately without requiring the flow to be purely vertical. The temperature-derived 1-D vertical flux represents the integrated vertical component of flux along the flow path intersecting the observation point. This article was corrected on 6 JUN 2014. See the end of the full text for details.
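    The forward model behind such temperature-based flux inversion is the 1-D conduction-advection equation dT/dt = κ d²T/dz² − v dT/dz. A hedged explicit finite-difference sketch is given below; the parameter values are illustrative, not those of the Haughton River study, and the inversion itself (searching v to match observed temperatures) is omitted.

    ```python
    import numpy as np

    # Hedged sketch of the 1-D conduction-advection forward model underlying
    # temperature-based vertical flux estimation, solved with an explicit
    # upwind finite-difference scheme. All parameter values are illustrative.

    kappa = 1.0e-6       # effective thermal diffusivity (m^2/s), assumed
    v = 1.0 / 86400.0    # downward thermal front velocity ~1 m/d, assumed
    dz, dt = 0.02, 60.0  # grid spacing (m) and time step (s)
    z = np.arange(0.0, 1.2 + dz, dz)
    T = np.full(z.shape, 20.0)

    # explicit-scheme stability limits (diffusion and advection)
    assert kappa * dt / dz**2 <= 0.5 and v * dt / dz <= 1.0

    period = 86400.0
    deep = int(round(1.0 / dz))   # node near 1.0 m depth
    T_deep = []
    for n in range(int(2 * period / dt)):      # run two diurnal cycles
        T[0] = 20.0 + 5.0 * np.sin(2 * np.pi * n * dt / period)  # river signal
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
        adv = (T[1:-1] - T[:-2]) / dz          # upwind for downward flow
        T[1:-1] = T[1:-1] + dt * (kappa * lap - v * adv)
        T[-1] = T[-2]                           # zero-gradient bottom boundary
        if n * dt >= period:                    # record second cycle only
            T_deep.append(T[deep])

    amplitude_deep = max(T_deep) - min(T_deep)  # damped versus 10 degC at z=0
    ```

    The depth-dependent damping and lagging of the diurnal amplitude is exactly the information the inversion exploits to recover the vertical flux.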

  16. Epidemic threshold of the susceptible-infected-susceptible model on complex networks

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Shim, Pyoung-Seop; Noh, Jae Dong

    2013-06-01

    We demonstrate that the susceptible-infected-susceptible (SIS) model on complex networks can have an inactive Griffiths phase characterized by slow relaxation dynamics. This contrasts with the mean-field theoretical prediction that the SIS model on complex networks is active at any nonzero infection rate. The dynamic fluctuation of infected nodes, ignored in the mean-field approach, is responsible for the inactive phase. It is proposed that the question of whether the epidemic threshold of the SIS model on complex networks is zero or not can be resolved by the percolation threshold in a model where nodes are occupied in degree-descending order. Our arguments are supported by numerical studies on scale-free network models.
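    The mean-field prediction the abstract contrasts with is the heterogeneous mean-field threshold λ_c = ⟨k⟩/⟨k²⟩, which shrinks toward zero for scale-free degree distributions as the degree cutoff grows. A sketch (our truncated power-law distribution, chosen for illustration) is:

    ```python
    import numpy as np

    # Sketch of the heterogeneous mean-field epidemic threshold
    # lambda_c = <k> / <k^2> for a truncated power-law degree distribution
    # p(k) ~ k^(-gamma). Cutoff values are illustrative.

    def mf_threshold(gamma, kmin=3, kmax=10_000):
        k = np.arange(kmin, kmax + 1, dtype=float)
        p = k ** (-gamma)
        p /= p.sum()                              # normalize the distribution
        return (k * p).sum() / (k**2 * p).sum()   # <k> / <k^2>

    print(mf_threshold(2.5, kmax=10**3))
    print(mf_threshold(2.5, kmax=10**5))  # threshold shrinks as cutoff grows
    ```

    For γ ≤ 3 the second moment diverges with the cutoff, so the mean-field threshold vanishes in the infinite-network limit; the abstract's Griffiths-phase argument disputes the dynamical interpretation of that limit.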

  17. Forecasting experiments of a dynamical-statistical model of the sea surface temperature anomaly field based on the improved self-memorization principle

    NASA Astrophysics Data System (ADS)

    Hong, Mei; Chen, Xi; Zhang, Ren; Wang, Dong; Shen, Shuanghe; Singh, Vijay P.

    2018-04-01

    With the objective of tackling the problem of inaccurate long-term El Niño-Southern Oscillation (ENSO) forecasts, this paper develops a new dynamical-statistical forecast model of the sea surface temperature anomaly (SSTA) field. To avoid single initial prediction values, a self-memorization principle is introduced to improve the dynamical reconstruction model, thus making the model more appropriate for describing such chaotic systems as ENSO events. The improved dynamical-statistical model of the SSTA field is used to predict SSTA in the equatorial eastern Pacific and during El Niño and La Niña events. The long-term step-by-step forecast results and cross-validated retroactive hindcast results of time series T1 and T2 are found to be satisfactory, with a Pearson correlation coefficient of approximately 0.80 and a mean absolute percentage error (MAPE) of less than 15%. The corresponding forecast SSTA field is accurate in that not only is the forecast shape similar to the actual field but the contour lines are also essentially the same. This model can also be used to forecast the ENSO index. The temporal correlation coefficient is 0.8062, and the MAPE value of 19.55% is small. The difference between forecast results in spring and those in autumn is not large, indicating that the improved model can overcome the spring predictability barrier to some extent. Compared with six mature models published previously, the present model has an advantage in prediction precision and length, and is a novel exploration of the ENSO forecast method.
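    The two skill scores quoted above are standard; sketched explicitly, with synthetic stand-ins for observed and forecast SSTA values (not the study's data):

    ```python
    import numpy as np

    # The two verification scores used above: Pearson correlation and mean
    # absolute percentage error (MAPE). The series below are synthetic.

    def pearson_r(obs, fc):
        obs, fc = np.asarray(obs, float), np.asarray(fc, float)
        return float(np.corrcoef(obs, fc)[0, 1])

    def mape(obs, fc):
        obs, fc = np.asarray(obs, float), np.asarray(fc, float)
        return float(np.mean(np.abs((fc - obs) / obs)) * 100.0)

    obs = np.array([1.2, 0.8, 1.5, 2.0, 1.1])
    fc = np.array([1.1, 0.9, 1.4, 2.2, 1.0])
    print(pearson_r(obs, fc))  # close to 1 for a skilful forecast
    print(mape(obs, fc))       # percent error
    ```

    Note that MAPE is undefined when an observed anomaly is exactly zero, which is one reason correlation is usually reported alongside it.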

  18. Greenhouse gas emissions from dairy manure management: a review of field-based studies.

    PubMed

    Owen, Justine J; Silver, Whendee L

    2015-02-01

    Livestock manure management accounts for almost 10% of greenhouse gas emissions from agriculture globally, and contributes an equal proportion to the US methane emission inventory. Current emissions inventories use emissions factors determined from small-scale laboratory experiments that have not been compared to field-scale measurements. We compiled published data on field-scale measurements of greenhouse gas emissions from working and research dairies and compared these to rates predicted by the IPCC Tier 2 modeling approach. Anaerobic lagoons were the largest source of methane (368 ± 193 kg CH4 hd⁻¹ yr⁻¹), more than three times that from enteric fermentation (~120 kg CH4 hd⁻¹ yr⁻¹). Corrals and solid manure piles were large sources of nitrous oxide (1.5 ± 0.8 and 1.1 ± 0.7 kg N2O hd⁻¹ yr⁻¹, respectively). Nitrous oxide emissions from anaerobic lagoons (0.9 ± 0.5 kg N2O hd⁻¹ yr⁻¹) and barns (10 ± 6 kg N2O hd⁻¹ yr⁻¹) were unexpectedly large. Modeled methane emissions underestimated field measurement means for most manure management practices. Modeled nitrous oxide emissions underestimated field measurement means for anaerobic lagoons and manure piles, but overestimated emissions from slurry storage. Revised emissions factors nearly doubled slurry CH4 emissions for Europe and increased N2O emissions from solid piles and lagoons in the United States by an order of magnitude. Our results suggest that current greenhouse gas emission factors generally underestimate emissions from dairy manure and highlight liquid manure systems as promising target areas for greenhouse gas mitigation. © 2014 John Wiley & Sons Ltd.
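
    The IPCC Tier 2 approach the review compares against predicts an annual methane emission factor per head from volatile solids excretion, the maximum methane-producing capacity (B0), and a methane conversion factor (MCF) per management system. A hedged sketch of that calculation (the input values below are illustrative, not the review's data):

```python
def tier2_manure_ch4(vs_kg_per_day, b0_m3_per_kg_vs, systems):
    """IPCC (2006) Tier 2 annual CH4 emission factor, kg CH4 per head per year.
    vs_kg_per_day: volatile solids excreted per head per day
    b0_m3_per_kg_vs: maximum CH4 producing capacity (m3 CH4 per kg VS)
    systems: list of (mcf_percent, manure_fraction) per management system
    The constant 0.67 converts m3 of CH4 to kg of CH4."""
    return vs_kg_per_day * 365.0 * b0_m3_per_kg_vs * 0.67 * sum(
        mcf / 100.0 * frac for mcf, frac in systems
    )

# hypothetical dairy cow: 5.4 kg VS/day, B0 = 0.24 m3/kg VS,
# all manure routed to an anaerobic lagoon with MCF = 74%
ef = tier2_manure_ch4(5.4, 0.24, [(74.0, 1.0)])
```

    Emission factors of this form are the quantities the field measurements above were compared against.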

  19. Modeling Sound Propagation Through Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.

  20. Particle based plasma simulation for an ion engine discharge chamber

    NASA Astrophysics Data System (ADS)

    Mahalingam, Sudhakar

    Design of the next generation of ion engines can benefit from detailed computer simulations of the plasma in the discharge chamber. In this work a complete particle-based approach has been taken to model the discharge chamber plasma. This is the first discharge chamber model that does not make simplifying continuum assumptions about particle motion. Because of the long mean free paths of the particles in the discharge chamber, continuum models are questionable. The PIC-MCC model developed in this work tracks the following particles: neutrals, singly charged ions, doubly charged ions, secondary electrons, and primary electrons. The trajectories of these particles are determined using the Newton-Lorentz equation of motion, including the effects of magnetic and electric fields. Particle collisions are determined using a Monte Carlo collision (MCC) statistical technique. A large number of collision processes and particle-wall interactions are included in the model. The magnetic fields produced by the permanent magnets are determined using Maxwell's equations. The electric fields are determined using an approximate input electric field coupled with a dynamic determination of the electric fields caused by the charged particles. In this work, inclusion of the dynamic electric field calculation is made possible by using an inflated plasma permittivity value in the Poisson solver. This allows dynamic electric field calculation with minimal computational requirements in terms of both computer memory and run time. In addition, a number of other numerical procedures, such as parallel processing, have been implemented to shorten the computational time. The primary results are those modeling the discharge chamber of NASA's NSTAR ion engine at its full operating power. Convergence of numerical results such as the total number of particles inside the discharge chamber, average energy of the plasma particles, discharge current, beam current and beam efficiency is obtained.
Steady state results for the particle number density distributions and particle loss rates to the walls are presented. Comparisons of numerical results with experimental measurements such as currents and the particle number density distributions are made. Results from a parametric study and from an alternative magnetic field design are also given.
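
    The Newton-Lorentz integration step in PIC codes is commonly implemented with the Boris scheme; the sketch below is a generic version of such a pusher, not the integrator from this thesis:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(v, E, B, q_over_m, dt):
    """One velocity update under the Lorentz force q(E + v x B) using the
    standard Boris scheme: half electric kick, magnetic rotation, half kick."""
    vm = tuple(v[i] + 0.5 * q_over_m * dt * E[i] for i in range(3))
    t = tuple(0.5 * q_over_m * dt * B[i] for i in range(3))
    t2 = sum(c * c for c in t)
    vp = tuple(vm[i] + c for i, c in enumerate(cross(vm, t)))
    s = tuple(2.0 * c / (1.0 + t2) for c in t)
    vplus = tuple(vm[i] + c for i, c in enumerate(cross(vp, s)))
    return tuple(vplus[i] + 0.5 * q_over_m * dt * E[i] for i in range(3))

# pure magnetic field along z: the push rotates v in the x-y plane
# without changing its magnitude (the rotation is exactly norm-preserving)
v1 = boris_push((1.0, 0.0, 0.0), E=(0.0, 0.0, 0.0),
                B=(0.0, 0.0, 1.0), q_over_m=1.0, dt=0.1)
speed = math.sqrt(sum(c * c for c in v1))
```

    The exact energy conservation of the magnetic rotation is the reason this family of pushers is the default choice for long PIC runs like the ones described above.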

  1. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    The chi-squared transform (CST), as a statistical method, can quantify the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the change detection result. An improved unsupervised change detection method is proposed based on a spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean vector and covariance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the confidence level, the key parameter of the SCCST method, a pseudotraining dataset is constructed to estimate its optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
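
    The chi-squared transform itself is a Mahalanobis-style statistic on the band-wise difference vector at each pixel; under a no-change multivariate-normal assumption it follows a chi-squared distribution with as many degrees of freedom as bands. A minimal numpy sketch (illustrative, without the paper's spatial constraint or MRF stage):

```python
import numpy as np

def chi_squared_transform(diff, mean, cov):
    """Per-pixel CST value Y = (X - M)^T S^{-1} (X - M) for a difference
    image diff of shape (H, W, B); large Y flags likely change."""
    d = diff - mean                      # centre each band
    inv = np.linalg.inv(cov)
    return np.einsum('hwi,ij,hwj->hw', d, inv, d)

# toy 1x2 image with two bands, zero mean, identity covariance:
# pixel (3, 4) is far from the no-change distribution, pixel (0, 0) is not
diff = np.array([[[3.0, 4.0], [0.0, 0.0]]])
y = chi_squared_transform(diff, np.zeros(2), np.eye(2))
```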

  2. Coupling the WRF model with a temperature index model based on remote sensing for snowmelt simulations in a river basin in the Altay Mountains, northwest China

    NASA Astrophysics Data System (ADS)

    Wu, X.; Shen, Y.; Wang, N.; Pan, X.; Zhang, W.; He, J.; Wang, G.

    2017-12-01

    Snowmelt water is an important freshwater resource in the Altay Mountains of northwest China, and it is crucial for the local ecosystem and for sustainable economic and social development; however, a warming climate and rapid spring snowmelt can cause floods that endanger both the environment and public safety and property. This study simulates snowmelt in the Kayiertesi River catchment using a temperature-index model based on remote sensing, coupled with high-resolution meteorological data obtained from NCEP reanalysis fields that were downscaled using the Weather Research and Forecasting (WRF) model and then bias-corrected using a statistical downscaling model. Validation of the forcing data revealed that the high-resolution meteorological fields derived from the downscaled NCEP reanalysis were reliable for driving the snowmelt model. Parameters of the temperature-index model were calibrated for spring 2014, and model performance was validated using MODIS snow cover and snow observations from spring 2012. The results show that the temperature-index model based on remote sensing performed well, with a mean relative error of 6.7% and a Nash-Sutcliffe efficiency of 0.98 for spring 2012 in this river basin of the Altay Mountains. Based on the reliable distributed snow water equivalent simulation, daily snowmelt runoff was calculated for spring 2012 in the basin. In the study catchment, spring snowmelt runoff accounts for 72% of spring runoff and 21% of annual runoff. Snowmelt is the main source of runoff for the catchment and should be managed and utilized effectively. The results provide a basis for snowmelt runoff predictions, so as to prevent snowmelt-induced floods, and also provide a generalizable approach that can be applied to other remote locations where high-density, long-term observational data are lacking.
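
    A temperature-index (degree-day) melt model of the kind calibrated here reduces to a single expression per grid cell, gated by a remotely sensed snow-cover mask. A hedged sketch (the degree-day factor value is illustrative, not the calibrated one):

```python
def degree_day_melt(temp_c, snow_covered=True, ddf=4.0, t_crit=0.0):
    """Daily melt in mm water equivalent: M = DDF * max(T - Tcrit, 0)
    where the cell is snow-covered (e.g. per a MODIS mask), else zero.
    DDF is the degree-day factor in mm w.e. per degC per day."""
    if not snow_covered:
        return 0.0
    return ddf * max(temp_c - t_crit, 0.0)

# three days at -5, 0 and +2.5 degC over a snow-covered cell
melt = [degree_day_melt(t) for t in (-5.0, 0.0, 2.5)]
```

    Summing such daily melt over the snow-covered area, weighted by simulated snow water equivalent, is what yields the basin-scale snowmelt runoff totals reported above.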

  3. Insight into the structural requirements of proton pump inhibitors based on CoMFA and CoMSIA studies.

    PubMed

    Nayana, M Ravi Shashi; Sekhar, Y Nataraja; Nandyala, Haritha; Muttineni, Ravikumar; Bairy, Santosh Kumar; Singh, Kriti; Mahmood, S K

    2008-10-01

    In the present study, a series of 179 quinoline and quinazoline heterocyclic analogues exhibiting inhibitory activity against gastric (H+/K+)-ATPase were investigated using the comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Both models exhibited good correlation between the calculated 3D-QSAR fields and the observed biological activity for the respective training set compounds. The most optimal CoMFA and CoMSIA models yielded significant leave-one-out cross-validation coefficients, q² of 0.777 and 0.744, and conventional correlation coefficients, r² of 0.927 and 0.914, respectively. The predictive ability of the generated models was tested on a set of 52 compounds having a broad range of activity. CoMFA and CoMSIA yielded predicted activities for the test set compounds with r²pred of 0.893 and 0.917, respectively. These validation tests not only revealed the robustness of the models but also demonstrated that for our models r²pred based on the mean activity of test set compounds can accurately estimate external predictivity. The factors affecting activity were analyzed carefully according to standard coefficient contour maps of steric, electrostatic, hydrophobic, acceptor and donor fields derived from the CoMFA and CoMSIA. These contour plots identified several key features which explain the wide range of activities. The results obtained from the models offer important structural insight into designing novel peptic-ulcer inhibitors prior to their synthesis.
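
    The q² and predictive r² statistics quoted above have standard definitions; the sketch below shows them, including the abstract's point that r²pred here is referenced to the mean activity of the test set rather than the training set:

```python
def q2_loo(y, y_pred_loo):
    """Leave-one-out cross-validated q2 = 1 - PRESS / SS, where each
    y_pred_loo[i] is the prediction for compound i from a model fitted
    without compound i."""
    ybar = sum(y) / len(y)
    press = sum((yi - yp) ** 2 for yi, yp in zip(y, y_pred_loo))
    ss = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - press / ss

def r2_pred(y_test, y_pred, ref_mean):
    """Predictive r2 for an external test set, referenced to ref_mean
    (here, per the abstract, the mean activity of the test set)."""
    press = sum((yi - yp) ** 2 for yi, yp in zip(y_test, y_pred))
    ss = sum((yi - ref_mean) ** 2 for yi in y_test)
    return 1.0 - press / ss

# toy activities and leave-one-out predictions
q2 = q2_loo([0.0, 1.0, 2.0], [0.1, 1.0, 1.9])
```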

  4. Field Evaluation of the Pedostructure-Based Model (Kamel®)

    USDA-ARS?s Scientific Manuscript database

    This study involves a field evaluation of the pedostructure-based model Kamel and comparisons between Kamel and the Hydrus-1D model for predicting profile soil moisture. This paper also presents a sensitivity analysis of Kamel with an evaluation field site used as the base scenario. The field site u...

  5. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spill, Fabian, E-mail: fspill@bu.edu; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Guerrero, Pilar

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  6. Mean Field Games for Stochastic Growth with Relative Utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Minyi, E-mail: mhuang@math.carleton.ca; Nguyen, Son Luu, E-mail: sonluu.nguyen@upr.edu

    This paper considers continuous time stochastic growth-consumption optimization in a mean field game setting. The individual capital stock evolution is determined by a Cobb–Douglas production function, consumption and stochastic depreciation. The individual utility functional combines an own utility and a relative utility with respect to the population. The use of the relative utility reflects human psychology, leading to a natural pattern of mean field interaction. The fixed point equation of the mean field game is derived with the aid of some ordinary differential equations. Due to the relative utility interaction, our performance analysis depends on a ratio-based approximation error estimate.

  7. On-Site Determination and Monitoring of Real-Time Fluence Delivery for an Operating UV Reactor Based on a True Fluence Rate Detector.

    PubMed

    Li, Mengkai; Li, Wentao; Qiang, Zhimin; Blatchley, Ernest R

    2017-07-18

    At present, on-site fluence (distribution) determination and monitoring of an operating UV system represent a considerable challenge. The recently developed microfluorescent silica detector (MFSD) is able to measure the approximate true fluence rate (FR) at a fixed position in a UV reactor that can be compared with a FR model directly. Hence it has provided a connection between model calculation and real-time fluence determination. In this study, an on-site determination and monitoring method of fluence delivery for an operating UV reactor was developed. True FR detectors, a UV transmittance (UVT) meter, and a flow rate meter were used for fundamental measurements. The fluence distribution, as well as the reduction equivalent fluence (REF), the 10th percentile dose of the UV fluence distribution (F10), the minimum fluence (Fmin), and the mean fluence (Fmean) of a test reactor, was calculated in advance by the combined use of computational fluid dynamics and FR field modeling. A field test was carried out on the test reactor for disinfection of a secondary water supply. The estimated real-time REF, F10, Fmin, and Fmean decreased 73.6%, 71.4%, 69.6%, and 72.9%, respectively, during a 6-month period, which was attributable to lamp output attenuation and sleeve fouling. The results were analyzed with synchronous data from a previously developed triparameter UV monitoring system and a water temperature sensor. This study demonstrated an accurate method for on-site, real-time fluence determination which could be used to enhance the security of, and public confidence in, UV-based water treatment processes.

  8. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  9. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle

    2009-01-01

    JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier-Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.

  10. Intercomparison and validation of the mixed layer depth fields of global ocean syntheses

    NASA Astrophysics Data System (ADS)

    Toyoda, Takahiro; Fujii, Yosuke; Kuragano, Tsurane; Kamachi, Masafumi; Ishikawa, Yoichi; Masuda, Shuhei; Sato, Kanako; Awaji, Toshiyuki; Hernandez, Fabrice; Ferry, Nicolas; Guinehut, Stéphanie; Martin, Matthew J.; Peterson, K. Andrew; Good, Simon A.; Valdivieso, Maria; Haines, Keith; Storto, Andrea; Masina, Simona; Köhl, Armin; Zuo, Hao; Balmaseda, Magdalena; Yin, Yonghong; Shi, Li; Alves, Oscar; Smith, Gregory; Chang, You-Soon; Vernieres, Guillaume; Wang, Xiaochun; Forget, Gael; Heimbach, Patrick; Wang, Ou; Fukumori, Ichiro; Lee, Tong

    2017-08-01

    An intercomparison and evaluation of the global ocean surface mixed layer depth (MLD) fields estimated from a suite of major ocean syntheses are conducted. Compared with the reference MLDs calculated from individual profiles, MLDs calculated from monthly mean and gridded profiles show negative biases of 10-20 m in early spring, related to the re-stratification process of relatively deep mixed layers. The vertical resolution of profiles also influences the MLD estimation. MLDs are underestimated by approximately 5-7 (14-16) m at a vertical resolution of 25 (50) m when the criterion of potential density exceeding the 10-m value by 0.03 kg m⁻³ is used for the MLD estimation. Using the larger criterion (0.125 kg m⁻³) generally reduces the underestimations. In addition, positive biases greater than 100 m are found in wintertime subpolar regions when MLD criteria based on temperature are used. Biases of the reanalyses are due to both model errors and errors related to differences between the assimilation methods. The results show that these errors are partially cancelled out through ensemble averaging. Moreover, the bias in the ensemble mean field of the reanalyses is smaller than in the observation-only analyses. This is largely attributed to the comparatively higher resolutions of the reanalyses. The robust reproduction of both the seasonal cycle and interannual variability by the ensemble mean of the reanalyses indicates a great potential of the ensemble mean MLD field for investigating and monitoring upper ocean processes.
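
    The density-threshold MLD criterion discussed above is straightforward to implement; the sketch below finds the first depth where potential density exceeds its 10-m value by the chosen threshold, with linear interpolation between levels (a generic illustration, not the study's diagnostic code):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of ys(xs) at x (xs increasing)."""
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + w * (ys[i] - ys[i - 1])
    return ys[-1]

def mixed_layer_depth(depths, sigma, ref_depth=10.0, dthr=0.03):
    """First depth at which potential density exceeds its value at
    `ref_depth` by `dthr` (kg m^-3), linearly interpolated between levels."""
    target = interp(ref_depth, depths, sigma) + dthr
    for i in range(1, len(depths)):
        if sigma[i] >= target:
            w = (target - sigma[i - 1]) / (sigma[i] - sigma[i - 1])
            return depths[i - 1] + w * (depths[i] - depths[i - 1])
    return depths[-1]            # mixed to the bottom of the profile

depths = [0.0, 10.0, 20.0, 30.0, 40.0]
sigma = [25.00, 25.00, 25.01, 25.04, 25.10]   # potential density anomaly
mld = mixed_layer_depth(depths, sigma)
```

    Coarsening the vertical grid moves the first level that exceeds the threshold, which is the resolution sensitivity the intercomparison quantifies.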

  11. Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity

    NASA Technical Reports Server (NTRS)

    Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.

    1995-01-01

    To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity is related across a large range of scales. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis as well as other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. 
The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
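
    The bounded cascade construction described above can be sketched in a few lines: at each level a cell's LWP is split between its two halves by a damped, randomly signed fraction, so the domain mean is conserved exactly. Parameter values here are illustrative, not those fitted to the FIRE data:

```python
import random

def bounded_cascade(levels, f=0.5, c=0.8, lwp0=100.0, seed=1):
    """1-D bounded cascade: at level n, each cell's LWP is redistributed
    between its two halves by a fraction f * c**n with random sign.
    Each split conserves the local mean, so the domain mean stays lwp0."""
    rng = random.Random(seed)
    field = [lwp0]
    for n in range(levels):
        frac = f * c ** n
        nxt = []
        for v in field:
            sign = 1.0 if rng.random() < 0.5 else -1.0
            nxt += [v * (1.0 + sign * frac), v * (1.0 - sign * frac)]
        field = nxt
    return field

field = bounded_cascade(8)    # 2**8 = 256 cells
```

    The damping factor c < 1 is what bounds the cascade: with f < 1 every LWP value stays positive, while the variance contributed at each scale shrinks geometrically, mimicking the observed power-spectrum slope.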

  12. Markowitz portfolio optimization model employing fuzzy measure

    NASA Astrophysics Data System (ADS)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine the risk and return. We take the original mean-variance model as a benchmark and compare it with fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy models give better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
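
    The classical (crisp) mean-variance problem used as the benchmark has a closed-form solution via its KKT system when only equality constraints (target return, full investment) are imposed. A numpy sketch with illustrative data, allowing short positions:

```python
import numpy as np

def markowitz_weights(cov, mu, target):
    """Minimise w' cov w subject to mu'w = target and 1'w = 1 by solving
    the KKT linear system [[2*cov, A'], [A, 0]] [w; lam] = [0; b]."""
    n = len(mu)
    A = np.vstack([mu, np.ones(n)])
    kkt = np.block([[2.0 * cov, A.T],
                    [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]              # drop the Lagrange multipliers

# two uncorrelated assets (variances and returns are illustrative)
cov = np.diag([0.04, 0.01])
mu = np.array([0.10, 0.05])
w = markowitz_weights(cov, mu, target=0.075)
```

    The fuzzy extensions discussed in the paper replace the crisp mu (and possibly cov) by quantities derived from fuzzy numbers, but the optimization skeleton stays the same.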

  13. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
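
    The parametric likelihood approximation idea can be illustrated with a scalar summary statistic: simulate replicates of the stochastic model at a candidate parameter value, fit a normal to the simulated summaries, and score the observed summary under that normal. This is a sketch in the spirit of such synthetic-likelihood approaches, not the FORMIND implementation:

```python
import math
import random

def synthetic_loglik(theta, observed, simulate, n_sims=100, seed=0):
    """Gaussian likelihood approximation for a scalar summary statistic,
    suitable for plugging into a conventional MCMC over theta."""
    rng = random.Random(seed)
    sims = [simulate(theta, rng) for _ in range(n_sims)]
    m = sum(sims) / n_sims
    var = sum((s - m) ** 2 for s in sims) / (n_sims - 1)
    return (-0.5 * math.log(2.0 * math.pi * var)
            - (observed - m) ** 2 / (2.0 * var))

# toy stochastic "model": the summary statistic is a draw from N(theta, 1)
simulate = lambda theta, rng: rng.gauss(theta, 1.0)
ll_true = synthetic_loglik(0.0, observed=0.0, simulate=simulate)
ll_off = synthetic_loglik(5.0, observed=0.0, simulate=simulate)
```

    As expected, the approximate likelihood is higher at the parameter value that generated the observation than at a distant one, which is all an MCMC sampler needs to concentrate on plausible parameters.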

  14. Climate Simulations based on a different-grid nested and coupled model

    NASA Astrophysics Data System (ADS)

    Li, Dan; Ji, Jinjun; Li, Yinpeng

    2002-05-01

    An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model with feedbacks between land surface processes and eco-physiological processes on land. As a first step toward coupling land with atmosphere completely, the physiological processes are fixed and only the physical part (generally called the SVAT (soil-vegetation-atmosphere-transfer) scheme) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed, with monthly sea surface temperature (SST) set to its climatic mean value. Because of the low resolution of the GCM, with each grid cell spanning 7.5° of longitude and 4.5° of latitude, the vegetation is given a higher resolution of 1.5° by 1.5° to nest and couple the fine land grid cells with the coarse atmospheric grid cells. The coupled model has been integrated for 15 years and the mean of its last ten years of output was chosen for analysis. Compared with observed data and the NCEP reanalysis, the coupled model simulates the main characteristics of the global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature are reproduced well. This work lays a solid foundation for fully coupling climate models with the biosphere.

  15. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.
Previous studies have shown that the principle of maximum entropy (POME) can be utilized with minimal constraints to develop vertical soil moisture profiles with accuracy (MAE = 1% for monotonically dry profiles; MAE = 2% for monotonically wet profiles and MAE = 3.8% for mixed profiles) when compared to laboratory and field data. In this study, vertical soil moisture profiles were developed using the POME model to evaluate an irrigation schedule over a maize field in north central Alabama (USA). The model was validated using both field data and a physically based mathematical model. The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles within the field area for 6 different soil types. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy, with a loss of only about 1.9% of the total applied irrigation water. However, areas of fine-textured soil (i.e., silty clay) resulted in plant stress of nearly 30% of the available moisture content due to insufficient water supply on the last day of the drying phase of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, with minimum input requirements.
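
    To give a flavor of how a profile can be pinned down by just a surface value and a depth-averaged mean, the sketch below fits a monotone exponential profile to those two constraints by bisection. This is an illustration of the two-constraint idea only, not the POME formulation used in these studies, and the moisture values are hypothetical:

```python
import math

def exp_profile(theta_s, theta_mean, depth=1.0, n=101):
    """Monotone profile theta(z) = theta_s * exp(b z) on [0, depth], with b
    chosen by bisection so the depth-averaged moisture equals theta_mean.
    Requires theta_mean to be attainable from theta_s (b in [-50, 50])."""
    def mean_for(b):
        if abs(b) < 1e-12:
            return theta_s
        return theta_s * (math.exp(b * depth) - 1.0) / (b * depth)
    lo, hi = -50.0, 50.0
    for _ in range(200):        # mean_for is increasing in b, so bisect
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < theta_mean:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    zs = [depth * i / (n - 1) for i in range(n)]
    return [theta_s * math.exp(b * z) for z in zs]

# surface moisture 0.10, required profile mean 0.20 (a wet-over-depth case)
p = exp_profile(0.10, 0.20)
```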

  16. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth.

    PubMed

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity. 
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
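    As a minimal sketch of the two descriptions such a hybrid method couples (not the authors' age-structured model), the following compares an exact stochastic simulation of a logistic birth-death process with its deterministic mean-field limit; all parameters are illustrative.

```python
import random

random.seed(1)

b, d, K = 1.0, 0.1, 200.0   # birth rate, death rate, carrying capacity (illustrative)

def gillespie(n0, t_end):
    """Exact stochastic simulation: n -> n+1 at rate b*n*(1-n/K), n -> n-1 at rate d*n."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        birth = max(b * n * (1.0 - n / K), 0.0)
        death = d * n
        total = birth + death
        if total == 0.0:
            break
        t += random.expovariate(total)           # time to next event
        n += 1 if random.random() < birth / total else -1
    return n

def mean_field(n0, t_end, dt=1e-3):
    """Euler integration of the corresponding deterministic rate equation."""
    n = float(n0)
    for _ in range(int(t_end / dt)):
        n += dt * (b * n * (1.0 - n / K) - d * n)
    return n

n_stoch = gillespie(5, 30.0)   # fluctuates run to run; can even go extinct
n_mf = mean_field(5, 30.0)     # settles at the deterministic fixed point K*(1 - d/b)
```

    A hybrid scheme would run the expensive stochastic description only where numbers are small (e.g. a front's leading edge) and the cheap mean-field description elsewhere.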

  17. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    NASA Astrophysics Data System (ADS)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity. 
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.

  18. Universal structures in some mean field spin glasses and an application

    NASA Astrophysics Data System (ADS)

    Bolthausen, Erwin; Kistler, Nicola

    2008-12-01

    We discuss a spin glass reminiscent of the random energy model (REM), which allows, in particular, to recast the Parisi minimization into a more classical Gibbs variational principle, thereby shedding some light into the physical meaning of the order parameter of the Parisi theory. As an application, we study the impact of an extensive cavity field on Derrida's REM: Despite its simplicity, this model displays some interesting features such as ultrametricity and chaos in temperature.

  19. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnett, James; Ford, Ian J.

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)], which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation for which this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
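    The mean-field baseline that the Poisson representation goes beyond can be written down explicitly for the size-independent kernel considered here: the total particle count obeys dN/dt = -k N^2 / 2, with closed-form solution N(t) = N0 / (1 + k N0 t / 2). A sketch comparing a naive Euler integration against that solution (illustrative parameters):

```python
k, N0 = 1e-3, 1000.0   # constant coagulation kernel and initial particle count (illustrative)

def mean_field_count(t, dt=1e-4):
    """Euler integration of dN/dt = -k N^2 / 2 (each coagulation event removes one particle)."""
    N = N0
    for _ in range(int(t / dt)):
        N -= dt * 0.5 * k * N * N
    return N

def analytic_count(t):
    """Closed-form solution of the constant-kernel mean-field rate equation."""
    return N0 / (1.0 + 0.5 * k * N0 * t)

t = 5.0
N_num, N_exact = mean_field_count(t), analytic_count(t)
```

    The abstract's point is that this deterministic description fails when N is small, which is where the pseudo-population rate equations take over.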

  20. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
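    The field-NLC bookkeeping can be sketched as a table lookup: leaching per field is its area times the coefficient for its (crop, following crop, soil) combination. The coefficients and field data below are invented for illustration; the real NLCs are regional SOILNDB outputs.

```python
# Hypothetical N leaching coefficients (kg N/ha/yr) keyed by (crop, following_crop, soil).
NLC = {
    ("spring_barley", "winter_wheat", "clay"): 18.0,
    ("winter_wheat", "ley", "clay"): 12.0,
    ("ley", "spring_barley", "clay"): 8.0,
}

# Hypothetical fields in a small catchment: (area_ha, crop, following_crop, soil).
fields = [
    (25.0, "spring_barley", "winter_wheat", "clay"),
    (40.0, "winter_wheat", "ley", "clay"),
    (15.0, "ley", "spring_barley", "clay"),
]

total_leaching = sum(area * NLC[(c, f, s)] for area, c, f, s in fields)  # kg N/yr
area_total = sum(a for a, *_ in fields)
leaching_per_ha = total_leaching / area_total                            # kg N/ha/yr
```

    Swapping coefficients in this lookup (e.g. substituting a catch-crop entry for a bare-autumn entry) is how the effect of a mitigation measure would be scored, weather held fixed.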

  1. Integrating economic and biophysical data in assessing cost-effectiveness of buffer strip placement.

    PubMed

    Balana, Bedru Babulo; Lago, Manuel; Baggaley, Nikki; Castellazzi, Marie; Sample, James; Stutter, Marc; Slee, Bill; Vinten, Andy

    2012-01-01

    The European Union Water Framework Directive (WFD) requires Member States to set water quality objectives and identify cost-effective mitigation measures to achieve "good status" in all waters. However, the costs and effectiveness of measures vary both within and between catchments, depending on factors such as land use and topography. The aim of this study was to develop a cost-effectiveness analysis framework for integrating estimates of phosphorus (P) losses from land-based sources, potential abatement using riparian buffers, and the economic implications of buffers. Estimates of field-by-field P exports and routing were based on crop risk and field slope classes. Buffer P trapping efficiencies were based on a literature metadata analysis. Costs of placing buffers were based on foregone farm gross margins. An integrated cost-minimization optimization model was developed and solved for different P reduction targets for the Rescobie Loch catchment in eastern Scotland. A target mean annual P load reduction of 376 kg to the loch was identified as necessary to achieve good status. Assuming all the riparian fields initially have the 2-m buffer strip required by the General Binding Rules (part of the WFD in Scotland), the model gave good predictions of P loads (345-481 kg P). The modeling results show that riparian buffers alone cannot achieve the required P load reduction (up to 54% of the P can be removed). In the medium P input scenario, average costs vary from £38 to £176 per kg P at 10% and 54% P reduction, respectively. The framework demonstrates a useful tool for exploring cost-effective targeting of environmental measures. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
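    The paper solves a full optimization model; a simpler greedy heuristic (rank fields by cost per kg P trapped and buffer the cheapest first) conveys the cost-effectiveness logic. All figures below are hypothetical, not the Rescobie Loch data.

```python
# Hypothetical riparian fields: (name, P trapped by a buffer in kg/yr, buffer cost in GBP/yr).
fields = [
    ("A", 40.0, 800.0),
    ("B", 25.0, 900.0),
    ("C", 60.0, 1500.0),
    ("D", 10.0, 700.0),
]

target = 100.0  # kg P/yr reduction target (hypothetical)

# Greedy: place buffers on fields in order of increasing cost per kg P trapped.
ranked = sorted(fields, key=lambda f: f[2] / f[1])
chosen, removed, cost = [], 0.0, 0.0
for name, p, c in ranked:
    if removed >= target:
        break
    chosen.append(name)
    removed += p
    cost += c
```

    A true cost-minimization model can beat this heuristic when field effects interact (e.g. shared routing), which is one reason the study formulates it as an integrated optimization.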

  2. Building a mechanistic understanding of predation with GPS-based movement data.

    PubMed

    Merrill, Evelyn; Sand, Håkan; Zimmermann, Barbara; McPhee, Heather; Webb, Nathan; Hebblewhite, Mark; Wabakken, Petter; Frair, Jacqueline L

    2010-07-27

    Quantifying kill rates and sources of variation in kill rates remains an important challenge in linking predators to their prey. We address current approaches to using global positioning system (GPS)-based movement data for quantifying key predation components of large carnivores. We review approaches to identify kill sites from GPS movement data as a means to estimate kill rates and address advantages of using GPS-based data over past approaches. Despite considerable progress, modelling the probability that a cluster of GPS points is a kill site is no substitute for field visits, but can guide our field efforts. Once kill sites are identified, time spent at a kill site (handling time) and time between kills (killing time) can be determined. We show how statistical models can be used to investigate the influence of factors such as animal characteristics (e.g. age, sex, group size) and landscape features on either handling time or killing efficiency. If we know the prey densities along paths to a kill, we can quantify the 'attack success' parameter in functional response models directly. Problems remain in incorporating the behavioural complexity derived from GPS movement paths into functional response models, particularly in multi-prey systems, but we believe that exploring the details of GPS movement data has put us on the right path.
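    One simple way to flag candidate kill sites from GPS data is to group consecutive fixes that stay within a small radius of their running centroid; a long stationary bout becomes a candidate, and its duration approximates handling time. This is only a sketch: as the abstract notes, real analyses model the probability that a cluster is a kill statistically and verify by field visits. The track below is invented.

```python
import math

# Hypothetical GPS fixes: (hours since start, x, y) in metres on a local grid.
fixes = [
    (0, 0, 0), (1, 500, 100), (2, 900, 150),
    (3, 910, 160), (4, 905, 155), (5, 915, 148),   # stationary bout: candidate kill site
    (6, 1500, 300), (7, 2100, 400),
]

def clusters(track, radius=50.0, min_fixes=3):
    """Group consecutive fixes lying within `radius` of the bout centroid."""
    out, bout = [], [track[0]]
    for fix in track[1:]:
        cx = sum(f[1] for f in bout) / len(bout)
        cy = sum(f[2] for f in bout) / len(bout)
        if math.hypot(fix[1] - cx, fix[2] - cy) <= radius:
            bout.append(fix)
        else:
            if len(bout) >= min_fixes:
                out.append(bout)
            bout = [fix]
    if len(bout) >= min_fixes:
        out.append(bout)
    return out

candidates = clusters(fixes)
```

    From each candidate bout, time at the site (handling time) and time since the previous candidate (killing time) follow directly from the timestamps.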

  3. Actual evapotranspiration (water use) assessment of the Colorado River Basin at the Landsat resolution using the operational simplified surface energy balance model

    USGS Publications Warehouse

    Singh, Ramesh K.; Senay, Gabriel B.; Velpuri, Naga Manohar; Bohms, Stefanie; Scott, Russell L.; Verdin, James P.

    2014-01-01

    Accurately estimating consumptive water use in the Colorado River Basin (CRB) is important for assessing and managing limited water resources in the basin. Increasing water demand from various sectors may threaten the long-term sustainability of the water supply in the arid southwestern United States. We have developed the first basin-wide actual evapotranspiration (ETa) map of the CRB at the Landsat scale for water use assessment at the field level. We used the operational Simplified Surface Energy Balance (SSEBop) model for estimating ETa from 328 cloud-free Landsat images acquired during 2010. Our results show that cropland had the highest ETa among all land cover classes except water. Validation using eddy-covariance-measured ETa showed that the SSEBop model captured the variability in annual ETa well, with an overall R2 of 0.78 and a mean bias error of about 10%. Comparison with water-balance-based ETa showed good agreement (R2 = 0.85) at the sub-basin level. Though there was good correlation (R2 = 0.79) between Moderate Resolution Imaging Spectroradiometer (MODIS)-based ETa (1 km spatial resolution) and Landsat-based ETa (30 m spatial resolution), the spatial distribution of MODIS-based ETa was not suitable for water use assessment at the field level. In contrast, Landsat-based ETa has good potential to be used at the field level for water management. With further validation using multiple years and sites, our methodology can be applied for regular production of ETa maps of larger areas such as the conterminous United States.

  4. Semiclassical theory of the self-consistent vibration-rotation fields and its application to the bending-rotation interaction in the H{sub 2}O molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skalozub, A.S.; Tsaune, A.Ya.

    1994-12-01

    A new approach for analyzing the highly excited vibration-rotation (VR) states of nonrigid molecules is suggested. It is based on the separation of the vibrational and rotational terms in the molecular VR Hamiltonian by introducing periodic auxiliary fields. These fields transfer different interactions within a molecule and are treated in terms of the mean-field approximation. As a result, the solution of the stationary Schroedinger equation with the VR Hamiltonian amounts to a quantization of the Berry phase in a problem of the molecular angular-momentum motion in a certain periodic VR field (rotational problem). The quantization procedure takes into account the motion of the collective vibrational variables in the appropriate VR potentials (vibrational problem). The quantization rules, the mean-field configurations of auxiliary interactions, and the solutions to the Schroedinger equations for the vibrational and rotational problems are self-consistently connected with one another. The potentialities of the theory are demonstrated on the bending-rotation interaction modeled by the Bunker-Landsberg potential function in the H{sub 2}O molecule. The calculations are compared with both the results of exact computations and those of other approximate methods. 32 refs., 4 tabs.

  5. Educational application for visualization and analysis of electric field strength in multiple electrode electroporation.

    PubMed

    Mahnič-Kalamiza, Samo; Kotnik, Tadej; Miklavčič, Damijan

    2012-10-30

    Electrochemotherapy is a local treatment that utilizes electric pulses in order to achieve a local increase in the cytotoxicity of some anticancer drugs. The success of this treatment is highly dependent on parameters such as tissue electrical properties, applied voltages and spatial relations in the placement of the electrodes that are used to establish a cell-permeabilizing electric field in the target tissue. Non-thermal irreversible electroporation techniques for ablation of tissue depend similarly on these parameters. In the treatment planning stage, if oversimplified approximations for evaluation of the electric field are used, such as U/d (voltage-to-distance ratio), sufficient field strength may not be reached within the entire target (tumor) area, potentially resulting in treatment failure. In order to aid the education of medical personnel performing electrochemotherapy and non-thermal irreversible electroporation for tissue ablation, and to assist in visualizing the electric field in needle electrode electroporation and the effects of changes in electrode placement, an application has been developed both as a desktop- and a web-based solution. It enables users to position up to twelve electrodes in a plane of adjustable dimensions representing a two-dimensional slice of tissue. By means of manipulation of electrode placement, i.e. repositioning, and changes in electrical parameters, the users interact with the system and observe the resulting electric field strength established by the inserted electrodes in real time. The field strength is calculated and visualized online and instantaneously reflects the desired changes, dramatically improving the user friendliness and educational value, especially compared to approaches utilizing general-purpose numerical modeling software, such as finite element modeling packages. In this paper we outline the need and offer a solution in medical education in the field of electroporation-based treatments, e.g. 
primarily electrochemotherapy and non-thermal irreversible tissue ablation. We present the background, the means of implementation and the fully functional application, which is the first of its kind. While the initial feedback from students that have evaluated this application as part of an e-learning course is positive, a formal study is planned to thoroughly evaluate the current version and identify possible future improvements and modifications.
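    The abstract's point about U/d can be made quantitative with a thin-wire sketch: model two needle electrodes as a pair of infinite line charges (a crude 2D approximation, valid only when the electrode radius is much smaller than the spacing; all values illustrative) and compare the mid-gap field with the voltage-to-distance estimate.

```python
import math

# Two parallel needle electrodes approximated as opposite infinite line charges.
U = 1000.0      # applied voltage, V (illustrative)
d = 8e-3        # electrode spacing, m
a = 0.35e-3     # electrode radius, m

# From U = (lambda / (pi*eps0)) * ln((d-a)/a) for the thin-wire pair:
lam_over_pe = U / math.log((d - a) / a)   # lambda / (pi * eps0)

def field(x):
    """|E| on the axis joining the electrodes, x measured from the positive electrode."""
    return 0.5 * lam_over_pe * (1.0 / x + 1.0 / (d - x))

E_mid = field(d / 2)   # field midway between the electrodes
E_naive = U / d        # the voltage-to-distance estimate criticized in the abstract
```

    With these numbers the mid-gap field is only about 65% of U/d, while the field near the electrode surface is several times larger than U/d, which is why the uniform-field estimate can mislead treatment planning.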

  6. Macro-architectured cellular materials: Properties, characteristic modes, and prediction methods

    NASA Astrophysics Data System (ADS)

    Ma, Zheng-Dong

    2017-12-01

    Macro-architectured cellular (MAC) material is defined as a class of engineered materials having configurable cells of relatively large (i.e., visible) size that can be architecturally designed to achieve various desired material properties. Two types of novel MAC materials, negative Poisson's ratio material and biomimetic tendon reinforced material, were introduced in this study. To estimate the effective material properties for structural analyses and to optimally design such materials, a set of suitable homogenization methods was developed that provided an effective means for the multiscale modeling of MAC materials. First, a strain-based homogenization method was developed using an approach that separated the strain field into a homogenized strain field and a strain variation field in the local cellular domain superposed on the homogenized strain field. The principle of virtual displacements for the relationship between the strain variation field and the homogenized strain field was then used to condense the strain variation field onto the homogenized strain field. The new method was then extended to a stress-based homogenization process based on the principle of virtual forces and further applied to address the discrete systems represented by the beam or frame structures of the aforementioned MAC materials. The characteristic modes and the stress recovery process used to predict the stress distribution inside the cellular domain and thus determine the material strengths and failures at the local level are also discussed.
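    The simplest instances of the strain-based (uniform strain) and stress-based (uniform stress) homogenization described above are the classical Voigt and Reuss estimates for a two-phase cell, which bound the effective modulus of any microstructure; the phase properties below are illustrative.

```python
# Hypothetical two-phase unit cell: stiff tendon-like reinforcement in a soft matrix.
E1, f1 = 200e9, 0.2   # reinforcement Young's modulus (Pa) and volume fraction
E2, f2 = 2e9, 0.8     # matrix Young's modulus and volume fraction

# Voigt (uniform strain across phases) and Reuss (uniform stress across phases)
# homogenized moduli: upper and lower bounds on the effective modulus.
E_voigt = f1 * E1 + f2 * E2
E_reuss = 1.0 / (f1 / E1 + f2 / E2)
```

    The paper's condensation of the local strain-variation field onto the homogenized strain field refines exactly this kind of estimate by resolving how the cell geometry redistributes strain between phases.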

  7. Towards a coastal ocean forecasting system in Southern Adriatic Northern Ionian seas based on unstructured-grid model

    NASA Astrophysics Data System (ADS)

    Federico, Ivan; Oddo, Paolo; Pinardi, Nadia; Coppini, Giovanni

    2014-05-01

    The Southern Adriatic Northern Ionian Forecasting System (SANIFS) operational chain is based on a nesting approach. The large-scale model for the entire Mediterranean basin (MFS, Mediterranean Forecasting System, operated by INGV, e.g. Tonani et al. 2008, Oddo et al. 2009) provides lateral open boundary conditions to the regional model for the Adriatic and Ionian seas (AIFS, Adriatic Ionian Forecasting System), which provides the open-sea fields (initial conditions and lateral open boundary conditions) to SANIFS. The latter, here presented, is a coastal ocean model based on the SHYFEM (Shallow HYdrodynamics Finite Element Model) code, an unstructured-grid, finite element, three-dimensional hydrodynamic model (e.g. Umgiesser et al., 2004, Ferrarin et al., 2013). The SANIFS hydrodynamic model component has been designed to provide accurate information on hydrodynamics and active tracer fields in the coastal waters of southeastern Italy (Apulia, Basilicata and Calabria regions), where the model has a resolution of about 200-500 m. The horizontal resolution is also adequate in open-sea areas, where the element size is approximately 3 km. During the development phase, the model was initialized and forced at the lateral open boundaries through a full nesting strategy directly with the MFS fields. The heat fluxes have been computed by bulk formulae using as input data the operational analyses of the European Centre for Medium-Range Weather Forecasts. Short-range pre-operational forecast tests have been performed in different seasons to evaluate the robustness of the implemented model under different oceanographic conditions. Model results are validated by means of comparison with MFS operational results and observations.
The model is able to reproduce the large-scale oceanographic structures of the area (retaining structures similar to those of MFS in the open sea), while in the coastal area significant improvements in terms of reproduced structures and dynamics are evident.
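    SHYFEM's actual bulk formulations are considerably more elaborate, but the generic form of a bulk-formula surface flux, here the sensible heat flux driven by the air-sea temperature difference, can be sketched as follows (all coefficients and meteorological values illustrative):

```python
# Generic bulk formula for the sensible heat flux at the sea surface:
# Q_H = rho_air * c_p * C_H * U10 * (SST - T_air); positive flux warms the air.
rho_air = 1.2      # air density, kg/m^3
c_p = 1004.0       # specific heat of air at constant pressure, J/(kg K)
C_H = 1.1e-3       # bulk transfer coefficient for sensible heat (dimensionless)
U10 = 8.0          # 10 m wind speed, m/s
SST = 292.15       # sea surface temperature, K
T_air = 290.15     # 2 m air temperature, K

Q_H = rho_air * c_p * C_H * U10 * (SST - T_air)   # W/m^2
```

    In the operational chain, U10, T_air and the other atmospheric inputs would come from the ECMWF analyses, with SST from the ocean model itself.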

  8. Parametrization of 2,2,2-trifluoroethanol based on the generalized AMBER force field provides realistic agreement between experimental and calculated properties of pure liquid as well as water-mixed solutions.

    PubMed

    Vymětal, Jiří; Vondrášek, Jiří

    2014-09-04

    We present a novel force field model of 2,2,2-trifluoroethanol (TFE) based on the generalized AMBER force field. The model was exhaustively parametrized to reproduce liquid-state properties of pure TFE, namely, density, enthalpy of vaporization, self-diffusion coefficient, and the population of trans and gauche conformers. The model gives excellent predictions of other liquid-state properties such as shear viscosity, thermal expansion coefficient, and isotropic compressibility. The resulting model describes the equation of state of the liquid region unexpectedly well over a range of 100 K and 10 MPa. More importantly, the proposed TFE model was optimized for use in combination with the TIP4P/Ew and TIP4P/2005 water models. It does not manifest the excessive aggregation known for other models, and is therefore expected to describe the behavior of TFE/water mixtures more realistically. This was demonstrated by means of the Kirkwood-Buff theory of solutions and reasonable agreement with experimental data. We explored a considerable part of the parameter space and systematically tested individual combinations of parameters for performance in combination with the TIP4P/Ew and TIP4P/2005 water models. We observed ambiguity in parameters describing pure liquid TFE; however, most of the parameter sets failed for TFE/water mixtures. We clearly demonstrated the necessity for balanced TFE-TFE, TFE-water, and water-water interactions, which can be acquired only by employing an implicit polarization correction in the course of parametrization.
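    A Kirkwood-Buff integral of the kind used here to test mixture behaviour is a truncated integral over a radial distribution function, G = 4π ∫ (g(r) − 1) r² dr. The model g(r) below is an invented stand-in for a simulated TFE/water RDF, with a hard-core exclusion and one damped oscillation:

```python
import math

def g(r, sigma=0.35):
    """Model radial distribution function (nm): excluded core plus a damped oscillation."""
    if r < sigma:
        return 0.0
    return 1.0 + 0.5 * math.exp(-(r - sigma) / 0.3) * math.cos(8.0 * (r - sigma))

def kb_integral(r_max=3.0, n=30000):
    """Midpoint-rule Kirkwood-Buff integral G = 4*pi * Int (g(r)-1) r^2 dr, truncated at r_max."""
    dr = r_max / n
    return 4.0 * math.pi * sum((g((i + 0.5) * dr) - 1.0) * ((i + 0.5) * dr) ** 2 * dr
                               for i in range(n))

G = kb_integral()   # nm^3; negative G signals net depletion, positive net affinity
```

    In practice the sign and magnitude of the TFE-TFE versus TFE-water integrals are what diagnose the spurious self-aggregation that plagued earlier TFE models.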

  9. Angular-domain scattering interferometry.

    PubMed

    Shipp, Dustin W; Qian, Ruobing; Berger, Andrew J

    2013-11-15

    We present an angular-scattering optical method that is capable of measuring the mean size of scatterers in static ensembles within a field of view less than 20 μm in diameter. Using interferometry, the method overcomes the inability of intensity-based models to tolerate the large speckle grains associated with such small illumination areas. By first estimating each scatterer's location, the method can model between-scatterer interference as well as traditional single-particle Mie scattering. Direct angular-domain measurements provide finer angular resolution than digitally transformed image-plane recordings. This increases sensitivity to size-dependent scattering features, enabling more robust size estimates. The sensitivity of these angular-scattering measurements to various sizes of polystyrene beads is demonstrated. Interferometry also allows recovery of the full complex scattered field, including a size-dependent phase profile in the angular-scattering pattern.
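    The between-scatterer interference the method models has a textbook closed form for two identical point scatterers: the far-field fringe pattern is I(θ) = 4 I₀ cos²(π d sinθ / λ), whose period in sinθ is λ/d and hence encodes the separation. The parameters below are illustrative, and real Mie scatterers would add a size-dependent single-particle envelope |f(θ)|².

```python
import math

# Far-field interference of two identical point scatterers separated by d,
# illuminated by a plane wave.
lam = 0.5e-6   # wavelength, m (illustrative)
d = 2.0e-6     # scatterer separation, m (illustrative)
I0 = 1.0       # single-scatterer intensity (arbitrary units)

def intensity(theta):
    """Two-scatterer fringe pattern: 4*I0*cos^2(pi*d*sin(theta)/lam)."""
    return 4.0 * I0 * math.cos(math.pi * d * math.sin(theta) / lam) ** 2

I_forward = intensity(0.0)                 # constructive: four times a single scatterer
theta_null = math.asin(lam / (2.0 * d))    # first null of the fringe pattern
I_null = intensity(theta_null)
```

    Fitting the fringe spacing (and, with interferometric detection, the phase) is what lets location-aware models separate between-scatterer interference from the size-dependent Mie signal.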

  10. Quantum behaviour of open pumped and damped Bose-Hubbard trimers

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2018-01-01

    We propose and analyse analogs of optical cavities for atoms using three-well inline Bose-Hubbard models with pumping and losses. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a qualitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, although most do enter a steady-state regime. We find quadrature squeezing, bipartite and tripartite inseparability and entanglement, and states exhibiting the EPR paradox, depending on the parameter regimes. We also discover situations where the mean-field solutions of our models are noticeably different from the quantum solutions for the mean fields. Due to recent experimental advances, it should be possible to demonstrate the effects we predict and investigate in this article.
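    A hedged sketch of the mean-field (classical amplitude) dynamics for such an open trimer: with the on-site interaction dropped (U = 0), the pumped-damped three-well chain is linear and relaxes to a closed-form steady state, which the integration below reproduces. Parameters are illustrative, not those of the paper.

```python
# Mean-field amplitude equations for a three-well chain with tunnelling J,
# coherent pumping eps on well 1 and damping gamma on well 3 (interaction dropped):
#   da1/dt = -i*J*a2 + eps
#   da2/dt = -i*J*(a1 + a3)
#   da3/dt = -i*J*a2 - gamma*a3
J, eps, gamma = 1.0, 0.2, 0.5
a1 = a2 = a3 = 0.0 + 0.0j

dt, steps = 0.005, 40000   # integrate to t = 200, long after transients decay
for _ in range(steps):
    da1 = -1j * J * a2 + eps
    da2 = -1j * J * (a1 + a3)
    da3 = -1j * J * a2 - gamma * a3
    a1, a2, a3 = a1 + dt * da1, a2 + dt * da2, a3 + dt * da3

n1, n2, n3 = abs(a1) ** 2, abs(a2) ** 2, abs(a3) ** 2
# Fixed point of these equations: a1 = eps/gamma, a2 = -i*eps/J, a3 = -eps/gamma.
```

    The quantum statistics (squeezing, entanglement, EPR correlations) the paper analyses live on top of this classical steady state and are precisely what such a mean-field description cannot capture.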

  11. Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum

    PubMed Central

    Lv, Yong; Zhu, Qinglin; Yuan, Rui

    2015-01-01

    The nonlocal means (NL-Means) method, which has been widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in the field of 2D image signal processing, it has not received enough attention in the field of 1D signal processing. This paper proposes a novel approach that diagnoses faults of a rolling bearing based on fast NL-Means and the envelope spectrum. The parameters of the rolling bearing signals are optimized in the proposed method, which is the key contribution of this paper. This approach is applied to the fault diagnosis of rolling bearings, and the results have shown its effectiveness at detecting rolling bearing failures. PMID:25585105
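    A 1D transcription of NL-Means is compact: each sample is replaced by a weighted average of samples in a search window, weighted by the similarity of their surrounding patches rather than by distance alone. The sketch below (synthetic signal, illustrative parameters) is the plain algorithm, not the paper's optimized fast variant.

```python
import math
import random

random.seed(0)

# Noisy 1D test signal: a clean sinusoid plus Gaussian noise.
n = 200
clean = [math.sin(0.1 * i) for i in range(n)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]

def nl_means_1d(x, patch=3, search=20, h=0.4):
    """1D non-local means: weight each candidate sample by patch similarity."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - search), min(len(x), i + search + 1)):
            d2, cnt = 0.0, 0
            for k in range(-patch, patch + 1):
                if 0 <= i + k < len(x) and 0 <= j + k < len(x):
                    d2 += (x[i + k] - x[j + k]) ** 2
                    cnt += 1
            w = math.exp(-d2 / (cnt * h * h))   # similar patches -> large weight
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

denoised = nl_means_1d(noisy)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / n
mse_denoised = sum((a - b) ** 2 for a, b in zip(denoised, clean)) / n
```

    In the bearing application, the envelope spectrum would then be computed from the denoised vibration signal to expose the fault characteristic frequencies.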

  12. A FOUR-FLUID MHD MODEL OF THE SOLAR WIND/INTERSTELLAR MEDIUM INTERACTION WITH TURBULENCE TRANSPORT AND PICKUP PROTONS AS SEPARATE FLUID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usmanov, Arcadi V.; Matthaeus, William H.; Goldstein, Melvyn L., E-mail: arcadi.usmanov@nasa.gov

    2016-03-20

    We have developed a four-fluid, three-dimensional magnetohydrodynamic model of the solar wind interaction with the local interstellar medium. The unique features of the model are: (a) a three-fluid description for the charged components of the solar wind and interstellar plasmas (thermal protons, electrons, and pickup protons), (b) the built-in turbulence transport equations based on Reynolds decomposition and coupled with the mean-flow Reynolds-averaged equations, and (c) a solar corona/solar wind model that supplies inner boundary conditions at 40 au by computing solar wind and magnetic field parameters outward from the coronal base. The three charged species are described by separate energy equations and are assumed to move with the same velocity. The fourth fluid in the model is the interstellar hydrogen, which is treated by separate continuity, momentum, and energy equations and is coupled with the charged components through photoionization and charge exchange. We evaluate the effects of turbulence transport and pickup protons on the global heliospheric structure and compute the distribution of plasma, magnetic field, and turbulence parameters throughout the heliosphere for representative solar minimum and maximum conditions. We compare our results with Voyager 1 observations in the outer heliosheath and show that the relative amplitude of magnetic fluctuations just outside the heliopause is in close agreement with the value inferred from Voyager 1 measurements by Burlaga et al. The simulated profiles of magnetic field parameters in the outer heliosheath are in qualitative agreement with the Voyager 1 observations and with the analytical model of magnetic field draping around the heliopause of Isenberg et al.

  13. Beyond the Unified Model

    NASA Astrophysics Data System (ADS)

    Frauendorf, S.

    2018-04-01

    The key elements of the Unified Model are reviewed. The microscopic derivation of the Bohr Hamiltonian by means of adiabatic time-dependent mean field theory is presented. By checking against experimental data, the limitations of the Unified Model are delineated. The description of the strong coupling between the rotational and intrinsic degrees of freedom in the framework of the rotating mean field is presented from a conceptual point of view. The classification of rotational bands as configurations of rotating quasiparticles is introduced. The occurrence of uniform rotation about an axis that differs from the principal axes of the nuclear density distribution is discussed. The physics behind this tilted-axis rotation, unknown in molecular physics, is explained on a basic level. The new symmetries of the rotating mean field that arise from the various orientations of the angular momentum vector with respect to the triaxial nuclear density distribution, and their manifestation in the level sequence of rotational bands, are discussed. Resulting phenomena, such as transverse wobbling, rotational chirality, magnetic rotation and band termination, are discussed. Using the concept of spontaneous symmetry breaking, the microscopic underpinning of the rotational degrees of freedom is refined.

  14. A molecular-field-based similarity study of non-nucleoside HIV-1 reverse transcriptase inhibitors

    NASA Astrophysics Data System (ADS)

    Mestres, Jordi; Rohrer, Douglas C.; Maggiora, Gerald M.

    1999-01-01

    This article describes a molecular-field-based similarity method for aligning molecules by matching their steric and electrostatic fields and an application of the method to the alignment of three structurally diverse non-nucleoside HIV-1 reverse transcriptase inhibitors. A brief description of the method, as implemented in the program MIMIC, is presented, including a discussion of pairwise and multi-molecule similarity-based matching. The application provides an example that illustrates how relative binding orientations of molecules can be determined in the absence of detailed structural information on their target protein. In the particular system studied here, availability of the X-ray crystal structures of the respective ligand-protein complexes provides a means for constructing an 'experimental model' of the relative binding orientations of the three inhibitors. The experimental model is derived by using MIMIC to align the steric fields of the three protein P66 subunit main chains, producing an overlay with a 1.41 Å average rms distance between the corresponding Cα's in the three chains. The inter-chain residue similarities for the backbone structures show that the main-chain conformations are conserved in the region of the inhibitor-binding site, with the major deviations located primarily in the 'finger' and RNase H regions. The resulting inhibitor structure overlay provides an experimentally-based model that can be used to evaluate the quality of the direct a priori inhibitor alignment obtained using MIMIC. It is found that the 'best' pairwise alignments do not always correspond to the experimental model alignments. Therefore, simply combining the best pairwise alignments will not necessarily produce the optimal multi-molecule alignment. However, the best simultaneous three-molecule alignment was found to reproduce the experimental inhibitor alignment model. A pairwise consistency index has been derived which gauges the quality of combining the pairwise alignments and aids in efficiently forming the optimal multi-molecule alignment. Two post-alignment procedures are described that provide information on feature-based and field-based pharmacophoric patterns. The former corresponds to traditional pharmacophore models and is derived from the contribution of individual atoms to the total similarity. The latter is based on molecular regions rather than atoms and is constructed by computing the percent contribution to the similarity of individual points in a regular lattice surrounding the molecules, which when contoured and colored visually depict regions of highly conserved similarity. A discussion of how the information provided by each of the procedures is useful in drug design is also presented.
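    Field-based similarity of the kind MIMIC evaluates is commonly quantified with a Carbo-like index, the normalized overlap of two molecular fields. The sketch below is a toy illustration on a one-dimensional grid with Gaussian stand-in "fields", not MIMIC's actual steric/electrostatic fields on a 3-D lattice:

    ```python
    import numpy as np

    # Carbo-like similarity index: S = <fA, fB> / (||fA|| * ||fB||).
    # The two "fields" here are illustrative Gaussians on a 1-D grid standing
    # in for the molecular fields evaluated on a lattice around two molecules.
    x = np.linspace(-5.0, 5.0, 1001)
    fA = np.exp(-(x - 0.0) ** 2)   # field of molecule A
    fB = np.exp(-(x - 0.5) ** 2)   # field of molecule B, shifted by 0.5

    # Normalized overlap: 1.0 for identical fields, smaller as they diverge.
    S = np.dot(fA, fB) / (np.linalg.norm(fA) * np.linalg.norm(fB))
    print(f"Carbo similarity: {S:.3f}")
    ```

    For two unit-width Gaussians separated by d, this index reduces analytically to exp(-d²/2), so the alignment problem amounts to maximizing S over relative translations and rotations of the fields.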

  15. Characterizing optical properties and spatial heterogeneity of human ovarian tissue using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Nandy, Sreyankar; Mostafa, Atahar; Kumavor, Patrick D.; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2016-10-01

    A spatial frequency domain imaging (SFDI) system was developed for characterizing ex vivo human ovarian tissue using wide-field absorption and scattering properties and their spatial heterogeneities. Based on the observed differences between absorption and scattering images of different ovarian tissue groups, six parameters were quantitatively extracted. These are the mean absorption and scattering, spatial heterogeneities of both absorption and scattering maps measured by a standard deviation, and a fitting error of a Gaussian model fitted to normalized mean Radon transform of the absorption and scattering maps. A logistic regression model was used for classification of malignant and normal ovarian tissues. A sensitivity of 95%, specificity of 100%, and area under the curve of 0.98 were obtained using six parameters extracted from the SFDI images. The preliminary results demonstrate the diagnostic potential of the SFDI method for quantitative characterization of wide-field optical properties and the spatial distribution heterogeneity of human ovarian tissue. SFDI could be an extremely robust and valuable tool for evaluation of the ovary and detection of neoplastic changes of ovarian cancer.
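    The classification step described above, six SFDI-derived features fed to a logistic regression, can be sketched as follows. The feature values and labels here are synthetic placeholders (not the study's data), and the fit uses plain gradient descent rather than any particular statistics package:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the six SFDI features (mean absorption and
    # scattering, their spatial standard deviations, and two Gaussian-fit
    # errors). Illustrative values only, not the study's measurements.
    n = 100
    X_normal = rng.normal(loc=0.0, scale=1.0, size=(n, 6))   # label 0
    X_malig = rng.normal(loc=1.5, scale=1.0, size=(n, 6))    # label 1
    X = np.vstack([X_normal, X_malig])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    # Logistic regression fitted by batch gradient descent.
    w, b, lr = np.zeros(6), 0.0, 0.1
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)

    pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
    accuracy = np.mean(pred == y)
    print(f"training accuracy: {accuracy:.2f}")
    ```

    In practice the fitted model's probability threshold would be swept to trace the ROC curve from which sensitivity, specificity, and the area under the curve are read off.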

  16. Color Superconductivity and Charge Neutrality in Yukawa Theory

    NASA Astrophysics Data System (ADS)

    Alford, Mark G.; Pangeni, Kamal; Windisch, Andreas

    2018-02-01

    It is generally believed that when Cooper pairing occurs between two different species of fermions, their Fermi surfaces become locked together so that the resultant state remains "neutral," with equal number densities of the two species, even when subjected to a chemical potential that couples to the difference in number densities. This belief is based on mean-field calculations in models with a zero-range interaction, where the anomalous self-energy is independent of energy and momentum. Following up on an early report of a deviation from neutrality in a Dyson-Schwinger calculation of color-flavor-locked quark matter, we investigate the neutrality of a two-species condensate using a Yukawa model which has a finite-range interaction. In a mean field calculation we obtain the full energy-momentum dependence of the self-energy and find that the energy dependence leads to a population imbalance in the Cooper-paired phase when it is stressed by a species-dependent chemical potential. This gives some support to the suggestion that the color-flavor-locked phase of quark matter might not be an insulator.

  17. Coupled forward-backward trajectory approach for nonequilibrium electron-ion dynamics

    NASA Astrophysics Data System (ADS)

    Sato, Shunsuke A.; Kelly, Aaron; Rubio, Angel

    2018-04-01

    We introduce a simple ansatz for the wave function of a many-body system based on coupled forward and backward propagating semiclassical trajectories. This method is primarily aimed at, but not limited to, treating nonequilibrium dynamics in electron-phonon systems. The time evolution of the system is obtained from the Euler-Lagrange variational principle, and we show that this ansatz yields Ehrenfest mean-field theory in the limit that the forward and backward trajectories are orthogonal, and in the limit that they coalesce. We investigate accuracy and performance of this method by simulating electronic relaxation in the spin-boson model and the Holstein model. Although this method involves only pairs of semiclassical trajectories, it shows a substantial improvement over mean-field theory, capturing quantum coherence of nuclear dynamics as well as electron-nuclear correlations. This improvement is particularly evident in nonadiabatic systems, where the accuracy of this coupled trajectory method extends well beyond the perturbative electron-phonon coupling regime. This approach thus provides an attractive route forward to the ab initio description of relaxation processes, such as thermalization, in condensed phase systems.

  18. A tomographic technique for aerodynamics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Lee, G.

    1985-01-01

    Computer aided tomography (CAT) provides a means of noninvasively measuring the air density distribution around an aerodynamic model. The technique is global in that a large portion of the flow field can be measured. The applicability of CAT at transonic speeds was tested: a hemispherical-nose cylinder afterbody model was tested at a Mach number of 0.8 with a new laser holographic interferometer in the 2- by 2-Foot Transonic Wind Tunnel. Holograms of the flow field were taken and reconstructed into interferograms. The fringe distribution (a measure of the local densities) was digitized for subsequent data reduction. A computer program based on the Fourier-transform technique was developed to convert the fringe distribution into three-dimensional densities around the model. Theoretical aerodynamic densities were calculated for assessing the accuracy of the data obtained from the tomographic method.
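    The fringe-to-density step rests on the Gladstone-Dale relation: each fringe corresponds to one wavelength of optical path difference, N·λ = K·∫(ρ − ρ_ref) ds. A minimal sketch for the degenerate case of a uniform-density path follows; the wavelength and Gladstone-Dale constant are standard values for a He-Ne laser in air, while the path length and fringe count are assumed for illustration, not taken from this experiment:

    ```python
    # Gladstone-Dale relation for interferometry: N * wavelength equals the
    # Gladstone-Dale constant K times the path integral of the density change.
    # For a uniform-density path of length L: delta_rho = N * wavelength / (K * L).
    wavelength = 632.8e-9   # m, He-Ne laser line
    K = 2.26e-4             # m^3/kg, Gladstone-Dale constant for air
    L = 0.10                # m, assumed uniform path length (illustrative)
    N = 5                   # assumed observed fringe shift (illustrative)

    delta_rho = N * wavelength / (K * L)
    print(f"density change: {delta_rho:.4f} kg/m^3")   # -> 0.1400 kg/m^3
    ```

    In the actual tomographic reconstruction the density is not uniform along the ray, so many such path-integrated fringe measurements at different viewing angles are inverted (here, via a Fourier-transform technique) to recover the local three-dimensional density field.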

  19. Modeling nearshore morphological evolution at seasonal scale

    USGS Publications Warehouse

    Walstra, D.-J.R.; Ruggiero, P.; Lesser, G.; Gelfenbaum, G.

    2006-01-01

    A process-based model is compared with field measurements to test and improve our ability to predict nearshore morphological change at seasonal time scales. The field experiment, along the dissipative beaches adjacent to Grays Harbor, Washington USA, successfully captured the transition between the high-energy erosive conditions of winter and the low-energy beach-building conditions typical of summer. The experiment documented shoreline progradation on the order of 20 m and as much as 175 m of onshore bar migration. Significant alongshore variability was observed in the morphological response of the sandbars over a 4 km reach of coast. A detailed sensitivity analysis suggests that the model results are more sensitive to adjusting the sediment transport associated with asymmetric oscillatory wave motions than to adjusting the transport due to mean currents. Initial results suggest that alongshore variations in the initial bathymetry are partially responsible for the observed alongshore variable morphological response during the experiment. Copyright ASCE 2006.

  20. A Case Study of the Weather Research and Forecasting Model Applied to the Joint Urban 2003 Tracer Field Experiment. Part 1. Wind and Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Matthew A.; Brown, Michael J.; Halverson, Scot A.

    Numerical-weather-prediction models are often used to supply the mean wind and turbulence fields for atmospheric transport and dispersion plume models, as they provide dense horizontally- and vertically-resolved geographic coverage in comparison to typically sparse monitoring networks. Here, the Weather Research and Forecasting (WRF) model was run over the month-long period of the Joint Urban 2003 field campaign conducted in Oklahoma City, and the simulated fields important to transport and dispersion models were compared to measurements from a number of sodars, tower-based sonic anemometers, and balloon soundings located in the greater metropolitan area. Time histories of computed wind speed, wind direction, turbulent kinetic energy (e), friction velocity (u*), and reciprocal Obukhov length (1/L) were compared to measurements over the 1-month field campaign. Vertical profiles of wind speed, potential temperature (θ), and e were compared during short intensive operating periods. The WRF model was typically able to replicate the measured diurnal variation of the wind fields, but with average absolute wind-direction and wind-speed differences of 35° and 1.9 m s-1, respectively. Using the Mellor-Yamada-Janjic (MYJ) surface-layer scheme, the WRF model was found to generally underpredict the surface-layer turbulent kinetic energy but overpredict the u* observed above a suburban region of Oklahoma City. The TKE-threshold method used by the WRF model's MYJ surface-layer scheme to compute the boundary-layer height (h) consistently overestimated h derived from a θ-gradient method, whether using observed or modelled θ profiles.
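    Model-versus-observation wind-direction errors like the 35° figure above must be averaged on the circle, so that, for example, 350° and 10° differ by 20° rather than 340°. A minimal sketch with illustrative values (not the campaign's data):

    ```python
    import numpy as np

    # Illustrative observed and modelled wind directions in degrees.
    obs_dir = np.array([350.0, 10.0, 180.0, 90.0])
    mod_dir = np.array([10.0, 350.0, 200.0, 60.0])

    # Absolute angular difference, wrapped onto [0, 180] degrees so that
    # directions on either side of north compare correctly.
    diff = np.abs(mod_dir - obs_dir)
    diff = np.minimum(diff, 360.0 - diff)

    mean_abs_dir_error = diff.mean()
    print(f"mean absolute direction error: {mean_abs_dir_error:.1f} deg")  # -> 22.5 deg
    ```

    Without the wrap, the first two pairs would each contribute a spurious 340° error and dominate the average.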
