Sample records for full explicit models

  1. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The methodology is presented for the development of full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the Nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was equipped with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers, with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved to be superior to the implicit controller in performance and ease of design.
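
    The abstract does not reproduce the controller equations; as a point of reference, a minimal sketch of how an explicit model-following optimal controller is commonly posed (a reference model driven by the pilot command, with a quadratic penalty on the tracking error) is given below. The matrices are illustrative placeholders, not values from the paper.

    ```latex
    % Generic explicit model-following setup (Q, R, A_m, B_m are placeholders).
    \begin{align}
      \dot{x}   &= A x + B u                 && \text{helicopter dynamics} \\
      \dot{x}_m &= A_m x_m + B_m u_p         && \text{reference model driven by pilot command } u_p \\
      J &= \int_0^\infty \big[(x - x_m)^\top Q\,(x - x_m) + u^\top R\,u\big]\,dt
          && \text{cost minimized by the optimal controller}
    \end{align}
    ```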

  2. Exact Local Correlations and Full Counting Statistics for Arbitrary States of the One-Dimensional Interacting Bose Gas

    NASA Astrophysics Data System (ADS)

    Bastianello, Alvise; Piroli, Lorenzo; Calabrese, Pasquale

    2018-05-01

    We derive exact analytic expressions for the n-body local correlations in the one-dimensional Bose gas with contact repulsive interactions (Lieb-Liniger model) in the thermodynamic limit. Our results are valid for arbitrary states of the model, including ground and thermal states, stationary states after a quantum quench, and nonequilibrium steady states arising in transport settings. Calculations for these states are explicitly presented and physical consequences are critically discussed. We also show that the n-body local correlations are directly related to the full counting statistics for the particle-number fluctuations in a short interval, for which we provide an explicit analytic result.

  3. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate-inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix (the "mass" matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.
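
    The abstract describes the matrix-free explicit solver only at a high level. The sketch below shows, under stated assumptions, how a sparse approximate inverse of a mass matrix can be built column by column on successive sparsity-pattern orders of the original matrix; the function name and pattern-growth rule are illustrative and are not taken from CONPIC.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def spai_inverse(M, pattern_order=1):
        """Sparse approximate inverse of a sparse SPD finite-element 'mass' matrix M.

        The sparsity pattern allowed for the inverse is taken from successive powers
        of M's own pattern (pattern_order = 1, 2, ...), echoing the 'successive
        sparsity pattern orders' idea in the abstract.  Each column comes from an
        independent small least-squares problem, so no global solve is needed.
        """
        M = sp.csr_matrix(M)
        n = M.shape[0]
        P = (abs(M) > 0).astype(np.int8)
        patt = P
        for _ in range(pattern_order - 1):
            patt = ((patt @ P) > 0).astype(np.int8)
        patt = sp.csc_matrix(patt)

        Minv = sp.lil_matrix((n, n))
        for j in range(n):
            J = patt[:, j].nonzero()[0]            # allowed nonzeros of column j
            I = np.unique(M[:, J].nonzero()[0])    # rows reached by those columns
            A = M[I, :][:, J].toarray()            # small dense sub-block of M
            e = (I == j).astype(float)             # unit vector e_j restricted to I
            m_j, *_ = np.linalg.lstsq(A, e, rcond=None)
            for r, val in zip(J, m_j):
                Minv[r, j] = val
        return Minv.tocsr()

    # In an explicit step, the update a = Minv @ f replaces solving M a = f.
    ```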

  4. Turbulence Model Predictions of Strongly Curved Flow in a U-Duct

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Morrison, Joseph H.

    2000-01-01

    The ability of three types of turbulence models to accurately predict the effects of curvature on the flow in a U-duct is studied. An explicit algebraic stress model performs slightly better than one- or two-equation linear eddy viscosity models, although it is necessary to fully account for the variation of the production-to-dissipation-rate ratio in the algebraic stress model formulation. In their original formulations, none of these turbulence models fully captures the suppressed turbulence near the convex wall, whereas a full Reynolds stress model does. Some of the underlying assumptions used in the development of algebraic stress models are investigated and compared with the computed flowfield from the full Reynolds stress model. Through this analysis, the assumption of Reynolds stress anisotropy equilibrium used in the algebraic stress model formulation is found to be incorrect in regions of strong curvature. By accounting for the local variation of the principal axes of the strain rate tensor, the explicit algebraic stress model correctly predicts the suppressed turbulence in the outer part of the boundary layer near the convex wall.

  5. A spatial stochastic programming model for timber and core area management under risk of stand-replacing fire

    Treesearch

    Dung Tuan Nguyen

    2012-01-01

    Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...

  6. Improved Subcell Model for the Prediction of Braided Composite Response

    NASA Technical Reports Server (NTRS)

    Cater, Christopher R.; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2013-01-01

    In this work, the modeling of triaxially braided composites was explored through a semi-analytical discretization. Four unique subcells, each approximated by a "mosaic" stacking of unidirectional composite plies, were modeled through the use of layered-shell elements within the explicit finite element code LS-DYNA. Two subcell discretizations were investigated: a model explicitly capturing pure matrix regions, and a novel model which absorbed pure matrix pockets into neighboring tow plies. The in-plane stiffness properties of both models, computed using bottom-up micromechanics, correlated well to experimental data. The absorbed matrix model, however, was found to best capture out-of-plane flexural properties by comparing numerical simulations of the out-of-plane displacements from single-ply tension tests to experimental full field data. This strong correlation of out-of-plane characteristics supports the current modeling approach as a viable candidate for future work involving impact simulations.

  7. Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

    2014-12-01

    Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

  8. Fuselage Versus Subcomponent Panel Response Correlation Based on ABAQUS Explicit Progressive Damage Analysis Tools

    NASA Technical Reports Server (NTRS)

    Gould, Kevin E.; Satyanarayana, Arunkumar; Bogert, Philip B.

    2016-01-01

    Analysis performed in this study substantiates the need for high fidelity vehicle level progressive damage analyses (PDA) structural models for use in the verification and validation of proposed sub-scale structural models and to support required full-scale vehicle level testing. PDA results are presented that capture and correlate the responses of sub-scale 3-stringer and 7-stringer panel models and an idealized 8-ft diameter fuselage model, which provides a vehicle level environment for the 7-stringer sub-scale panel model. Two unique skin-stringer attachment assumptions are considered and correlated in the models analyzed: the TIE constraint interface versus the cohesive element (COH3D8) interface. Evaluating different interfaces allows for assessing a range of predicted damage modes, including delamination and crack propagation responses. Damage models considered in this study are the ABAQUS built-in Hashin procedure and the COmplete STress Reduction (COSTR) damage procedure implemented through a VUMAT user subroutine using the ABAQUS/Explicit code.

  9. A Collection of Technical Studies Completed for the Computer-Aided Acquisition and Logistic Support (CALS) Program Fiscal Year 1987. Volume 2

    DTIC Science & Technology

    1988-03-01

    ...short description of how the TOP-CGM profile differs from the full CGM standard. This change, along with explicitly pulling out the Conformance and...the CGI/CGEM segmentation model provides such capability. Goals and Design Criteria: The segment model of CGEM is to meet the following criteria: ...

  10. A new method for calculating time-dependent atomic level populations

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.

    1981-01-01

    A method is described for reducing the number of levels to be dealt with in calculating time-dependent populations of atoms or ions in plasmas. The procedure effectively extends the collisional-radiative model to consecutive stages of ionization, treating ground and metastable levels explicitly and excited levels implicitly. Direct comparisons of full and simulated systems are carried out for five-level models.
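
    The abstract outlines the reduction strategy (explicit ground and metastable levels, implicit excited levels) without giving equations. A generic quasi-steady-state elimination for a linear rate system dn/dt = A n, which is the standard way to realize such a reduction, is sketched below; the partitioning notation is an assumption, not the paper's.

    ```python
    import numpy as np

    def reduce_rate_matrix(A, slow):
        """Collapse a linear level-population system dn/dt = A n by treating the
        'slow' levels (ground + metastables) explicitly and the remaining excited
        levels implicitly (quasi-steady state)."""
        n = A.shape[0]
        slow = np.asarray(slow)
        fast = np.setdiff1d(np.arange(n), slow)
        Ass = A[np.ix_(slow, slow)]
        Asf = A[np.ix_(slow, fast)]
        Afs = A[np.ix_(fast, slow)]
        Aff = A[np.ix_(fast, fast)]
        # quasi-steady state for excited levels: 0 = Afs n_s + Aff n_f
        lift = -np.linalg.solve(Aff, Afs)          # n_f = lift @ n_s
        A_eff = Ass + Asf @ lift                   # effective slow-level rate matrix
        return A_eff, lift

    # dn_s/dt = A_eff @ n_s is then integrated with far fewer levels, and the
    # excited populations are recovered algebraically as n_f = lift @ n_s.
    ```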

  11. Cosmology on a cosmic ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedermann, Florian; Schneider, Robert, E-mail: florian.niedermann@physik.lmu.de, E-mail: robert.bob.schneider@physik.uni-muenchen.de

    We derive the modified Friedmann equations for a generalization of the Dvali-Gabadadze-Porrati (DGP) model in which the brane has one additional compact dimension. The main new feature is the emission of gravitational waves into the bulk. We study two classes of solutions: first, if the compact dimension is stabilized, the waves vanish and one exactly recovers DGP cosmology. However, a stabilization by means of physical matter is not possible for a tension-dominated brane, thus implying a late time modification of 4D cosmology different from DGP. Second, for a freely expanding compact direction, we find exact attractor solutions with zero 4D Hubble parameter despite the presence of a 4D cosmological constant. The model hence constitutes an explicit example of dynamical degravitation at the full nonlinear level. Without stabilization, however, there is no 4D regime and the model is ruled out observationally, as we demonstrate explicitly by comparing to supernova data.

  12. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir

    1994-03-01

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  13. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, S.

    1994-03-15

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  14. An explicitly solvated full atomistic model of the cardiac thin filament and application on the calcium binding affinity effects from familial hypertrophic cardiomyopathy linked mutations

    NASA Astrophysics Data System (ADS)

    Williams, Michael; Schwartz, Steven

    2015-03-01

    The previous version of our cardiac thin filament (CTF) model consisted of the troponin complex (cTn), two coiled-coil dimers of tropomyosin (Tm), and 29 actin units. We now present the newest revision of the model to include explicit solvation. The model was developed to continue our study of genetic mutations in the CTF proteins which are linked to familial hypertrophic cardiomyopathies. Binding of calcium to the cTnC subunit causes subtle conformational changes to propagate through the cTnC to the cTnI subunit, which then detaches from actin. Conformational changes propagate through to the cTnT subunit, which allows Tm to move into the open position along actin, leading to muscle contraction. Calcium dissociation allows for the reverse to occur, which results in muscle relaxation. The inclusion of explicit TIP3 water solvation allows the model to better capture individual local solvent-protein interactions, which are important when observing the N-lobe calcium binding pocket of the cTnC. We are able to compare in silico and in vitro experimental results to better understand the physiological effects of mutants, such as the R92L/W and F110V/I of the cTnT, on the calcium binding affinity compared to the wild type.

  15. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable) we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
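
    For readers unfamiliar with the two constitutive laws being compared, the standard isochoric strain-energy forms are reproduced below; volumetric terms and the particular material parameters used in the paper are omitted, since the abstract does not give them.

    ```latex
    \begin{align}
      \Psi_{\mathrm{NH}}  &= \frac{\mu}{2}\,(\bar I_1 - 3) \\
      \Psi_{\mathrm{GOH}} &= \frac{\mu}{2}\,(\bar I_1 - 3)
         + \frac{k_1}{2 k_2} \sum_{i=4,6} \Big( e^{\,k_2 \bar E_i^{2}} - 1 \Big),
      \qquad
      \bar E_i = \kappa\,(\bar I_1 - 3) + (1 - 3\kappa)\,(\bar I_i - 1)
    \end{align}
    ```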

  16. Trojan War displayed as a full annihilation-diffusion-reaction model

    NASA Astrophysics Data System (ADS)

    Flores, J. C.

    2017-02-01

    The diffusive pair annihilation model with embedded topological domains and archaeological data is applied in an analysis of the hypothetical Trojan-Greek war during the late Bronze Age. Estimations of the parameters are explicitly made for the critical dynamics of the model. In particular, the 8-metre walls of Troy could be viewed as the effective shield that provided the technological difference between the two armies. Suggestively, the numbers in The Iliad are quite sound, being in accord with Lanchester's laws of warfare.
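
    As context for the two ingredients named in the abstract, the generic diffusive pair-annihilation equations and Lanchester's square law take the following forms; the coefficients are placeholders, and the paper's fitted values for the Trojan scenario are not reproduced here.

    ```latex
    \begin{align}
      \partial_t u &= D_u \nabla^2 u - k\,u\,v, \qquad
      \partial_t v = D_v \nabla^2 v - k\,u\,v
         && \text{(pair annihilation } u + v \to \emptyset\text{)} \\
      \dot u &= -\alpha\,v, \qquad \dot v = -\beta\,u,
      \qquad \beta u^2 - \alpha v^2 = \text{const}
         && \text{(Lanchester's square law)}
    \end{align}
    ```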

  17. On the Nexus of the Spatial Dynamics of Global Urbanization and the Age of the City

    PubMed Central

    Scheuer, Sebastian; Haase, Dagmar; Volk, Martin

    2016-01-01

    A number of concepts exist regarding how urbanization can be described as a process. Understanding this process that affects billions of people and its future development in a spatial manner is imperative to address related issues such as human quality of life. The focus of spatially explicit studies on urbanization is typically a city, a particular urban region, or an agglomeration. However, gaps remain in spatially explicit global models. This paper addresses that issue by examining the spatial dynamics of urban areas over time, for a full coverage of the world. The presented model identifies past, present and potential future hotspots of urbanization as a function of an urban area's spatial variation and age, whose relation could be depicted both as a proxy and as a path of urban development. PMID:27490199

  18. On the Nexus of the Spatial Dynamics of Global Urbanization and the Age of the City.

    PubMed

    Scheuer, Sebastian; Haase, Dagmar; Volk, Martin

    2016-01-01

    A number of concepts exist regarding how urbanization can be described as a process. Understanding this process that affects billions of people and its future development in a spatial manner is imperative to address related issues such as human quality of life. The focus of spatially explicit studies on urbanization is typically a city, a particular urban region, or an agglomeration. However, gaps remain in spatially explicit global models. This paper addresses that issue by examining the spatial dynamics of urban areas over time, for a full coverage of the world. The presented model identifies past, present and potential future hotspots of urbanization as a function of an urban area's spatial variation and age, whose relation could be depicted both as a proxy and as a path of urban development.

  19. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
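
    The generalized estimators in the paper have no simple closed form, but the classical two-subclass change-in-ratio estimator that they extend can be written directly. The sketch below is that textbook baseline, not the paper's full model with effort data and time-varying encounter probabilities.

    ```python
    def cir_estimate(p1, p2, removals_x, removals_total):
        """Classic two-subclass change-in-ratio estimate of initial population size.

        p1, p2          observed proportions of subclass x before and after removals
        removals_x      known number of subclass-x animals removed
        removals_total  known total number of animals removed
        """
        if p1 == p2:
            raise ValueError("CIR is undefined when the subclass ratio does not change")
        return (removals_x - p2 * removals_total) / (p1 - p2)

    # e.g. ratio drops from 0.60 to 0.40 after removing 150 x-type of 200 total:
    # cir_estimate(0.60, 0.40, 150, 200) -> 350 animals initially
    ```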

  20. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed with the explicit software ANSYS/LS-DYNA. To improve the solving speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of implicit and explicit algorithms. The solution method was first applied to calculate the pre-loading of wheel/rail rolling contact with the explicit algorithm, and the results then became the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. For comparison, the common implicit-explicit order solution method is also used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method, while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.

  1. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
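
    A minimal example of the equation-oriented style implemented in the Brian 2 simulator is shown below; the equation and parameter values are illustrative, taken from common tutorial usage rather than from the paper.

    ```python
    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    tau = 10*ms  # membrane time constant (illustrative value)

    # The model is given as a textual equation string rather than assembled
    # from pre-defined library components.
    eqs = 'dv/dt = (1.1 - v) / tau : 1'
    group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')

    spikes = SpikeMonitor(group)
    run(100*ms)
    print(spikes.num_spikes)
    ```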

  2. Modelling zwitterions in solution: 3-fluoro-γ-aminobutyric acid (3F-GABA).

    PubMed

    Cao, Jie; Bjornsson, Ragnar; Bühl, Michael; Thiel, Walter; van Mourik, Tanja

    2012-01-02

    The conformations and relative stabilities of folded and extended 3-fluoro-γ-aminobutyric acid (3F-GABA) conformers were studied using explicit solvation models. Geometry optimisations in the gas phase with one or two explicit water molecules favour folded and neutral structures containing intramolecular NH···O-C hydrogen bonds. With three or five explicit water molecules zwitterionic minima are obtained, with folded structures being preferred over extended conformers. The stability of folded versus extended zwitterionic conformers increases on going from a PCM continuum solvation model to the microsolvated complexes, though extended structures become less disfavoured with the inclusion of more water molecules. Full explicit solvation was studied with a hybrid quantum-mechanical/molecular-mechanical (QM/MM) scheme and molecular dynamics simulations, including more than 6000 TIP3P water molecules. According to free energies obtained from thermodynamic integration at the PM3/MM level and corrected for B3LYP/MM total energies, the fully extended conformer is more stable than folded ones by about -4.5 kJ mol(-1). B3LYP-computed (3)J(F,H) NMR spin-spin coupling constants, averaged over PM3/MM-MD trajectories, agree best with experiment for this fully extended form, in accordance with the original NMR analysis. The seeming discrepancy between static PCM calculations and experiment noted previously is now resolved. That the inexpensive semiempirical PM3 method performs so well for this archetypical zwitterion is encouraging for further QM/MM studies of biomolecular systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. The Nonstationary Dynamics of Fitness Distributions: Asexual Model with Epistasis and Standing Variation

    PubMed Central

    Martin, Guillaume; Roques, Lionel

    2016-01-01

    Various models describe asexual evolution by mutation, selection, and drift. Some focus directly on fitness, typically modeling drift but ignoring or simplifying both epistasis and the distribution of mutation effects (traveling wave models). Others follow the dynamics of quantitative traits determining fitness (Fisher’s geometric model), imposing a complex but fixed form of mutation effects and epistasis, and often ignoring drift. In all cases, predictions are typically obtained in high or low mutation rate limits and for long-term stationary regimes, thus losing information on transient behaviors and the effect of initial conditions. Here, we connect fitness-based and trait-based models into a single framework, and seek explicit solutions even away from stationarity. The expected fitness distribution is followed over time via its cumulant generating function, using a deterministic approximation that neglects drift. In several cases, explicit trajectories for the full fitness distribution are obtained for arbitrary mutation rates and standing variance. For nonepistatic mutations, especially with beneficial mutations, this approximation fails over the long term but captures the early dynamics, thus complementing stationary stochastic predictions. The approximation also handles several diminishing returns epistasis models (e.g., with an optimal genotype); it can be applied at and away from equilibrium. General results arise at equilibrium, where fitness distributions display a “phase transition” with mutation rate. Beyond this phase transition, in Fisher’s geometric model, the full trajectory of fitness and trait distributions takes a simple form; robust to the details of the mutant phenotype distribution. Analytical arguments are explored regarding why and when the deterministic approximation applies. PMID:27770037

  4. Quantum cluster theory for the polarizable continuum model. I. The CCSD level with analytical first and second derivatives.

    PubMed

    Cammi, R

    2009-10-28

    We present a general formulation of the coupled-cluster (CC) theory for a molecular solute described within the framework of the polarizable continuum model (PCM). The PCM-CC theory is derived in its complete form, called the PTDE scheme, in which the correlated electronic density is used to obtain a self-consistent reaction field, and in an approximate form, called the PTE scheme, in which the PCM-CC equations are solved assuming a fixed Hartree-Fock solvent reaction field. Explicit forms for the PCM-CC-PTDE equations are derived at the single and double (CCSD) excitation level of the cluster operator. At the same level, explicit equations for the analytical first derivatives of the PCM basic energy functional are presented, and analytical second derivatives are also discussed. The corresponding PCM-CCSD-PTE equations are given as a special case of the full theory.

  5. Development of a Navier-Stokes algorithm for parallel-processing supercomputers. Ph.D. Thesis - Colorado State Univ., Dec. 1988

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.

  6. From non-trivial geometries to power spectra and vice versa

    NASA Astrophysics Data System (ADS)

    Brooker, D. J.; Tsamis, N. C.; Woodard, R. P.

    2018-04-01

    We review a recent formalism which derives the functional forms of the primordial (tensor and scalar) power spectra of scalar-potential inflationary models. The formalism incorporates the case of geometries with a non-constant first slow-roll parameter. Analytic expressions for the power spectra are given that explicitly display the dependence on the geometric properties of the background. Moreover, we present the full algorithm for using our formalism to reconstruct the model from the observed power spectra. Our techniques are applied to models possessing "features" in their potential, with excellent agreement.

  7. The Critical Z-Invariant Ising Model via Dimers: Locality Property

    NASA Astrophysics Data System (ADS)

    Boutillier, Cédric; de Tilière, Béatrice

    2011-01-01

    We study a large class of critical two-dimensional Ising models, namely critical Z-invariant Ising models. Fisher (J Math Phys 7:1776-1781, 1966) introduced a correspondence between the Ising model and the dimer model on a decorated graph, thus setting dimer techniques as a powerful tool for understanding the Ising model. In this paper, we give a full description of the dimer model corresponding to the critical Z-invariant Ising model, consisting of explicit expressions which only depend on the local geometry of the underlying isoradial graph. Our main result is an explicit local formula for the inverse Kasteleyn matrix, in the spirit of Kenyon (Invent Math 150(2):409-439, 2002), as a contour integral of the discrete exponential function of Mercat (Discrete period matrices and related topics, 2002) and Kenyon (Invent Math 150(2):409-439, 2002) multiplied by a local function. Using results of Boutillier and de Tilière (Prob Theor Rel Fields 147(3-4):379-413, 2010) and techniques of de Tilière (Prob Th Rel Fields 137(3-4):487-518, 2007) and Kenyon (Invent Math 150(2):409-439, 2002), this yields an explicit local formula for a natural Gibbs measure, and a local formula for the free energy. As a corollary, we recover Baxter's formula for the free energy of the critical Z-invariant Ising model (Baxter, in Exactly solved models in statistical mechanics, Academic Press, London, 1982), and thus a new proof of it. The latter is equal, up to a constant, to the logarithm of the normalized determinant of the Laplacian obtained in Kenyon (Invent Math 150(2):409-439, 2002).

  8. Effect of the explicit flexibility of the InhA enzyme from Mycobacterium tuberculosis in molecular docking simulations.

    PubMed

    Cohen, Elisangela M L; Machado, Karina S; Cohen, Marcelo; de Souza, Osmar Norberto

    2011-12-22

    Protein/receptor explicit flexibility has recently become an important feature of molecular docking simulations. Taking the flexibility into account brings the docking simulation closer to the receptors' real behaviour in its natural environment. Several approaches have been developed to address this problem. Among them, modelling the full flexibility as an ensemble of snapshots derived from a molecular dynamics simulation (MD) of the receptor has proved very promising. Despite its potential, however, only a few studies have employed this method to probe its effect in molecular docking simulations. We hereby use ensembles of snapshots obtained from three different MD simulations of the InhA enzyme from M. tuberculosis (Mtb), the wild-type (InhA_wt), InhA_I16T, and InhA_I21V mutants to model their explicit flexibility, and to systematically explore their effect in docking simulations with three different InhA inhibitors, namely, ethionamide (ETH), triclosan (TCL), and pentacyano(isoniazid)ferrate(II) (PIF). The use of fully-flexible receptor (FFR) models of InhA_wt, InhA_I16T, and InhA_I21V mutants in docking simulation with the inhibitors ETH, TCL, and PIF revealed significant differences in the way they interact as compared to the rigid, InhA crystal structure (PDB ID: 1ENY). In the latter, only up to five receptor residues interact with the three different ligands. Conversely, in the FFR models this number grows up to an astonishing 80 different residues. The comparison between the rigid crystal structure and the FFR models showed that the inclusion of explicit flexibility, despite the limitations of the FFR models employed in this study, accounts in a substantial manner for the induced fit expected when a protein/receptor and ligand approach each other to interact in the most favourable manner. Protein/receptor explicit flexibility, or FFR models, represented as an ensemble of MD simulation snapshots, can lead to a more realistic representation of the induced fit effect expected in the encounter and proper docking of receptors to ligands. The FFR models of InhA explicitly characterize the overall movements of the amino acid residues in helices, strands, loops, and turns, allowing the ligand to properly accommodate itself in the receptor's binding site. Utilization of the intrinsic flexibility of Mtb's InhA enzyme and its mutants in virtual screening via molecular docking simulation may provide a novel platform to guide the rational or dynamical-structure-based drug design of novel inhibitors for Mtb's InhA. We have produced a short video sequence of each ligand (ETH, TCL and PIF) docked to the FFR models of InhA_wt. These videos are available at http://www.inf.pucrs.br/~osmarns/LABIO/Videos_Cohen_et_al_19_07_2011.htm.
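
    The workflow behind an FFR model reduces to docking the same ligand against every MD snapshot and aggregating the scores. A schematic of that loop is sketched below; `dock_fn` stands in for whatever docking engine is used and is not an API from the paper.

    ```python
    from pathlib import Path

    def ensemble_dock(snapshot_dir, ligand, dock_fn):
        """Fully flexible receptor (FFR) docking sketch: dock one ligand against
        every MD snapshot of the receptor and collect the scores.

        dock_fn(receptor_pdb, ligand) -> float is a placeholder for the docking
        engine; lower scores are assumed to mean better predicted affinity.
        """
        results = {}
        for pdb in sorted(Path(snapshot_dir).glob("*.pdb")):
            results[pdb.name] = dock_fn(pdb, ligand)
        best = min(results, key=results.get)
        return best, results
    ```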

  9. Zipf exponent of trajectory distribution in the hidden Markov model

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

    This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
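
    A small brute-force experiment illustrates the quantity being studied: enumerate all observation trajectories of a toy hidden Markov model, rank them by probability, and read off the slope of the log-log rank-frequency plot. This is only an illustration of the object of study, not the paper's analytical derivation, and the matrices below are arbitrary.

    ```python
    import itertools
    import numpy as np

    def trajectory_probs(A, B, pi, L):
        """Probabilities of all length-L observation sequences of a small HMM with
        transition matrix A, emission matrix B and initial distribution pi
        (brute-force forward algorithm, for illustration only)."""
        probs = []
        for obs in itertools.product(range(B.shape[1]), repeat=L):
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            probs.append(alpha.sum())
        return np.sort(probs)[::-1]

    def zipf_exponent(probs):
        """Estimate the Zipf exponent as minus the slope of log(prob) vs log(rank)."""
        ranks = np.arange(1, len(probs) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(probs), 1)
        return -slope

    A  = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transitions
    B  = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probabilities
    pi = np.array([0.5, 0.5])
    print(zipf_exponent(trajectory_probs(A, B, pi, L=10)))
    ```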

  10. Neuman systems model-based research: an integrative review project.

    PubMed

    Fawcett, J; Giangrande, S K

    2001-07-01

    The project integrated Neuman systems model-based research literature. Two hundred published studies were located. This article is limited to the 59 full journal articles and 3 book chapters identified. A total of 37% focused on prevention interventions; 21% on perception of stressors; and 10% on stressor reactions. Only 50% of the reports explicitly linked the model with the study variables, and 61% did not include conclusions regarding model utility or credibility. No programs of research were identified. Academic courses and continuing education workshops are needed to help researchers design programs of Neuman systems model-based research and better explicate linkages between the model and the research.

  11. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, such as a bridge crane under seismic loading, multiple time scales coexist in the same problem; in that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. We also present a new explicit time integrator for contact/impact problems in which the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce the much lower frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; it generally has a higher order of convergence than Moreau-Jean's schemes and also provides excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated in comparison to a fine-scale full explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a full explicit simulation.
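
    To make the contact treatment concrete, here is a toy explicit integration of a single mass above a rigid floor with the unilateral constraint enforced through a Lagrange multiplier (contact impulse) at velocity level. It illustrates the ingredients named in the abstract, not the HATI coupling itself; all values are illustrative.

    ```python
    import numpy as np

    def explicit_contact_1d(m=1.0, g=9.81, x0=1.0, v0=0.0, dt=1e-3, steps=2000):
        """Point mass falling onto a rigid floor at x = 0, integrated explicitly.
        When the predicted position penetrates the floor, a Lagrange multiplier
        (impulse per time step) cancels the approaching velocity (inelastic impact)."""
        x, v = x0, v0
        traj = []
        for _ in range(steps):
            v_free = v - g * dt                 # free-flight velocity update
            x_pred = x + v_free * dt
            lam = 0.0
            if x_pred <= 0.0 and v_free < 0.0:  # active contact: impose v+ = 0
                lam = -m * v_free / dt          # multiplier = contact impulse / dt
                v_free = 0.0
            x = max(x + v_free * dt, 0.0)
            v = v_free
            traj.append((x, v, lam))
        return np.array(traj)

    # At rest on the floor, lam settles to m*g, i.e. the contact force balances gravity.
    ```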

  12. Dynamic finite element method modeling of the upper shelf energy of precracked Charpy specimens of neutron irradiated weld metal 72W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, A.S.; Sidener, S.E.; Hamilton, M.L.

    1999-10-01

    Dynamic finite element modeling of the fracture behavior of fatigue-precracked Charpy specimens in both unirradiated and irradiated conditions was performed using a computer code, ABAQUS Explicit, to predict the upper shelf energy of precracked specimens of a given size from experimental data obtained for a different size. A tensile fracture-strain based method for modeling crack extension and propagation was used. It was found that the predicted upper shelf energies of full and half size precracked specimens based on third size data were in reasonable agreement with their respective experimental values. Similar success was achieved for predicting the upper shelf energy of subsize precracked specimens based on full size data.

  13. Isolating Curvature Effects in Computing Wall-Bounded Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2001-01-01

    The flow over the zero-pressure-gradient So-Mellor convex curved wall is simulated using the Navier-Stokes equations. An inviscid effective outer wall shape, undocumented in the experiment, is obtained by using an adjoint optimization method with the desired pressure distribution on the inner wall as the cost function. Using this wall shape with a Navier-Stokes method, the abilities of various turbulence models to simulate the effects of curvature without the complicating factor of streamwise pressure gradient can be evaluated. The one-equation Spalart-Allmaras turbulence model overpredicts eddy viscosity, and its boundary layer profiles are too full. A curvature-corrected version of this model improves results, which are sensitive to the choice of a particular constant. An explicit algebraic stress model does a reasonable job predicting this flow field. However, results can be slightly improved by modifying the assumption on anisotropy equilibrium in the model's derivation. The resulting curvature-corrected explicit algebraic stress model possesses no heuristic functions or additional constants. It lowers slightly the computed skin friction coefficient and the turbulent stress levels for this case (in better agreement with experiment), but the effect on computed velocity profiles is very small.

  14. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offer full physics and high spatial resolution, but they are computationally expensive and typically lack an interactive ocean, whereas the latter offer high computational efficiency and ocean-atmosphere coupling, but they lack adequate spatial and temporal resolution to resolve the complex orography and to explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  17. Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian; Haller, George

    2018-06-01

    We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.

  18. Particle-hole symmetry in generalized seniority, microscopic interacting boson (fermion) model, nucleon-pair approximation, and other models

    NASA Astrophysics Data System (ADS)

    Jia, L. Y.

    2016-06-01

    The particle-hole symmetry (equivalence) of the full shell-model Hilbert space is straightforward and routinely used in practical calculations. In this work I show that this symmetry is preserved in the subspace truncated up to a certain generalized seniority and give the explicit transformation between the states in the two types (particle and hole) of representations. Based on the results, I study particle-hole symmetry in popular theories that could be regarded as further truncations on top of the generalized seniority, including the microscopic interacting boson (fermion) model, the nucleon-pair approximation, and other models.

  19. Explicitly represented polygon wall boundary model for the explicit MPS method

    NASA Astrophysics Data System (ADS)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.

  20. The effects of divided attention on implicit and explicit memory performance.

    PubMed

    Schmitter-Edgecombe, M

    1996-03-01

    This study explored the nature of the relationship between attention available at learning and subsequent implicit and explicit memory performance. One hundred neurologically normal subjects rated their liking of target words on a five-point scale. Half of the subjects completed the word-rating task in a full attention condition and the other half performed the task in a divided attention condition. Following administration of the word-rating task, all subjects completed five memory tests, three implicit (category association, tachistoscopic identification, and perceptual clarification) and two explicit (semantic-cued recall and graphemic-cued recall), each bearing on a different subset of the list of previously presented target words. The results revealed that subjects in the divided attention condition performed significantly more poorly than subjects in the full attention condition on the explicit memory measures. In contrast, there were no significant group differences in performance on the implicit memory measures. These findings suggest that the attention to an episode that is necessary to produce later explicit memory may differ from that necessary to produce unconscious influences. The relationship between implicit memory, neurologic injury and automatic processes is discussed.

  1. Development of a Localized Low-Dimensional Approach to Turbulence Simulation

    NASA Astrophysics Data System (ADS)

    Juttijudata, Vejapong; Rempfer, Dietmar; Lumley, John

    2000-11-01

    Our previous study has shown that the localized low-dimensional model derived from a projection of Navier-Stokes equations onto a set of one-dimensional scalar POD modes, with boundary conditions at y^+=40, can predict wall turbulence accurately for short times while failing to give a stable long-term solution. The structures obtained from the model and later studies suggest our boundary conditions from DNS are not consistent with the solution from the localized model resulting in an injection of energy at the top boundary. In the current study, we develop low-dimensional models using one-dimensional scalar POD modes derived from an explicitly filtered DNS. This model problem has exact no-slip boundary conditions at both walls while the locality of the wall layer is still retained. Furthermore, the interaction between wall and core region is attenuated via an explicit filter which allows us to investigate the quality of the model without requiring complicated modeling of the top boundary conditions. The full-channel model gives reasonable wall turbulence structures as well as long-term turbulent statistics while still having difficulty with the prediction of the mean velocity profile farther from the wall. We also consider a localized model with modified boundary conditions in the last part of our study.
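
    The POD step underlying these low-dimensional models is compact enough to sketch. The snapshot layout and mode count below are generic assumptions, not details of the cited study.

    ```python
    import numpy as np

    def pod_modes(snapshots, n_modes):
        """Proper orthogonal decomposition of a snapshot matrix (one flow snapshot
        per column) via the SVD; returns the leading spatial modes, the fraction
        of energy captured by each, and the mean field."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        energy = s**2 / np.sum(s**2)
        return U[:, :n_modes], energy[:n_modes], mean

    # Galerkin projection of the governing equations onto the retained modes then
    # yields a small ODE system for the amplitudes a(t), with u(x, t) = mean + U a(t).
    ```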

  2. Extinction debt from climate change for frogs in the wet tropics

    PubMed Central

    Brook, Barry W.; Hoskin, Conrad J.; Pressey, Robert L.; VanDerWal, Jeremy; Williams, Stephen E.

    2016-01-01

    The effect of twenty-first-century climate change on biodiversity is commonly forecast based on modelled shifts in species ranges, linked to habitat suitability. These projections have been coupled with species–area relationships (SAR) to infer extinction rates indirectly as a result of the loss of climatically suitable areas and associated habitat. This approach does not model population dynamics explicitly, and so accepts that extinctions might occur after substantial (but unknown) delays—an extinction debt. Here we explicitly couple bioclimatic envelope models of climate and habitat suitability with generic life-history models for 24 species of frogs found in the Australian Wet Tropics (AWT). We show that (i) as many as four species of frogs face imminent extinction by 2080, due primarily to climate change; (ii) three frogs face delayed extinctions; and (iii) this extinction debt will take at least a century to be realized in full. Furthermore, we find congruence between forecast rates of extinction using SARs, and demographic models with an extinction lag of 120 years. We conclude that SAR approaches can provide useful advice to conservation on climate change impacts, provided there is a good understanding of the time lags over which delayed extinctions are likely to occur. PMID:27729484
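
    The species-area relationship used to infer committed extinctions is a one-line formula; the sketch below states it with a conventional exponent, which is an assumed default rather than a value from the paper.

    ```python
    def sar_extinction_fraction(area_remaining, area_original, z=0.25):
        """Fraction of species eventually committed to extinction when climatically
        suitable area shrinks, under the species-area relationship S = c * A**z.
        The exponent z = 0.25 is a conventional default, not a value from the paper."""
        return 1.0 - (area_remaining / area_original) ** z

    # e.g. losing half the suitable area commits roughly 16% of species:
    print(sar_extinction_fraction(0.5, 1.0))   # ~0.159
    ```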

  3. A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage

    NASA Astrophysics Data System (ADS)

    Watson, F. E.; Doster, F.

    2017-12-01

    In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than full resolution models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.

  4. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code, LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  5. Comparative density functional study of the complexes [UO2(CO3)3]4- and [(UO2)3(CO3)6]6- in aqueous solution.

    PubMed

    Schlosser, Florian; Moskaleva, Lyudmila V; Kremleva, Alena; Krüger, Sven; Rösch, Notker

    2010-06-28

    With a relativistic all-electron density functional method, we studied two anionic uranium(VI) carbonate complexes that are important for uranium speciation and transport in aqueous medium, the mononuclear tris(carbonato) complex [UO2(CO3)3]4- and the trinuclear hexa(carbonato) complex [(UO2)3(CO3)6]6-. Focusing on the structures in solution, we applied for the first time a full solvation treatment to these complexes. We approximated short-range effects by explicit aqua ligands and described long-range electrostatic interactions via a polarizable continuum model. Structures and vibrational frequencies of "gas-phase" models with explicit aqua ligands agree best with experiment. This agreement is partly accidental because the continuum model of the solvent to some extent overestimates the electrostatic interactions of these highly anionic systems with the bulk solvent. The calculated free energy change for the association of three mononuclear complexes into the trinuclear complex agrees well with experiment and supports the formation of the latter species upon acidification of a uranyl carbonate solution.
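
    For reference, the association reaction whose free-energy change is discussed above can be balanced directly from the stoichiometry of the two complexes:

        3\,[\mathrm{UO_2(CO_3)_3}]^{4-} \;\rightleftharpoons\; [(\mathrm{UO_2})_3(\mathrm{CO_3})_6]^{6-} + 3\,\mathrm{CO_3^{2-}}

    Acidification removes carbonate from solution, which is consistent with the reported shift toward the trinuclear species.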

  6. Full numerical simulation of coflowing, axisymmetric jet diffusion flames

    NASA Technical Reports Server (NTRS)

    Mahalingam, S.; Cantwell, B. J.; Ferziger, J. H.

    1990-01-01

    The near field of a non-premixed flame in a low speed, coflowing axisymmetric jet is investigated numerically using full simulation. The time-dependent governing equations are solved by a second-order, explicit finite difference scheme and a single-step, finite rate model is used to represent the chemistry. Steady laminar flame results show the correct dependence of flame height on Peclet number and reaction zone thickness on Damkoehler number. Forced simulations reveal a large difference in the instantaneous structure of scalar dissipation fields between nonbuoyant and buoyant cases. In the former, the scalar dissipation marks intense reaction zones, supporting the flamelet concept; however, results suggest that flamelet modeling assumptions need to be reexamined. In the latter, this correspondence breaks down, suggesting that modifications to the flamelet modeling approach are needed in buoyant turbulent diffusion flames.

  7. Full versus divided attention and implicit memory performance.

    PubMed

    Wolters, G; Prinsen, A

    1997-11-01

    Effects of full and divided attention during study on explicit and implicit memory performance were investigated in two experiments. Study time was manipulated in a third experiment. Experiment 1 showed that both similar and dissociative effects can be found in the two kinds of memory test, depending on the difficulty of the concurrent tasks used in the divided-attention condition. In this experiment, however, standard implicit memory tests were used and contamination by explicit memory influences cannot be ruled out. Therefore, in Experiments 2 and 3 the process dissociation procedure was applied. Manipulations of attention during study and of study time clearly affected the controlled (explicit) memory component, but had no effect on the automatic (implicit) memory component. Theoretical implications of these findings are discussed.

  8. CDPOP: A spatially explicit cost distance population genetics program

    Treesearch

    Erin L. Landguth; S. A. Cushman

    2010-01-01

    Spatially explicit simulation of gene flow in complex landscapes is essential to explain observed population responses and provide a foundation for landscape genetics. To address this need, we wrote a spatially explicit, individual-based population genetics model (CDPOP). The model implements individual-based population modelling with Mendelian inheritance and k-allele...

  9. Effects of Explicit Instructions, Metacognition, and Motivation on Creative Performance

    ERIC Educational Resources Information Center

    Hong, Eunsook; O'Neil, Harold F.; Peng, Yun

    2016-01-01

    Effects of explicit instructions, metacognition, and intrinsic motivation on creative homework performance were examined in 303 Chinese 10th-grade students. Models that represent hypothesized relations among these constructs and trait covariates were tested using structural equation modelling. Explicit instructions geared to originality were…

  10. Making the Tacit Explicit: Rethinking Culturally Inclusive Pedagogy in International Student Academic Adaptation

    ERIC Educational Resources Information Center

    Blasco, Maribel

    2015-01-01

    The article proposes an approach, broadly inspired by culturally inclusive pedagogy, to facilitate international student academic adaptation based on rendering tacit aspects of local learning cultures explicit to international full degree students, rather than adapting them. Preliminary findings are presented from a focus group-based exploratory…

  11. Multiscale Simulations of Protein Landscapes: Using Coarse Grained Models as Reference Potentials to Full Explicit Models

    PubMed Central

    Messer, Benjamin M.; Roca, Maite; Chu, Zhen T.; Vicatos, Spyridon; Kilshtain, Alexandra Vardi; Warshel, Arieh

    2009-01-01

    Evaluating the free energy landscape of proteins and the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of simplified coarse grained (CG) folding models offers an effective way of sampling the landscape but such a treatment, however, may not give the correct description of the effect of the actual protein residues. A general way around this problem that has been put forward in our early work (Fan et al, Theor Chem Acc (1999) 103:77-80) uses the CG model as a reference potential for free energy calculations of different properties of the explicit model. This method is refined and extended here, focusing on improving the electrostatic treatment and on demonstrating key applications. This application includes: evaluation of changes of folding energy upon mutations, calculations of transition states binding free energies (which are crucial for rational enzyme design), evaluation of catalytic landscape and simulation of the time dependent responses to pH changes. Furthermore, the general potential of our approach in overcoming major challenges in studies of structure function correlation in proteins is discussed. PMID:20052756
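
    A compact way to state the reference-potential idea, in the standard free energy perturbation (Zwanzig) form rather than the authors' exact working equations: sampling is performed on the inexpensive CG surface, and the correction to the full explicit model is evaluated as

        \Delta G_{\mathrm{CG}\rightarrow\mathrm{full}} = -k_{B}T \,\ln\left\langle \exp\!\left[-\bigl(E_{\mathrm{full}}-E_{\mathrm{CG}}\bigr)/k_{B}T\right] \right\rangle_{\mathrm{CG}},

    so that free energies of the explicit model are obtained as the CG estimate plus end-point corrections, avoiding long explicit-model sampling.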

  12. Extinction debt from climate change for frogs in the wet tropics.

    PubMed

    Fordham, Damien A; Brook, Barry W; Hoskin, Conrad J; Pressey, Robert L; VanDerWal, Jeremy; Williams, Stephen E

    2016-10-01

    The effect of twenty-first-century climate change on biodiversity is commonly forecast based on modelled shifts in species ranges, linked to habitat suitability. These projections have been coupled with species-area relationships (SAR) to infer extinction rates indirectly as a result of the loss of climatically suitable areas and associated habitat. This approach does not model population dynamics explicitly, and so accepts that extinctions might occur after substantial (but unknown) delays-an extinction debt. Here we explicitly couple bioclimatic envelope models of climate and habitat suitability with generic life-history models for 24 species of frogs found in the Australian Wet Tropics (AWT). We show that (i) as many as four species of frogs face imminent extinction by 2080, due primarily to climate change; (ii) three frogs face delayed extinctions; and (iii) this extinction debt will take at least a century to be realized in full. Furthermore, we find congruence between forecast rates of extinction using SARs, and demographic models with an extinction lag of 120 years. We conclude that SAR approaches can provide useful advice to conservation on climate change impacts, provided there is a good understanding of the time lags over which delayed extinctions are likely to occur. © 2016 The Author(s).

  13. Improvement, Verification, and Refinement of Spatially-Explicit Exposure Models in Risk Assessment - FishRand Spatially-Explicit Bioaccumulation Model Demonstration

    DTIC Science & Technology

    2015-08-01

    Figure 4 of the report presents the data-based proportion of DDD (dichlorodiphenyldichloroethane), DDE (dichlorodiphenyldichloroethylene), and DDT (dichlorodiphenyltrichloroethane) in total DDx in fish and sediment. The spatially explicit model consistently predicts tissue concentrations that closely match the average.

  14. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  15. Program SPACECAP: software for estimating animal density using spatially explicit capture-recapture models

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas

    2012-01-01

    1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
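
    As an illustration of the kind of model SPACECAP fits (the package's exact likelihood is not reproduced here), spatially explicit capture-recapture commonly links detections at trap i to an animal's latent activity centre s_j through a distance-decaying detection function such as the half-normal,

        p_{ij} = p_{0}\,\exp\!\left(-\frac{d(\mathbf{x}_i,\mathbf{s}_j)^{2}}{2\sigma^{2}}\right),

    with p_0 the baseline detection probability and sigma a spatial scale parameter; density then follows from the estimated number and spatial distribution of activity centres.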

  16. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
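
    The thermodynamic cycle described above can be summarised compactly (notation ours, not the authors'): the explicit-solvent free energy difference between basins A and B is assembled from an implicit-solvent path plus localized implicit/explicit decoupling legs at the two end points,

        \Delta G^{\mathrm{exp}}_{A\rightarrow B} \;=\; \Delta G^{\mathrm{exp}\rightarrow\mathrm{imp}}_{A} \;+\; \Delta G^{\mathrm{imp}}_{A\rightarrow B} \;+\; \Delta G^{\mathrm{imp}\rightarrow\mathrm{exp}}_{B},

    so the slow barrier crossing is only ever performed on the implicit-solvent surface.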

  17. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    PubMed

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.

  18. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.

  19. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

    A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
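
    The abstract names only "an explicit, two-step finite-difference method"; a representative scheme of that family (a MacCormack-type predictor-corrector for a 1-D conservation law u_t + f(u)_x = 0, shown purely as an illustration and not necessarily the exact scheme used in NASCRIN) is

        u_i^{*} = u_i^{n} - \frac{\Delta t}{\Delta x}\left(f_{i+1}^{n} - f_i^{n}\right),
        u_i^{n+1} = \tfrac{1}{2}\left[u_i^{n} + u_i^{*} - \frac{\Delta t}{\Delta x}\left(f_i^{*} - f_{i-1}^{*}\right)\right].

    Each stage touches only nearest neighbours, which is what allows the update to vectorize well on a machine such as the CYBER 203.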

  20. Effects of reducing attentional resources on implicit and explicit memory after severe traumatic brain injury.

    PubMed

    Watt, S; Shores, E A; Kinoshita, S

    1999-07-01

    Implicit and explicit memory were examined in individuals with severe traumatic brain injury (TBI) under conditions of full and divided attention. Participants included 12 individuals with severe TBI and 12 matched controls. In Experiment 1, participants carried out an implicit test of word-stem completion and an explicit test of cued recall. Results demonstrated that TBI participants exhibited impaired explicit memory but preserved implicit memory. In Experiment 2, a significant reduction in the explicit memory performance of both TBI and control participants, as well as a significant decrease in the implicit memory performance of TBI participants, was achieved by reducing attentional resources at encoding. These results indicated that performance on an implicit task of word-stem completion may require the availability of additional attentional resources that are not preserved after severe TBI.

  1. Properties of interaction networks underlying the minority game.

    PubMed

    Caridi, Inés

    2014-11-01

    The minority game is a well-known agent-based model with no explicit interaction among its agents. However, it is known that they interact through the global magnitudes of the model and through their strategies. In this work we have attempted to formalize the implicit interactions among minority game agents as if they were links on a complex network. We have defined the link between two agents by quantifying the similarity between them. This link definition is based on the information of the instance of the game (the set of strategies assigned to each agent at the beginning) without any dynamic information on the game and brings about a static, unweighted and undirected network. We have analyzed the structure of the resulting network for different parameters, such as the number of agents (N) and the agent's capacity to process information (m), always taking into account games with two strategies per agent. In the region of crowd effects of the model, the resulting network is a small-world network, whereas in the region where the behavior of the minority game is the same as in a game of random decisions, the network becomes an Erdos-Renyi random network. The transition between these two types of networks is slow, without any peculiar feature of the network in the region of coordination among agents. Finally, we have studied the resulting static networks for the full strategy minority game model, a maximal instance of the minority game in which all possible agents take part in the game. We have explicitly calculated the degree distribution of the full strategy minority game network and, on the basis of this analytical result, we have estimated the degree distribution of the minority game network, which is in accordance with computational results.
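
    The paper's precise similarity measure is not given above, so the following Python sketch is only a hypothetical illustration of the construction: each agent holds s random strategies (maps from the 2^m possible histories to a binary choice), and two agents are linked when their most similar pair of strategies agrees on at least a fraction theta of histories.

        import itertools
        import numpy as np

        def minority_game_network(N=65, m=3, s=2, theta=0.75, seed=0):
            """Static, unweighted, undirected network built from the game instance alone."""
            rng = np.random.default_rng(seed)
            H = 2 ** m                                # number of possible histories
            strategies = rng.integers(0, 2, size=(N, s, H))
            A = np.zeros((N, N), dtype=int)           # adjacency matrix
            for i, j in itertools.combinations(range(N), 2):
                # similarity: best agreement between any pair of the two agents' strategies
                agree = max(np.mean(strategies[i, a] == strategies[j, b])
                            for a in range(s) for b in range(s))
                if agree >= theta:
                    A[i, j] = A[j, i] = 1
            return A

    Standard graph metrics (clustering coefficient, path length, degree distribution) computed on A are then what distinguish the small-world regime from the Erdos-Renyi-like regime.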

  2. Finite Element Simulation of Three Full-Scale Crash Tests for Cessna 172 Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Warren, Jerry E., Jr.

    2017-01-01

    The NASA Emergency Locator Transmitter Survivability and Reliability (ELT-SAR) project was initiated in 2013 to assess the crash performance standards for the next generation of emergency locator transmitter (ELT) systems. Three Cessna 172 aircraft were acquired to perform crash testing at NASA Langley Research Center's Landing and Impact Research Facility. Full-scale crash tests were conducted in the summer of 2015 and each test article was subjected to severe, but survivable, impact conditions including a flare-to-stall during emergency landing, and two controlled-flight-into-terrain scenarios. Full-scale finite element analyses were performed using a commercial explicit solver, ABAQUS. The first test simulated impacting a concrete surface represented analytically by a rigid plane. Tests 2 and 3 simulated impacting a dirt surface represented analytically by an Eulerian grid of brick elements using a Mohr-Coulomb material model. The objective of this paper is to summarize the test and analysis results for the three full-scale crash tests. Simulation models of the airframe which correlate well with the tests are needed for future studies of alternate ELT mounting configurations.

  3. Structure of marginally jammed polydisperse packings of frictionless spheres

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; O'Donovan, Cathal B.; Corwin, Eric I.; Cardinaux, Frédéric; Mason, Thomas G.; Möbius, Matthias E.; Scheffold, Frank

    2015-03-01

    We model the packing structure of a marginally jammed bulk ensemble of polydisperse spheres. To this end we expand on the granocentric model [Clusel et al., Nature (London) 460, 611 (2009), 10.1038/nature08158], explicitly taking into account rattlers. This leads to a relationship between the characteristic parameters of the packing, such as the mean number of neighbors and the fraction of rattlers, and the radial distribution function g(r). We find excellent agreement between the model predictions for g(r) and packing simulations, as well as experiments on jammed emulsion droplets. The observed quantitative agreement opens the path towards a full structural characterization of jammed particle systems for imaging and scattering experiments.

  4. Integrated Structural/Acoustic Modeling of Heterogeneous Panels

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett, A.; Aboudi, Jacob; Arnold, Steven, M.; Pennline, James, A.

    2012-01-01

    A model for the dynamic response of heterogeneous media is presented. A given medium is discretized into a number of subvolumes, each of which may contain an elastic anisotropic material, void, or fluid, and time-dependent boundary conditions are applied to simulate impact or incident pressure waves. The full time-dependent displacement and stress response throughout the medium is then determined via an explicit solution procedure. The model is applied to simulate the coupled structural/acoustic response of foam core sandwich panels as well as aluminum panels with foam inserts. Emphasis is placed on the acoustic absorption performance of the panels versus weight and the effects of the arrangement of the materials and incident wave frequency.
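
    A minimal sketch of the explicit solution procedure mentioned above, assuming a standard central-difference update of the semi-discrete equations M u'' + K u = f (the paper's actual subvolume discretization is not reproduced here):

        \mathbf{u}^{n+1} = 2\,\mathbf{u}^{n} - \mathbf{u}^{n-1} + \Delta t^{2}\,\mathbf{M}^{-1}\!\left(\mathbf{f}^{n} - \mathbf{K}\,\mathbf{u}^{n}\right),

    which is conditionally stable, so the time step is limited by the smallest subvolume and the stiffest material in the panel.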

  5. Generation of linear dynamic models from a digital nonlinear simulation

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.

    1979-01-01

    The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation when making these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
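
    The averaged positive and negative perturbations amount to a central-difference estimate of each Jacobian entry; in generic notation (ours, not the report's), for state or input component j perturbed by plus and minus Delta about the operating point x_0,

        A_{ij} \approx \frac{f_i(\mathbf{x}_0 + \Delta\,\mathbf{e}_j) - f_i(\mathbf{x}_0 - \Delta\,\mathbf{e}_j)}{2\Delta},

    whose leading truncation error is O(Delta^2), compared with O(Delta) for a one-sided difference.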

  6. Implicit and explicit self-esteem and their reciprocal relationship with symptoms of depression and social anxiety: a longitudinal study in adolescents.

    PubMed

    van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H

    2014-03-01

    A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiate between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first and second year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. The implications of these findings suggest support of an explicit self-esteem intervention to prevent increases in MDD and SAD symptomatology in non-clinical adolescents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  8. Explicit filtering in large eddy simulation using a discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Brazell, Matthew J.

    The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, in which the solution is approximated using a polynomial basis whose higher modes correspond to higher-order polynomials. By removing high-order modes, the filtered solution contains only low-frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution and removes energy that accumulates in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant-coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant-coefficient Smagorinsky model is overly dissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected. The dynamic Heinz model, which is based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
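
    A minimal Python sketch of the modal low-pass filter described above (the array layout and the sharp cutoff are assumptions of this sketch; smoother exponential damping of the retained modes is a common alternative):

        import numpy as np

        def filter_dg_modes(u_hat, cutoff):
            """Explicitly filter a DG solution by zeroing high-order modal coefficients.

            u_hat[e, k] is the k-th polynomial-basis coefficient in element e;
            modes above 'cutoff' are removed, leaving only low-frequency content."""
            filtered = u_hat.copy()
            filtered[:, cutoff + 1:] = 0.0
            return filtered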

  9. Embedded-explicit emergent literacy intervention I: Background and description of approach.

    PubMed

    Justice, Laura M; Kaderavek, Joan N

    2004-07-01

    This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.

  10. Developing Spatially Explicit Habitat Models for Grassland Bird Conservation Planning in the Prairie Pothole Region of North Dakota

    Treesearch

    Neal D. Niemuth; Michael E. Estey; Charles R. Loesch

    2005-01-01

    Conservation planning for birds is increasingly focused on landscapes. However, little spatially explicit information is available to guide landscape-level conservation planning for many species of birds. We used georeferenced 1995 Breeding Bird Survey (BBS) data in conjunction with land-cover information to develop a spatially explicit habitat model predicting the...

  11. Explicit robust schemes for implementation of general principal value-based constitutive models

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.

  12. Uncertainty in spatially explicit animal dispersal models

    USGS Publications Warehouse

    Mooij, Wolf M.; DeAngelis, Donald L.

    2003-01-01

    Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
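
    To make the model hierarchy concrete (generic notation, not the authors'): the event-based model treats each disperser as a Bernoulli trial with arrival probability p, giving a binomial likelihood for k arrivals out of n dispersers, while the temporally explicit model with arrival rate a and mortality rate m gives, by competing exponential risks,

        P(\text{arrival by time } t) = \frac{a}{a+m}\left(1 - e^{-(a+m)t}\right),

    and both likelihoods can be maximized over their parameters to obtain the estimates and confidence limits discussed above.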

  13. Relationship between the spectral line based weighted-sum-of-gray-gases model and the full spectrum k-distribution model

    NASA Astrophysics Data System (ADS)

    Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis

    2014-08-01

    The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as a fewer number of gray gases is required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiency of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’
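
    Both models ultimately solve a set of gray-gas transfer equations of the generic weighted-sum-of-gray-gases form (shown here only to fix ideas, not as either model's exact working equations): for gray gas i with absorption coefficient kappa_i and weight a_i,

        \frac{dI_i}{ds} = \kappa_i\left(a_i I_b - I_i\right), \qquad I = \sum_i I_i,

    where the two models differ in how the (kappa_i, a_i) pairs are obtained: from a discretization of the absorption cross section (SLW) or from a quadrature over the cumulative k-distribution (FSK).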

  14. Effective Reading and Writing Instruction: A Focus on Modeling

    ERIC Educational Resources Information Center

    Regan, Kelley; Berkeley, Sheri

    2012-01-01

    When providing effective reading and writing instruction, teachers need to provide explicit modeling. Modeling is particularly important when teaching students to use cognitive learning strategies. Examples of how teachers can provide specific, explicit, and flexible instructional modeling is presented in the context of two evidence-based…

  15. The role of attention during encoding in implicit and explicit memory.

    PubMed

    Mulligan, N W

    1998-01-01

    In 5 experiments, participants read study words under conditions of divided or full attention. Dividing attention reduced performance on the general knowledge test, a conceptual implicit test of memory. Likewise, dividing attention reduced conceptual priming on the word--association task, as well as on a matched explicit test, associate-cued recall. In contrast, even very strong division of attention did not reduce perceptual priming on word-fragment completion, although it did reduce recall on the matched explicit test of word-fragment-cued recall. Finally, dividing attention reduced recall on the perceptual explicit tests of graphemic-cued recall and graphemic recognition. The results indicate that perceptual implicit tests rely minimally on attention-demanding encoding processes relative to other types of memory tests. The obtained pattern of dissociations is not readily accommodated by the transfer-appropriate-processing (TAP) account of implicit and explicit memory. Potential extensions of the TAP view are discussed.

  16. "Epidemiological criminology": coming full circle.

    PubMed

    Akers, Timothy A; Lanier, Mark M

    2009-03-01

    Members of the public health and criminal justice disciplines often work with marginalized populations: people at high risk of drug use, health problems, incarceration, and other difficulties. As these fields increasingly overlap, distinctions between them are blurred, as numerous research reports and funding trends document. However, explicit theoretical and methodological linkages between the 2 disciplines remain rare. A new paradigm that links methods and statistical models of public health with those of their criminal justice counterparts is needed, as are increased linkages between epidemiological analogies, theories, and models and the corresponding tools of criminology. We outline disciplinary commonalities and distinctions, present policy examples that integrate similarities, and propose "epidemiological criminology" as a bridging framework.

  17. Explicit continuous charge-based compact model for long channel heavily doped surrounding-gate MOSFETs incorporating interface traps and quantum effects

    NASA Astrophysics Data System (ADS)

    Hamzah, Afiq; Hamid, Fatimah A.; Ismail, Razali

    2016-12-01

    An explicit solution for long-channel surrounding-gate (SRG) MOSFETs is presented from intrinsic to heavily doped body including the effects of interface traps and fixed oxide charges. The solution is based on the core SRGMOSFETs model of the Unified Charge Control Model (UCCM) for heavily doped conditions. The UCCM model of highly doped SRGMOSFETs is derived to obtain the exact equivalent expression as in the undoped case. Taking advantage of the undoped explicit charge-based expression, the asymptotic limits for below threshold and above threshold have been redefined to include the effect of trap states for heavily doped cases. After solving the asymptotic limits, an explicit mobile charge expression is obtained which includes the trap state effects. The explicit mobile charge model shows very good agreement with respect to numerical simulation over practical terminal voltages, doping concentration, geometry effects, and trap state effects due to the fixed oxide charges and interface traps. Then, the drain current is obtained using the Pao-Sah's dual integral, which is expressed as a function of inversion charge densities at the source/drain ends. The drain current agreed well with the implicit solution and numerical simulation for all regions of operation without employing any empirical parameters. A comparison with previous explicit models has been conducted to verify the competency of the proposed model with the doping concentration of 1× {10}19 {{cm}}-3, as the proposed model has better advantages in terms of its simplicity and accuracy at a higher doping concentration.

  18. From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction

    NASA Astrophysics Data System (ADS)

    de Tilière, Béatrice

    2013-04-01

    Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version {{G}} of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition only depends on the respective fundamental domains, and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain {{G}_1}. Our main result consists in explicitly constructing CRSFs of {{G}_1} counted by the dimer characteristic polynomial, from CRSFs of G 1, where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002); thus proving a relation on the level of configurations between two well known 2-dimensional critical models.

  19. An explicit microphysics thunderstorm model.

    Treesearch

    R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor

    2005-01-01

    The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...

  20. Graph-based analysis of connectivity in spatially-explicit population models: HexSim and the Connectivity Analysis Toolkit

    EPA Science Inventory

    Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...

  1. Quantum spin dynamics with pairwise-tunable, long-range interactions

    PubMed Central

    Hung, C.-L.; González-Tudela, Alejandro; Cirac, J. Ignacio; Kimble, H. J.

    2016-01-01

    We present a platform for the simulation of quantum magnetism with full control of interactions between pairs of spins at arbitrary distances in 1D and 2D lattices. In our scheme, two internal atomic states represent a pseudospin for atoms trapped within a photonic crystal waveguide (PCW). With the atomic transition frequency aligned inside a band gap of the PCW, virtual photons mediate coherent spin–spin interactions between lattice sites. To obtain full control of interaction coefficients at arbitrary atom–atom separations, ground-state energy shifts are introduced as a function of distance across the PCW. In conjunction with auxiliary pump fields, spin-exchange versus atom–atom separation can be engineered with arbitrary magnitude and phase, and arranged to introduce nontrivial Berry phases in the spin lattice, thus opening new avenues for realizing topological spin models. We illustrate the broad applicability of our scheme by explicit construction for several well-known spin models. PMID:27496329

  2. CONSTRUCTING, PERTURBATION ANALYSIS AND TESTING OF A MULTI-HABITAT PERIODIC MATRIX POPULATION MODEL

    EPA Science Inventory

    We present a matrix model that explicitly incorporates spatial habitat structure and seasonality and discuss preliminary results from a landscape level experimental test. Ecological risk to populations is often modeled without explicit treatment of spatially or temporally distri...

  3. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
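
    For context, the implicit GA equation that all nine approximations try to avoid iterating is F = Kt + psi*dtheta*ln(1 + F/(psi*dtheta)); a hedged Python sketch of the Newton reference solution used for such comparisons (variable names are ours) is:

        import math

        def green_ampt_F(t, K, psi_dtheta, tol=1e-10, max_iter=50):
            """Cumulative infiltration F(t) from the implicit Green-Ampt equation,
            F = K*t + psi_dtheta*ln(1 + F/psi_dtheta), solved by Newton iteration.
            Explicit approximations replace this loop with a closed-form estimate."""
            F = max(K * t, 1e-9)                      # crude initial guess
            for _ in range(max_iter):
                g = F - K * t - psi_dtheta * math.log(1.0 + F / psi_dtheta)
                dg = 1.0 - psi_dtheta / (psi_dtheta + F)
                F_new = F - g / dg
                if abs(F_new - F) < tol:
                    return F_new
                F = F_new
            return F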

  4. Direct versus Indirect Explicit Methods of Enhancing EFL Students' English Grammatical Competence: A Concept Checking-Based Consciousness-Raising Tasks Model

    ERIC Educational Resources Information Center

    Dang, Trang Thi Doan; Nguyen, Huong Thu

    2013-01-01

    Two approaches to grammar instruction are often discussed in the ESL literature: direct explicit grammar instruction (DEGI) (deduction) and indirect explicit grammar instruction (IEGI) (induction). This study aims to explore the effects of indirect explicit grammar instruction on EFL learners' mastery of English tenses. Ninety-four…

  5. DEFINING RECOVERY GOALS AND STRATEGIES FOR ENDANGERED SPECIES USING SPATIALLY-EXPLICIT POPULATION MODELS

    EPA Science Inventory

    We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...

  6. Analysis of Highly-Resolved Simulations of 2-D Humps Toward Improvement of Second-Moment Closures

    NASA Technical Reports Server (NTRS)

    Jeyapaul, Elbert; Rumsey, Christopher

    2013-01-01

    Fully resolved simulation data of flow separation over 2-D humps has been used to analyze the modeling terms in second-moment closures of the Reynolds-averaged Navier-Stokes equations. Existing models for the pressure-strain and dissipation terms have been analyzed using a priori calculations. All pressure-strain models are incorrect in the high-strain region near separation, although a better match is observed downstream, well into the separated-flow region. Near-wall inhomogeneity causes pressure-strain models to predict incorrect signs for the normal components close to the wall. In a posteriori computations, full Reynolds stress and explicit algebraic Reynolds stress models predict the separation point with varying degrees of success. However, as with one- and two-equation models, the separation bubble size is invariably over-predicted.

  7. Dissipation models for central difference schemes

    NASA Astrophysics Data System (ADS)

    Eliasson, Peter

    1992-12-01

    In this paper different flux limiters are used to construct dissipation models. The flux limiters are usually of Total Variation Diminishing (TVD) type and are applied to the characteristic variables for the hyperbolic Euler equations in one, two or three dimensions. A number of simplified dissipation models with a reduced number of limiters are considered to reduce the computational effort. The most simplified methods use only one limiter; the dissipation model by Jameson belongs to this class, since the Jameson pressure switch can be regarded as a limiter, although not a TVD one. Other one-limiter models with TVD limiters are also investigated. Models in between the most simplified one-limiter models and the full model with limiters on all the different characteristics are considered, in which different dissipation models are applied to the linear and nonlinear characteristics. In this paper the theory by Yee is extended to a general explicit Runge-Kutta type of scheme.
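
    A hedged Python sketch of one of the simplest TVD limiters mentioned above (the minmod limiter), applied componentwise to characteristic variables; this illustrates the idea and is not the paper's specific dissipation model:

        import numpy as np

        def minmod(a, b):
            """Return the argument of smaller magnitude when signs agree, else zero."""
            return np.where(a * b > 0.0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                            0.0)

        def limited_slopes(w):
            """Limited slopes of characteristic variables w[k, i] (field k, cell i)."""
            dw_minus = w[:, 1:-1] - w[:, :-2]         # backward differences
            dw_plus = w[:, 2:] - w[:, 1:-1]           # forward differences
            return minmod(dw_minus, dw_plus)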

  8. Full-Scale Crash Tests and Analyses of Three High-Wing Single

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Littell, Justin D.; Stimson, Chad M.; Jackson, Karen E.; Mason, Brian H.

    2015-01-01

    The NASA Emergency Locator Transmitter Survivability and Reliability (ELTSAR) project was initiated in 2014 to assess the crash performance standards for the next generation of ELT systems. Three Cessna 172 aircraft have been acquired to conduct crash testing at NASA Langley Research Center's Landing and Impact Research Facility. Testing is scheduled for the summer of 2015 and will simulate three crash conditions: a flare-to-stall during emergency landing and two controlled-flight-into-terrain scenarios. Instrumentation and video coverage, both onboard and external, will also provide valuable data on airframe response. Full-scale finite element analyses will be performed using two separate commercial explicit solvers. Calibration and validation of the models will be based on the airframe response under these varying crash conditions.

  9. Application of the Yoshida-Ruth Techniques to Implicit Integration and Multi-Map Explicit Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forest, E.; Bengtsson, J.; Reusch, M.F.

    1991-04-01

    The full power of Yoshida's technique is exploited to produce an arbitrary order implicit symplectic integrator and multi-map explicit integrator. This implicit integrator uses a characteristic function involving the force term alone. Also, we point out the usefulness of the plain Ruth algorithm in computing Taylor series maps using the techniques first introduced by Berz in his 'COSY-INFINITY' code.
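
    For reference, the Yoshida construction referred to above builds a fourth-order symplectic integrator by composing any time-symmetric second-order map S_2 with the standard triple-jump coefficients,

        S_4(\Delta t) = S_2(w_1\,\Delta t)\; S_2(w_0\,\Delta t)\; S_2(w_1\,\Delta t), \qquad w_1 = \frac{1}{2 - 2^{1/3}}, \quad w_0 = -\frac{2^{1/3}}{2 - 2^{1/3}},

    and the same composition can be applied recursively to reach arbitrary even order, for implicit as well as multi-map explicit base integrators.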

  10. On Spatially Explicit Models of Cholera Epidemics: Hydrologic controls, environmental drivers, human-mediated transmissions (Invited)

    NASA Astrophysics Data System (ADS)

    Rinaldo, A.; Bertuzzo, E.; Mari, L.; Righetto, L.; Gatto, M.; Casagrandi, R.; Rodriguez-Iturbe, I.

    2010-12-01

    A recently proposed model for cholera epidemics is examined. The model accounts for local communities of susceptibles and infectives in a spatially explicit arrangement of nodes linked by networks having different topologies. The vehicle of infection (Vibrio cholerae) is transported through the network links which are thought of as hydrological connections among susceptible communities. The mathematical tools used are borrowed from general schemes of reactive transport on river networks acting as the environmental matrix for the circulation and mixing of water-borne pathogens. The results of a large-scale application to the Kwa Zulu (Natal) epidemics of 2001-2002 will be discussed. Useful theoretical results derived in the spatially-explicit context will also be reviewed (e.g., the exact derivation of the speed of propagation for traveling fronts of epidemics on regular lattices endowed with uniform population density). Network effects will be discussed. The analysis of the limit case of uniformly distributed population density proves instrumental in establishing the overall conditions for the relevance of spatially explicit models. To that extent, it is shown that the ratio between spreading and disease outbreak timescales proves the crucial parameter. The relevance of our results lies in the major differences potentially arising between the predictions of spatially explicit models and traditional compartmental models of the SIR-like type. Our results suggest that in many cases of real-life epidemiological interest timescales of disease dynamics may trigger outbreaks that significantly depart from the predictions of compartmental models. Finally, a view on further developments includes: hydrologically improved aquatic reservoir models for pathogens; human mobility patterns affecting disease propagation; double-peak emergence and seasonality in the spatially explicit epidemic context.
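
    A highly simplified Python sketch of the class of model described above (an SIB-type system on a network of communities; the parameter names, the saturating force of infection, and the transport term are illustrative assumptions, not the authors' equations):

        import numpy as np

        def step_sib(S, I, B, P, dt, beta=1.0, K=1.0, gamma=0.2, theta=1.0, mu_B=0.25, l=0.1):
            """One explicit Euler step for per-node susceptibles S, infectives I and
            bacterial concentration B; P is a row-stochastic matrix of hydrological links."""
            lam = beta * B / (K + B)                        # saturating force of infection
            dS = -lam * S
            dI = lam * S - gamma * I
            dB = theta * I - mu_B * B + l * (P.T @ B - B)   # shedding, decay, transport
            return S + dt * dS, I + dt * dI, B + dt * dB

    Iterating this map over the network nodes is what allows outbreak timing to be compared with the well-mixed compartmental limit.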

  11. Spatially explicit watershed modeling: tracking water, mercury and nitrogen in multiple systems under diverse conditions

    EPA Science Inventory

    Environmental decision-making and the influences of various stressors, such as landscape and climate changes on water quantity and quality, requires the application of environmental modeling. Spatially explicit environmental and watershed-scale models using GIS as a base framewor...

  12. HexSim - A general purpose framework for spatially-explicit, individual-based modeling

    EPA Science Inventory

    HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications. This talk will focus on a subset of those ap...

  13. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

    Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  14. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

    Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  15. Object links in the repository

    NASA Technical Reports Server (NTRS)

    Beck, Jon; Eichmann, David

    1991-01-01

    Some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life-cycle of software development are explored. In particular, we wish to consider a model which provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The model we consider uses object-oriented terminology. Thus, the lattice is viewed as a data structure which contains class objects which exhibit inheritance. A description of the types of objects in the repository is presented, followed by a discussion of how they interrelate. We discuss features of the object-oriented model which support these objects and their links, and consider behavior which an implementation of the model should exhibit. Finally, we indicate some thoughts on implementing a prototype of this repository architecture.

  16. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
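
    The tensor-basis structure behind explicit algebraic stress models can be sketched as follows. The basis tensors are standard, but the coefficients G1-G3 below are placeholder values, not the regularized coefficients derived in the paper.

```python
# Minimal sketch of the tensor-basis form behind explicit algebraic stress models:
# the Reynolds-stress anisotropy is expressed explicitly in the mean strain and
# rotation tensors, b = G1*T1 + G2*T2 + G3*T3.  The coefficients G1..G3 are
# illustrative placeholders only.
import numpy as np

def anisotropy(grad_u, k, eps, G=(-0.1, 0.05, 0.02)):
    """Return an anisotropy tensor b_ij for a given mean velocity gradient."""
    tau = k / eps                                  # turbulence time scale
    S = 0.5 * tau * (grad_u + grad_u.T)            # normalized strain-rate tensor
    W = 0.5 * tau * (grad_u - grad_u.T)            # normalized rotation-rate tensor
    I = np.eye(3)
    T1 = S
    T2 = S @ W - W @ S
    T3 = S @ S - (np.trace(S @ S) / 3.0) * I
    G1, G2, G3 = G
    return G1 * T1 + G2 * T2 + G3 * T3

# Homogeneous shear du/dy: the explicit model returns a full anisotropy tensor,
# whereas a linear eddy-viscosity model would only produce the shear component.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(anisotropy(grad_u, k=1.0, eps=1.0))
```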

  17. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
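
    A hedged sketch of the equation-error idea for a first-order model follows: integrating the model over data windows removes the need to differentiate the measurements, and the parameters then follow from ordinary least squares. The windowing scheme and noise level are illustrative, not the paper's formulation.

```python
# Hedged sketch of equation-error least squares for the model dy/dt + a*y = b*u,
# estimated by integrating the equation over data windows so that no derivatives
# of the measurements are needed.  This illustrates the idea only; it is not the
# exact machinery of the paper.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
a_true, b_true = 1.5, 2.0
u = np.sin(t)

# Simulate "measured" y by simple forward integration, plus a little noise.
y = np.zeros_like(t)
for i in range(len(t) - 1):
    y[i + 1] = y[i] + dt * (-a_true * y[i] + b_true * u[i])
y_meas = y + 0.001 * rng.standard_normal(y.shape)

# Integrate the model over windows [t_i, t_j]:  y(t_j) - y(t_i) = -a*I_y + b*I_u
rows, rhs = [], []
width = 50
for i in range(0, len(t) - width, width):
    j = i + width
    Iy = np.trapz(y_meas[i:j + 1], t[i:j + 1])
    Iu = np.trapz(u[i:j + 1], t[i:j + 1])
    rows.append([-Iy, Iu])
    rhs.append(y_meas[j] - y_meas[i])

(a_hat, b_hat), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(a_hat, b_hat)   # should be close to (1.5, 2.0)
```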

  18. Evaluation of Multiclass Model Observers in PET LROC Studies

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.

    2007-02-01

    A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
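
    The channelized NPW test statistic can be sketched as the dot product of the channelized signal template with the channelized image. The difference-of-Gaussian channel widths and the Gaussian signal below are assumptions for illustration, not the published channel models.

```python
# Illustrative sketch of a channelized nonprewhitening (CNPW) test statistic with
# difference-of-Gaussian (DOG) channels.  Channel widths and the signal template
# are made up; the published channel models differ in their parameters.
import numpy as np

def dog_channels(n, sigmas=(2.0, 4.0, 8.0), ratio=1.67):
    """Stack of radially symmetric DOG channel images of size n x n."""
    y, x = np.indices((n, n)) - n // 2
    r2 = x**2 + y**2
    chans = []
    for s in sigmas:
        g1 = np.exp(-r2 / (2 * s**2))
        g2 = np.exp(-r2 / (2 * (ratio * s)**2))
        c = g1 / g1.sum() - g2 / g2.sum()
        chans.append(c.ravel())
    return np.array(chans)                      # shape (n_channels, n*n)

def cnpw_statistic(image, signal, U):
    """CNPW: dot product of the channelized signal with the channelized image."""
    v_img = U @ image.ravel()                   # channel outputs for the image
    v_sig = U @ signal.ravel()                  # channelized expected signal
    return float(v_sig @ v_img)

n = 65
U = dog_channels(n)
yy, xx = np.indices((n, n)) - n // 2
signal = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))        # Gaussian "tumor" template
noise = np.random.default_rng(1).standard_normal((n, n))
print(cnpw_statistic(noise + signal, signal, U),        # signal-present case
      cnpw_statistic(noise, signal, U))                 # signal-absent case
```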

  19. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F.; Atzeni, S.

    Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma-dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms as well as a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  20. Flexible explicit but rigid implicit learning in a visuomotor adaptation task

    PubMed Central

    Bond, Krista M.

    2015-01-01

    There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690

  1. Gaussian approximation potential modeling of lithium intercalation in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Fujikake, So; Deringer, Volker L.; Lee, Tae Hoon; Krynski, Marcin; Elliott, Stephen R.; Csányi, Gábor

    2018-06-01

    We demonstrate how machine-learning based interatomic potentials can be used to model guest atoms in host structures. Specifically, we generate Gaussian approximation potential (GAP) models for the interaction of lithium atoms with graphene, graphite, and disordered carbon nanostructures, based on reference density functional theory data. Rather than treating the full Li-C system, we demonstrate how the energy and force differences arising from Li intercalation can be modeled and then added to a (preexisting and unmodified) GAP model of pure elemental carbon. Furthermore, we show the benefit of using an explicit pair potential fit to capture "effective" Li-Li interactions and to improve the performance of the GAP model. This provides proof-of-concept for modeling guest atoms in host frameworks with machine-learning based potentials and in the longer run is promising for carrying out detailed atomistic studies of battery materials.
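
    The energy decomposition described above can be sketched as a baseline carbon model plus a learned lithium correction plus an explicit Li-Li pair term. The three callables in the example are hypothetical stand-ins, not the published GAP models.

```python
# Sketch of the energy decomposition: a fixed baseline model for the carbon host,
# a machine-learned correction for Li insertion, and an explicit Li-Li pair
# potential.  The three callables below are hypothetical placeholders.
import numpy as np

def total_energy(positions, species,
                 carbon_model, li_correction_model, li_pair_potential):
    """E_total = E_C(host) + dE_ML(Li in host) + sum of Li-Li pair terms."""
    is_li = np.array([s == "Li" for s in species])
    e_host = carbon_model(positions[~is_li])              # unmodified carbon model
    e_delta = li_correction_model(positions, species)     # learned Li-host difference
    # explicit pairwise Li-Li term to capture "effective" Li-Li interactions
    li_pos = positions[is_li]
    e_pair = 0.0
    for i in range(len(li_pos)):
        for j in range(i + 1, len(li_pos)):
            e_pair += li_pair_potential(np.linalg.norm(li_pos[i] - li_pos[j]))
    return e_host + e_delta + e_pair

# Example with trivial stand-in models (a real workflow would plug in fitted GAPs).
dummy_carbon = lambda pos: -7.4 * len(pos)                 # eV, made-up cohesive energy
dummy_delta = lambda pos, spc: -0.1 * spc.count("Li")      # eV per intercalated Li
dummy_pair = lambda r: 0.5 * np.exp(-r / 2.0)              # eV, short-ranged repulsion

pos = np.array([[0, 0, 0], [1.42, 0, 0], [0, 0, 3.35], [1.42, 0, 3.35]], float)
print(total_energy(pos, ["C", "C", "Li", "Li"], dummy_carbon, dummy_delta, dummy_pair))
```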

  2. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
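
    The on-the-fly Fourier transform of the time-domain wavefield can be sketched as a discrete Fourier sum accumulated inside the time loop at a few chosen frequencies. The 1-D wave update and all numbers below are placeholders for the actual visco-elastic finite-difference scheme.

```python
# Minimal sketch of the "Fourier transform during forward modeling" idea: the
# time-domain wavefield is never stored; instead a discrete Fourier sum is
# accumulated at the selected frequencies while the time loop runs.  The 1-D
# wave update below is a trivial placeholder, not the visco-elastic FD scheme.
import numpy as np

nx, nt, dt = 200, 2000, 1e-3
freqs = np.array([5.0, 10.0, 20.0])                 # Hz, chosen inversion frequencies
u_prev = np.zeros(nx)
u = np.zeros(nx)
u_hat = np.zeros((len(freqs), nx), dtype=complex)   # running Fourier coefficients

c2 = (1500.0 * dt / 10.0) ** 2                      # (c*dt/dx)^2 for a toy medium
for it in range(nt):
    t = it * dt
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)    # placeholder wave operator
    u_next = 2 * u - u_prev + c2 * lap
    u_next[nx // 2] += np.sin(2 * np.pi * 10.0 * t) * dt**2   # toy source
    u_prev, u = u, u_next
    # accumulate the DFT at every grid point for each selected frequency
    u_hat += np.exp(-2j * np.pi * freqs[:, None] * t) * u[None, :] * dt

print(u_hat.shape)   # (n_frequencies, n_gridpoints): monochromatic wavefields
```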

  3. “Epidemiological Criminology”: Coming Full Circle

    PubMed Central

    Lanier, Mark M.

    2009-01-01

    Members of the public health and criminal justice disciplines often work with marginalized populations: people at high risk of drug use, health problems, incarceration, and other difficulties. As these fields increasingly overlap, distinctions between them are blurred, as numerous research reports and funding trends document. However, explicit theoretical and methodological linkages between the 2 disciplines remain rare. A new paradigm that links methods and statistical models of public health with those of their criminal justice counterparts is needed, as are increased linkages between epidemiological analogies, theories, and models and the corresponding tools of criminology. We outline disciplinary commonalities and distinctions, present policy examples that integrate similarities, and propose “epidemiological criminology” as a bridging framework. PMID:19150901

  4. Activation Product Inverse Calculations with NDI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Mark Girard

    NDI based forward calculations of activation product concentrations can be systematically used to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and nosub production-depletion chain. The algorithm is suitable for automation.
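
    A hedged sketch of the inverse calculation follows: the forward activation calculation is treated as a black box, and element concentrations are adjusted until the predicted activation products match the measurements in a least-squares sense. The linear production matrix is a made-up stand-in for an NDI forward calculation.

```python
# Hedged sketch of inferring structural element concentrations from measured
# activation-product concentrations by iterating a forward calculation inside a
# least-squares solver.  The linear forward model is a made-up stand-in.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical production matrix: activation products per unit element concentration.
P = np.array([[0.8, 0.1],
              [0.2, 0.9],
              [0.1, 0.3]])

def forward(elements):
    """Stand-in for the NDI forward calculation of activation products."""
    return P @ elements

true_elements = np.array([2.0, 5.0])
measured = forward(true_elements) * (1 + 0.01 * np.random.default_rng(2).standard_normal(3))

# Infer element concentrations that reproduce the measured products (least squares).
sol = least_squares(lambda c: forward(c) - measured, x0=np.ones(2), bounds=(0, np.inf))
print(sol.x)   # close to [2.0, 5.0]
```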

  5. A multi-species exchange model for fully fluctuating polymer field theory simulations.

    PubMed

    Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H

    2014-11-07

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  6. Assessment of the reduction methods used to develop chemical schemes: building of a new chemical scheme for VOC oxidation suited to three-dimensional multiscale HOx-NOx-VOC chemistry simulations

    NASA Astrophysics Data System (ADS)

    Szopa, S.; Aumont, B.; Madronich, S.

    2005-09-01

    The objective of this work was to develop and assess an automatic procedure to generate reduced chemical schemes for the atmospheric photooxidation of volatile organic compounds (VOC). The procedure is based on (i) the development of a tool for writing the fully explicit schemes for VOC oxidation (see companion paper Aumont et al., 2005), (ii) the application of several commonly used reduction methods to the fully explicit scheme, and (iii) the assessment of resulting errors based on direct comparison between the reduced and full schemes.

    The reference scheme included seventy emitted VOCs chosen to be representative of both anthropogenic and biogenic emissions, and their atmospheric degradation chemistry required more than two million reactions among 350000 species. Three methods were applied to reduce the size of the reference chemical scheme: (i) use of operators, based on the redundancy of the reaction sequences involved in the VOC oxidation, (ii) grouping of primary species having similar reactivities into surrogate species and (iii) grouping of some secondary products into surrogate species. The number of species in the final reduced scheme is 147, this being small enough for practical inclusion in current three-dimensional models. Comparisons between the fully explicit and reduced schemes, carried out with a box model for several typical tropospheric conditions, showed that the reduced chemical scheme accurately predicts ozone concentrations and some other aspects of oxidant chemistry for both polluted and clean tropospheric conditions.

  7. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

    The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217
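
    The kind of analysis behind the quoted charge-density layer can be sketched by binning solvent partial charges by distance to the nearest solute atom. The coordinates and charges below are random stand-ins rather than SPC/E trajectory data.

```python
# Illustrative sketch only: bin solvent partial charges by distance to the nearest
# solute atom to obtain a radial charge profile near the surface.  All coordinates
# and charges are random stand-ins, not simulation output.
import numpy as np

rng = np.random.default_rng(3)
solute = rng.uniform(-10, 10, size=(50, 3))            # Å, fake solute atom positions
solvent_xyz = rng.uniform(-20, 20, size=(3000, 3))     # Å, fake solvent charge sites
solvent_q = rng.choice([-0.8476, 0.4238], size=3000)   # e, SPC/E-like partial charges

# distance from each solvent site to the nearest solute atom
d = np.min(np.linalg.norm(solvent_xyz[:, None, :] - solute[None, :, :], axis=2), axis=1)

bins = np.arange(0.0, 10.0, 0.5)                       # Å shells from the solute
idx = np.digitize(d, bins)
shell_charge = np.array([solvent_q[idx == i].sum() for i in range(1, len(bins))])
print(shell_charge)   # with real trajectories, a positive first-shell layer emerges
```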

  8. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

    The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  9. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
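
    The contrast between a fixed-step explicit scheme and an adaptive explicit Heun scheme with error control can be sketched on a toy storage equation. The bucket model and tolerances below are illustrative and are not one of the six models tested in the paper.

```python
# Sketch of the two schemes highlighted above, applied to a toy nonlinear storage
# model dS/dt = P - k*S**m (an illustrative conceptual bucket, not a model from
# the paper).
def f(S, P=5.0, k=0.8, m=1.5):
    return P - k * max(S, 0.0) ** m

def fixed_explicit_euler(S, dt, t_end):
    t = 0.0
    while t < t_end:
        S += dt * f(S)                      # cheap, but the error is uncontrolled
        t += dt
    return S

def adaptive_heun(S, t_end, tol=1e-4, dt=1.0):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(S)
        k2 = f(S + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)       # difference between Euler and Heun
        if err <= tol:                      # accept the step
            S += 0.5 * dt * (k1 + k2)
            t += dt
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-12)) ** 0.5))
    return S

# the two answers can differ noticeably for large fixed steps
print(fixed_explicit_euler(10.0, dt=1.0, t_end=100.0))
print(adaptive_heun(10.0, t_end=100.0))
```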

  10. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Treesearch

    David Lewis; Ralph Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  11. SPATIALLY EXPLICIT MICRO-LEVEL MODELLING OF LAND USE CHANGE AT THE RURAL-URBAN INTERFACE. (R828012)

    EPA Science Inventory

    This paper describes micro-economic models of land use change applicable to the rural–urban interface in the US. Use of a spatially explicit micro-level modelling approach permits the analysis of regional patterns of land use as the aggregate outcomes of many, disparate...

  12. Detection of the toughest: Pedestrian injury risk as a smooth function of age.

    PubMed

    Niebuhr, Tobias; Junge, Mirko

    2017-07-04

    Though it is common to refer to age-specific groups (e.g., children, adults, elderly), smooth trends conditional on age are mainly ignored in the literature. The present study examines the pedestrian injury risk in full-frontal pedestrian-to-passenger car accidents and incorporates age, in addition to collision speed and injury severity, as a plug-in parameter. Recent work introduced a model for pedestrian injury risk functions using explicit formulae with easily interpretable model parameters. This model is expanded by pedestrian age as another model parameter. Using the German In-Depth Accident Study (GIDAS) to obtain age-specific risk proportions, the model parameters are fitted to the raw data and then smoothed by broken-line regression. The approach supplies explicit probabilities for pedestrian injury risk conditional on pedestrian age, collision speed, and injury severity under investigation. All results are consistent with each other in the sense that risks for more severe injuries are less probable than those for less severe injuries. As a side product, the approach indicates specific ages at which the risk behavior fundamentally changes. These threshold values can be interpreted as the most robust ages for pedestrians. The obtained age-wise risk functions can be aggregated and adapted to any population. The presented approach is formulated in such general terms that it can be directly used for other data sets or additional parameters; for example, the pedestrian's sex. Thus far, no other study using age as a plug-in parameter can be found.
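
    A hedged sketch of the modelling idea follows: an explicit injury-risk curve in collision speed whose parameters vary with age through a broken-line (piecewise-linear) function. The logistic form and every number below are illustrative, not the fitted GIDAS values.

```python
# Illustrative sketch: an explicit injury-risk curve in collision speed whose
# sensitivity varies smoothly ("broken-line") with pedestrian age.  The functional
# form and all numbers are made up, not the published fit.
import numpy as np

def broken_line(age, knots=(15.0, 60.0), values=(1.0, 0.6, 1.4)):
    """Piecewise-linear age effect with breakpoints at the knot ages."""
    return np.interp(age, [0.0, *knots, 100.0],
                          [values[0], values[0], values[1], values[2]])

def injury_risk(speed_kmh, age, alpha=-6.0, beta=0.12):
    """Illustrative logistic risk of a given injury severity vs. speed and age."""
    scale = broken_line(age)                     # age modifies the speed sensitivity
    z = alpha + beta * scale * speed_kmh
    return 1.0 / (1.0 + np.exp(-z))

for age in (8, 30, 75):
    print(age, [round(float(injury_risk(v, age)), 3) for v in (30, 50, 70)])
```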

  13. A Galilean Invariant Explicit Algebraic Reynolds Stress Model for Curved Flows

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath

    1996-01-01

    A Galilean invariant weak-equilibrium hypothesis that is sensitive to streamline curvature is proposed. The hypothesis leads to an algebraic Reynolds stress model for curved flows that is fully explicit and self-consistent. The model is tested in curved homogeneous shear flow: the agreement is excellent with the Reynolds stress closure model and adequate with available experimental data.

  14. A functional-dynamic reflection on participatory processes in modeling projects.

    PubMed

    Seidl, Roman

    2015-12-01

    The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it has not been well investigated whether, and how explicitly, these functions and the dynamics of a participatory process are reflected by individual modeling projects. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation; most often, more than one function per project can be identified, along with the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling, covering diverse approaches, and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and clearly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize a particular project's approach to stakeholder collaboration, and thus quality management.

  15. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and explicit shared memory) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is about 4 to 5 times less than that of data parallel methods. The issues involved (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) while programming in the explicit shared memory model are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  16. A multi-species exchange model for fully fluctuating polymer field theory simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Düchs, Dominik; Delaney, Kris T., E-mail: kdelaney@mrl.ucsb.edu; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  17. Simulating ectomycorrhiza in boreal forests: implementing ectomycorrhizal fungi model MYCOFON in CoupModel (v5)

    NASA Astrophysics Data System (ADS)

    He, Hongxing; Meyer, Astrid; Jansson, Per-Erik; Svensson, Magnus; Rütting, Tobias; Klemedtsson, Leif

    2018-02-01

    The symbiosis between plants and Ectomycorrhizal fungi (ECM) is shown to considerably influence the carbon (C) and nitrogen (N) fluxes between the soil, rhizosphere, and plants in boreal forest ecosystems. However, ECM are either neglected or presented as an implicit, undynamic term in most ecosystem models, which can potentially reduce the predictive power of models.

    In order to investigate the necessity of an explicit consideration of ECM in ecosystem models, we implement the previously developed MYCOFON model into a detailed process-based, soil-plant-atmosphere model, Coup-MYCOFON, which explicitly describes the C and N fluxes between ECM and roots. This new Coup-MYCOFON model approach (ECM explicit) is compared with two simpler model approaches: one containing ECM implicitly as a dynamic uptake of organic N considering the plant roots to represent the ECM (ECM implicit), and the other a static N approach in which plant growth is limited to a fixed N level (nonlim). Parameter uncertainties are quantified using Bayesian calibration in which the model outputs are constrained to current forest growth and soil C / N ratio for four forest sites along a climate and N deposition gradient in Sweden and simulated over a 100-year period.

    The nonlim approach could not describe the soil C / N ratio due to large overestimation of soil N sequestration but simulated the forest growth reasonably well. The ECM implicit and explicit approaches both describe the soil C / N ratio well but slightly underestimate the forest growth. The implicit approach simulated lower litter production and soil respiration than the explicit approach. The ECM explicit Coup-MYCOFON model provides a more detailed description of internal ecosystem fluxes and feedbacks of C and N between plants, soil, and ECM. Our modeling highlights the need to incorporate ECM and organic N uptake into ecosystem models, and the nonlim approach is not recommended for future long-term soil C and N predictions. We also provide a key set of posterior fungal parameters that can be further investigated and evaluated in future ECM studies.

  18. The importance of explicitly mapping instructional analogies in science education

    NASA Astrophysics Data System (ADS)

    Asay, Loretta Johnson

    Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.

  19. Combining Model-driven and Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Whittle, John

    2004-01-01

    We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.

  20. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
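
    For reference, the two ingredients named in the title reduce to a small calculation: a classification (confusion) table and Cohen's kappa computed from it.

```python
# Minimal sketch: a classification (confusion) table for a presence/absence model
# and Cohen's kappa computed from it.
import numpy as np

def cohens_kappa(table):
    """Kappa from a square confusion matrix (rows = observed, cols = predicted)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                                    # observed agreement
    pe = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Example table for a spatially explicit presence/absence prediction.
confusion = [[40, 10],    # observed present: 40 predicted present, 10 predicted absent
             [ 5, 45]]    # observed absent:   5 predicted present, 45 predicted absent
print(round(cohens_kappa(confusion), 3))   # 0.7
```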

  1. Moderators of the Relationship between Implicit and Explicit Evaluation

    PubMed Central

    Nosek, Brian A.

    2005-01-01

    Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship – self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292

  2. FAST TRACK COMMUNICATION: Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Bouchaud, Jean-Philippe

    2008-09-01

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class.

  3. The 1/N Expansion of Tensor Models Beyond Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Gurau, Razvan

    2014-09-01

    We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants can be written as explicit series in 1/N plus bounded rest terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect a uniform scaling bound. In particular, the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.

  4. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
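
    The Latin hypercube idea itself is compact: split each parameter range into n equal-probability strata and randomly pair strata across dimensions, so n model runs cover every stratum of every parameter exactly once. The sketch below is a generic implementation, not the tool used in the study.

```python
# Minimal sketch of Latin hypercube sampling: each parameter range is split into
# n equal-probability strata and the strata are randomly paired across dimensions,
# so n runs cover every stratum of every parameter exactly once.
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=np.random.default_rng(4)):
    u = rng.uniform(size=(n_samples, n_dims))          # position within each stratum
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        strata = rng.permutation(n_samples)            # shuffle stratum assignment
        samples[:, d] = (strata + u[:, d]) / n_samples # values in (0, 1)
    return samples

# e.g. 10 forest-landscape model runs over 3 uncertain inputs instead of thousands
# of plain Monte Carlo draws
lhs = latin_hypercube(10, 3)
print(np.sort(lhs[:, 0]))   # exactly one value per tenth of the first parameter's range
```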

  5. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
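
    The random-probing family can be sketched with a Hutchinson-style estimate of the Hessian diagonal, one of the simplest resolution proxies. The dense toy matrix below stands in for the matrix-free Hessian of a real inversion.

```python
# Sketch of random probing: treat the Hessian as a black-box operator (each
# application costs roughly one forward plus one adjoint simulation), apply it to
# uncorrelated random test models, and correlate input with output.  The dense,
# mostly diagonal toy matrix below is only a stand-in for the matrix-free operator.
import numpy as np

rng = np.random.default_rng(5)
n = 300
off = 0.05 * rng.standard_normal((n, n))
H_toy = np.diag(0.5 + 1.5 * rng.random(n)) + 0.5 * (off + off.T)   # symmetric stand-in
hessian_apply = lambda m: H_toy @ m        # in a real code: wave-equation solves

def probe_diagonal(hessian_apply, n, n_probes=20):
    """Hutchinson-style estimate of diag(H) from a handful of random probes."""
    est = np.zeros(n)
    for _ in range(n_probes):
        r = rng.choice([-1.0, 1.0], size=n)        # uncorrelated random test model
        est += r * hessian_apply(r)                # auto-correlate input and output
    return est / n_probes

diag_est = probe_diagonal(hessian_apply, n)
print(np.corrcoef(diag_est, np.diag(H_toy))[0, 1])   # high already with few probes
```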

  6. Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.

    PubMed

    Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R

    2004-10-01

    Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism.

  7. Modelling temporal and spatial dynamics of benthic fauna in North-West-European shelf seas

    NASA Astrophysics Data System (ADS)

    Lessin, Gennadi; Bruggeman, Jorn; Artioli, Yuri; Butenschön, Momme; Blackford, Jerry

    2017-04-01

    Benthic zones of shallow shelf seas receive high amounts of organic material. Physical processes such as resuspension, as well as complex transformations mediated by diverse faunal and microbial communities, define the fate of this material, which can be returned to the water column, reworked within sediments or ultimately buried. In recent years, numerical models of varying complexity and serving different goals have been developed and applied in order to better understand and predict dynamics of benthic processes. ERSEM includes explicit parameterisations of several groups of benthic biota, which makes it particularly applicable for studies of benthic biodiversity, biological interactions within sediments and benthic-pelagic coupling. To assess model skill in reproducing temporal (inter-annual and seasonal) dynamics of major benthic macrofaunal groups, 1D model simulation results were compared with data from the Western Channel Observatory (WCO) benthic survey. The benthic model was forced with organic matter deposition rates inferred from observed phytoplankton abundance and model parameters were subsequently recalibrated. Based on the comparison of model results and WCO data, deposit-feeders exhibit clear seasonal variability, while for suspension-feeders inter-annual variability is more pronounced. Spatial distribution of benthic fauna was investigated using results of a full-scale NEMO-ERSEM hindcast simulation of the North-West European Shelf Seas area, covering the period of 1981-2014. Results suggest a close relationship between the spatial distribution of biomass of benthic faunal functional groups and bathymetry, hydrodynamic conditions and organic matter supply. Our work highlights that it is feasible to construct, implement and validate models that explicitly include functional groups of benthic macrofauna. Moreover, the modelling approach delivers detailed information on benthic biogeochemistry and the food web at spatial and temporal scales that are unavailable through other sources but highly relevant to marine management, planning and policy.

  8. Modeling Transport of Turbulent Fluxes in a Heterogeneous Urban Canopy Using a Spatially Explicit Energy Balance

    NASA Astrophysics Data System (ADS)

    Moody, M.; Bailey, B.; Stoll, R., II

    2017-12-01

    Understanding how changes in the microclimate near individual plants affect the surface energy budget is integral to modeling land-atmosphere interactions and a wide range of near surface atmospheric boundary layer phenomena. In urban areas, the complex geometry of the urban canopy layer results in large spatial deviations of turbulent fluxes, further complicating the development of models. Accurately accounting for this heterogeneity in order to model urban energy and water use requires a sub-plant level understanding of microclimate variables. We present analysis of new experimental field data taken in and around two Blue Spruce (Picea pungens) trees at the University of Utah in 2015. The test sites were chosen in order to study the effects of heterogeneity in an urban environment. An array of sensors was placed in and around the conifers to quantify transport in the soil-plant-atmosphere continuum: radiative fluxes, temperature, sap fluxes, etc. A spatial array of LEMS (Local Energy Measurement Systems) was deployed to obtain pressure, surrounding air temperature and relative humidity. These quantities are used to calculate the radiative and turbulent fluxes. Relying on measurements alone is insufficient to capture the complexity of microclimate distribution as one reaches sub-plant scales. A spatially-explicit radiation and energy balance model previously developed for deciduous trees was extended to include conifers. The model discretizes the tree into isothermal sub-volumes on which energy balances are performed and utilizes incoming radiation as the primary forcing input. The radiative transfer component of the model yields good agreement between measured and modeled upward longwave and shortwave radiative fluxes. Ultimately, the model was validated through an examination of the full energy budget including radiative and turbulent fluxes through isolated Picea pungens in an urban environment.

  9. Pre-Service Teachers' Implicit and Explicit Attitudes toward Obesity Influence Their Judgments of Students

    ERIC Educational Resources Information Center

    Glock, Sabine; Beverborg, Arnoud Oude Groote; Müller, Barbara C. N.

    2016-01-01

    Obese children experience disadvantages in school and discrimination from their teachers. Teachers' implicit and explicit attitudes have been identified as contributing to these disadvantages. Drawing on dual process models, we investigated the nature of pre-service teachers' implicit and explicit attitudes, their motivation to respond without…

  10. Communication: Role of explicit water models in the helix folding/unfolding processes

    NASA Astrophysics Data System (ADS)

    Palazzesi, Ferruccio; Salvalaglio, Matteo; Barducci, Alessandro; Parrinello, Michele

    2016-09-01

    In recent years, it has become evident that computer simulations can play a relevant role in modelling protein dynamical motions owing to their ability to provide a full atomistic picture of the processes under investigation. The ability of current protein force fields to reproduce the correct thermodynamic and kinetic behaviour of a system is thus an essential ingredient for improving our understanding of many relevant biological functionalities. In this work, employing the latest developments of the metadynamics framework, we compare the ability of state-of-the-art all-atom empirical functions and water models to consistently reproduce the folding and unfolding of a helix turn motif in a model peptide. This theoretical study puts in evidence that the choice of water model can influence the thermodynamics and the kinetics of the system under investigation, and for this reason cannot be considered trivial.

  11. Explicit and Implicit Stigma of Mental Illness as Predictors of the Recovery Attitudes of Assertive Community Treatment Practitioners.

    PubMed

    Stull, Laura G; McConnell, Haley; McGrew, John; Salyers, Michelle P

    2017-01-01

    While explicit negative stereotypes of mental illness are well established as barriers to recovery, implicit attitudes also may negatively impact outcomes. The current study is unique in its focus on both explicit and implicit stigma as predictors of recovery attitudes of mental health practitioners. Assertive Community Treatment practitioners (n = 154) from 55 teams completed online measures of stigma, recovery attitudes, and an Implicit Association Test (IAT). Three of four explicit stigma variables (perceptions of blameworthiness, helplessness, and dangerousness) and all three implicit stigma variables were associated with lower recovery attitudes. In a multivariate, hierarchical model, however, implicit stigma did not explain additional variance in recovery attitudes. In the overall model, perceptions of dangerousness and implicitly associating mental illness with "bad" were significant individual predictors of lower recovery attitudes. The current study demonstrates a need for interventions to lower explicit stigma, particularly perceptions of dangerousness, to increase mental health providers' expectations for recovery. The extent to which implicit and explicit stigma differentially predict outcomes, including recovery attitudes, needs further research.

  12. A Comprehensive Structural Dynamic Analysis Approach for Multi Mission Earth Entry Vehicle (MMEEV) Development

    NASA Technical Reports Server (NTRS)

    Perino, Scott; Bayandor, Javid; Siddens, Aaron

    2012-01-01

    The anticipated NASA Mars Sample Return Mission (MSR) requires a simple and reliable method with which to return collected Martian samples to Earth for scientific analysis. The Multi-Mission Earth Entry Vehicle (MMEEV) is NASA's proposed solution to this MSR requirement. Key aspects of the MMEEV are its reliable and passive operation, energy absorbing foam-composite structure, and modular impact sphere (IS) design. To aid in the development of an EEV design that can be modified for various mission requirements, two fully parametric finite element models were developed. The first model was developed in an explicit finite element code and was designed to evaluate the impact response of the vehicle and payload during the final stage of the vehicle's return to Earth. The second model was developed in an explicit code and was designed to evaluate the static and dynamic structural response of the vehicle during launch and reentry. In contrast to most other FE models, built through a Graphical User Interface (GUI) pre-processor, the current model was developed using a coding technique that allows the analyst to quickly change nearly all aspects of the model including: geometric dimensions, material properties, load and boundary conditions, mesh properties, and analysis controls. Using the developed design tool, a full range of proposed designs can quickly be analyzed numerically and thus the design trade space for the EEV can be fully understood. An engineer can then quickly reach the best design for a specific mission and also adapt and optimize the general design for different missions.

  13. Assessing chemistry schemes and constraints in air quality models used to predict ozone in London against the detailed Master Chemical Mechanism.

    PubMed

    Malkin, Tamsin L; Heard, Dwayne E; Hood, Christina; Stocker, Jenny; Carruthers, David; MacKenzie, Ian A; Doherty, Ruth M; Vieno, Massimo; Lee, James; Kleffmann, Jörg; Laufs, Sebastian; Whalley, Lisa K

    2016-07-18

    Air pollution is the environmental factor with the greatest impact on human health in Europe. Understanding the key processes driving air quality across the relevant spatial scales, especially during pollution exceedances and episodes, is essential to provide effective predictions for both policymakers and the public. It is particularly important for policy regulators to understand the drivers of local air quality that can be regulated by national policies versus the contribution from regional pollution transported from mainland Europe or elsewhere. One of the main objectives of the Coupled Urban and Regional processes: Effects on AIR quality (CUREAIR) project is to determine local and regional contributions to ozone events. A detailed zero-dimensional (0-D) box model run with the Master Chemical Mechanism (MCMv3.2) is used as the benchmark model against which the less explicit chemistry mechanisms of the Generic Reaction Set (GRS) and the Common Representative Intermediates (CRIv2-R5) schemes are evaluated. GRS and CRI are used by the Atmospheric Dispersion Modelling System (ADMS-Urban) and the regional chemistry transport model EMEP4UK, respectively. The MCM model uses a near-explicit chemical scheme for the oxidation of volatile organic compounds (VOCs) and is constrained to observations of VOCs, NOx, CO, HONO (nitrous acid), photolysis frequencies and meteorological parameters measured during the ClearfLo (Clean Air for London) campaign. The sensitivity of the less explicit chemistry schemes to different model inputs has been investigated: Constraining GRS to the total VOC observed during ClearfLo as opposed to VOC derived from ADMS-Urban dispersion calculations, including emissions and background concentrations, led to a significant increase (674% during winter) in modelled ozone. The inclusion of HONO chemistry in this mechanism, particularly during wintertime when other radical sources are limited, led to substantial increases in the ozone levels predicted (223%). When the GRS and CRIv2-R5 schemes are run with the equivalent model constraints to the MCM, they are able to reproduce the level of ozone predicted by the near-explicit MCM to within 40% and 20% respectively for the majority of the time. An exception to this trend was observed during pollution episodes experienced in the summer, when anticyclonic conditions favoured increased temperatures and elevated O3. The in situ O3 predicted by the MCM was heavily influenced by biogenic VOCs during these conditions and the low GRS [O3] : MCM [O3] ratio (and low CRIv2-R5 [O3] : MCM [O3] ratio) demonstrates that these less explicit schemes under-represent the full O3 creation potential of these VOCs. To fully assess the influence of the in situ O3 generated from local emissions versus O3 generated upwind of London and advected in, the time since emission (and, hence, how far the real atmosphere is from steady state) must be determined. From estimates of the mean transport time determined from the NOx : NOy ratio observed at North Kensington during the summer and comparison of the O3 predicted by the MCM model after this time, ∼60% of the median observed [O3] could be generated from local emissions. During the warmer conditions experienced during the easterly flows, however, the observed [O3] may be even more heavily influenced by London's emissions.

  14. Occupant Responses in a Full-Scale Crash Test of the Sikorsky ACAP Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; McEntire, Joseph; Lewis, Alan

    2002-01-01

    A full-scale crash test of the Sikorsky Advanced Composite Airframe Program (ACAP) helicopter was performed in 1999 to generate experimental data for correlation with a crash simulation developed using an explicit nonlinear, transient dynamic finite element code. The airframe was the residual flight test hardware from the ACAP program. For the test, the aircraft was outfitted with two crew and two troop seats, and four anthropomorphic test dummies. While the results of the impact test and crash simulation have been documented fairly extensively in the literature, the focus of this paper is to present the detailed occupant response data obtained from the crash test and to correlate the results with injury prediction models. These injury models include the Dynamic Response Index (DRI), the Head Injury Criteria (HIC), the spinal load requirement defined in FAR Part 27.562(c), and a comparison of the duration and magnitude of the occupant vertical acceleration responses with the Eiband whole-body acceleration tolerance curve.
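
    For reference, the Head Injury Criterion mentioned above is computed directly from the resultant head acceleration history. The sketch below applies the standard HIC definition to a synthetic half-sine pulse rather than to the ACAP test data.

```python
# Sketch of the Head Injury Criterion from a resultant head acceleration a(t) in g:
#   HIC = max over (t1, t2) of (t2 - t1) * [ (1/(t2 - t1)) * integral of a dt ]^2.5
# The half-sine pulse below is synthetic and illustrative only.
import numpy as np

def hic(time_s, accel_g, max_window_s=0.036):
    """Brute-force HIC with the window length capped (36 ms here)."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (accel_g[1:] + accel_g[:-1])
                                           * np.diff(time_s))))   # running integral
    best = 0.0
    for i in range(len(time_s) - 1):
        for j in range(i + 1, len(time_s)):
            dt = time_s[j] - time_s[i]
            if dt > max_window_s:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

t = np.linspace(0.0, 0.1, 1001)                     # s
pulse = 60.0 * np.sin(np.pi * t / 0.05)             # g, 50 ms half-sine, 60 g peak
pulse[t > 0.05] = 0.0
print(round(hic(t, pulse), 1))
```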

  15. Regulatory T cell effects in antitumor laser immunotherapy: a mathematical model and analysis

    NASA Astrophysics Data System (ADS)

    Dawkins, Bryan A.; Laverty, Sean M.

    2016-03-01

    Regulatory T cells (Tregs) have tremendous influence on treatment outcomes in patients receiving immunotherapy for cancerous tumors. We present a mathematical model incorporating the primary cellular and molecular components of antitumor laser immunotherapy. We explicitly model developmental classes of dendritic cells (DCs), cytotoxic T cells (CTLs), primary and metastatic tumor cells, and tumor antigen. Regulatory T cells have been shown to kill antigen-presenting cells, to influence dendritic cell maturation and migration, to kill activated killer CTLs in the tumor microenvironment, and to influence CTL proliferation. Because Tregs affect the explicitly modeled cell populations but Treg dynamics themselves are not modeled explicitly, we use model parameters to analyze the effects of Treg immunosuppressive activity. We outline a systematic method for assigning clinical outcomes to model simulations and use it to associate simulated patient treatment outcomes with Treg activity.
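    The toy system below is not the authors' model; it is a hypothetical three-compartment ODE sketch (tumor cells, CTLs, mature DCs) in which a single parameter s scales Treg immunosuppression by reducing CTL killing and proliferation, mimicking the kind of parameter-based Treg analysis described above. All rate constants are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, s):
          """Hypothetical dynamics: T = tumor cells, C = CTLs, D = mature dendritic cells.
          The suppression factor s in [0, 1] scales down CTL killing and activation (Treg effect)."""
          T, C, D = y
          dT = 0.3 * T * (1.0 - T / 1e6) - (1.0 - s) * 1e-4 * C * T   # logistic growth minus CTL kill
          dC = (1.0 - s) * 0.5 * D * T / (1e4 + T) - 0.05 * C         # DC-driven activation minus decay
          dD = 1.0e-3 * T - 0.1 * D                                   # antigen-driven DC maturation
          return [dT, dC, dD]

      for s in (0.0, 0.5, 0.9):  # increasing Treg suppression
          sol = solve_ivp(rhs, (0.0, 100.0), [1e4, 10.0, 1.0], args=(s,))
          print(f"suppression s={s:.1f}: final tumor burden ~ {sol.y[0, -1]:.2e} cells")

    Sweeping s and recording whether the tumor compartment collapses or escapes is one way to mimic the outcome-classification step described in the abstract.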

  16. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  17. A Naturalistic Inquiry into Praxis When Education Instructors Use Explicit Metacognitive Modeling

    ERIC Educational Resources Information Center

    Shannon, Nancy Gayle

    2014-01-01

    This naturalistic inquiry brought together six education instructors in one small teacher preparation program to explore what happens to educational instructors' praxis when the education instructors use explicit metacognitive modeling to reveal their thinking behind their pedagogical decision-making. The participants, while teaching an…

  18. Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach

    USGS Publications Warehouse

    Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy

    2013-01-01

    Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCRs) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species were chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with those obtained for BCRs using a non-spatial hierarchical model (Sauer and Link 2011). Overall, estimates of mean trend are consistent between the two approaches, but the spatial model provides much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in the non-spatial analyses. Incorporating a spatial component in the analysis not only yields relevant and biologically meaningful estimates of population trends, but also provides a flexible framework for obtaining trend estimates for any area.
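    A compact sketch of the kind of spatially explicit trend structure described above: stratum-level log-abundance with a shared mean trend plus spatially correlated trend deviations drawn from a proper conditional autoregressive (CAR) prior. The adjacency structure, parameter values, and Poisson observation step are illustrative assumptions, not the BBS model itself.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical adjacency matrix W for 5 spatial units on a line (1 = neighbours).
      n = 5
      W = np.zeros((n, n))
      for i in range(n - 1):
          W[i, i + 1] = W[i + 1, i] = 1.0
      D = np.diag(W.sum(axis=1))

      # Proper CAR precision: Q = tau * (D - alpha * W); alpha < 1 keeps Q positive definite.
      tau, alpha = 4.0, 0.9
      Q = tau * (D - alpha * W)
      trend_dev = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q))  # spatially correlated deviations

      # Stratum-specific trend = shared mean trend + CAR deviation; simulate Poisson counts over years.
      beta0, mean_trend = np.log(20.0), -0.01
      years = np.arange(0, 15)
      for i in range(n):
          lam = np.exp(beta0 + (mean_trend + trend_dev[i]) * years)
          counts = rng.poisson(lam)
          print(f"unit {i}: trend {mean_trend + trend_dev[i]:+.3f}, first counts {counts[:5]}")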

  19. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    NASA Astrophysics Data System (ADS)

    Zhou, Ruhong; Berne, Bruce J.

    2002-10-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.

  20. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    PubMed Central

    Zhou, Ruhong; Berne, Bruce J.

    2002-01-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields. PMID:12242327

  1. Can a continuum solvent model reproduce the free energy landscape of a beta-hairpin folding in water?

    PubMed

    Zhou, Ruhong; Berne, Bruce J

    2002-10-01

    The folding free energy landscape of the C-terminal beta-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the beta-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native beta-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this beta-hairpin. Furthermore, the beta-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and approximately 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
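    Since the three records above all rely on replica-exchange sampling, the short sketch below shows the standard Metropolis swap criterion between neighbouring temperature replicas; the temperatures and energies are made-up numbers, and this is a generic illustration rather than the authors' simulation protocol.

      import math
      import random

      K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K)

      def swap_accepted(energy_i, temp_i, energy_j, temp_j):
          """Metropolis criterion for exchanging configurations between two replicas."""
          delta = (1.0 / (K_B * temp_i) - 1.0 / (K_B * temp_j)) * (energy_i - energy_j)
          return delta >= 0.0 or random.random() < math.exp(delta)

      # Hypothetical neighbouring replicas near the hairpin folding temperature.
      print(swap_accepted(energy_i=-350.2, temp_i=282.0, energy_j=-348.7, temp_j=300.0))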

  2. Spatially Explicit Life Cycle Analysis of Cellulosic Ethanol Production Scenarios in Southwestern Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cronin, Keith R.; Runge, Troy M.; Zhang, Xuesong

    Modeling the life cycle of fuel pathways for cellulosic ethanol (CE) can help identify logistical barriers and anticipated impacts for the emerging commercial CE industry. Such models contain high amounts of variability, primarily due to the varying nature of agricultural production but also because of limitations in the availability of data at the local scale, resulting in the typical practice of using average values. In this study, 12 spatially explicit, cradle-to-refinery-gate CE pathways were developed that vary by feedstock (corn stover, switchgrass, and Miscanthus), nitrogen application rate (higher, lower), pretreatment method (ammonia fiber expansion [AFEX], dilute acid), and co-product treatment method (mass allocation, sub-division), in which feedstock production was modeled at the watershed scale over a nine-county area in Southwestern Michigan. When comparing feedstocks, the model showed that corn stover yielded higher global warming potential (GWP), acidification potential (AP), and eutrophication potential (EP) than the perennial feedstocks of switchgrass and Miscanthus, on an average per-area basis. Full life cycle results per MJ of produced ethanol demonstrated more mixed results, with corn stover-derived CE scenarios that use sub-division as a co-product treatment method yielding similarly favorable outcomes as switchgrass- and Miscanthus-derived CE scenarios. Variability was found to be greater between feedstocks than between watersheds. Additionally, scenarios using dilute acid pretreatment had more favorable results than those using AFEX pretreatment.

  3. Spatially Explicit Life Cycle Analysis of Cellulosic Ethanol Production Scenarios in Southwestern Michigan

    DOE PAGES

    Cronin, Keith R.; Runge, Troy M.; Zhang, Xuesong; ...

    2016-07-13

    Modeling the life cycle of fuel pathways for cellulosic ethanol (CE) can help identify logistical barriers and anticipated impacts for the emerging commercial CE industry. Such models contain high amounts of variability, primarily due to the varying nature of agricultural production but also because of limitations in the availability of data at the local scale, resulting in the typical practice of using average values. In this study, 12 spatially explicit, cradle-to-refinery-gate CE pathways were developed that vary by feedstock (corn stover, switchgrass, and Miscanthus), nitrogen application rate (higher, lower), pretreatment method (ammonia fiber expansion [AFEX], dilute acid), and co-product treatment method (mass allocation, sub-division), in which feedstock production was modeled at the watershed scale over a nine-county area in Southwestern Michigan. When comparing feedstocks, the model showed that corn stover yielded higher global warming potential (GWP), acidification potential (AP), and eutrophication potential (EP) than the perennial feedstocks of switchgrass and Miscanthus, on an average per-area basis. Full life cycle results per MJ of produced ethanol demonstrated more mixed results, with corn stover-derived CE scenarios that use sub-division as a co-product treatment method yielding similarly favorable outcomes as switchgrass- and Miscanthus-derived CE scenarios. Variability was found to be greater between feedstocks than between watersheds. Additionally, scenarios using dilute acid pretreatment had more favorable results than those using AFEX pretreatment.

  4. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the use of divergence corrections to explicitly enforce the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled highly accurately using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for a regularized misfit function. To avoid the huge memory requirement and long computation time needed to form the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equations. In each CG iteration, the dominant cost is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be computed as a pseudo-forward modeling run. This avoids explicit calculation and storage of the full Jacobian matrix and leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with a horizontal earth surface and with topographic surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
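    A minimal sketch of the matrix-free step described above: conjugate gradients on the (damped) Gauss-Newton normal equations, where the Jacobian appears only through the two operator calls J*x and J^T*y that the abstract maps onto pseudo-forward modeling runs. The operators here are stand-in dense-matrix closures, purely for illustration.

      import numpy as np

      def cg_gauss_newton(jvp, jtvp, residual, n_model, damping=1e-2, n_iter=50, tol=1e-8):
          """Solve (J^T J + damping*I) dm = J^T r using only J*x (jvp) and J^T*y (jtvp) products."""
          b = jtvp(residual)
          dm = np.zeros(n_model)
          r = b.copy()
          p = r.copy()
          rs_old = r @ r
          for _ in range(n_iter):
              Ap = jtvp(jvp(p)) + damping * p     # two "pseudo-forward" evaluations per iteration
              alpha = rs_old / (p @ Ap)
              dm += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return dm

      # Stand-in operators built from a small random Jacobian (a real code would run forward/adjoint models).
      rng = np.random.default_rng(1)
      J = rng.normal(size=(30, 10))
      data_residual = rng.normal(size=30)
      dm = cg_gauss_newton(lambda x: J @ x, lambda y: J.T @ y, data_residual, n_model=10)
      print(dm[:3])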

  5. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than simple, traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called 'the perturbation method' and 'the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and apply to both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can generally be obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, depending on the class of medium anisotropy. Explicit expressions are given for two special cases: isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.

  6. Modeling Task Switching without Switching Tasks: A Short-Term Priming Account of Explicitly Cued Performance

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2005-01-01

    Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…

  7. The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation

    PubMed Central

    Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric

    2016-01-01

    Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265

  8. Alcohol-Approach Inclinations and Drinking Identity as Predictors of Behavioral Economic Demand for Alcohol

    PubMed Central

    Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.

    2016-01-01

    Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444

  9. An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-11-01

    In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, based on the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operation of the statistics engines that requires explicit communication. We then evaluate this proposed scheme in a shared-memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application and as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.

  10. A Three-Stage Model of Housing Search,

    DTIC Science & Technology

    1980-05-01

    Hanushek and Quigley, 1978) that recognize housing search as a transaction cost but rarely examine search behavior; and descriptive studies of search...explicit mobility models that have recently appeared in the literature (Speare et al., 1975; Hanushek and Quigley, 1978; Brummell, 1979). Although...1978; Hanushek and Quigley, 1978; Cronin, 1978). By explicitly assigning dollar values, the economic models attempt to obtain an objective measure of

  11. DoD Product Line Practice Workshop Report

    DTIC Science & Technology

    1998-05-01

    capability. The essential enterprise management practices include ensuring sound business goals, providing an appropriate funding model, performing...business. This way requires vision and explicit support at the organizational level. There must be an explicit funding model to support the development...the same group seems to work best in smaller organizations. A funding model for core asset development also needs to be developed because the core

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Allison, Steven D.; Davidson, Eric A.

    Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.

  13. Self-Love or Other-Love? Explicit Other-Preference but Implicit Self-Preference

    PubMed Central

    Gebauer, Jochen E.; Göritz, Anja S.; Hofmann, Wilhelm; Sedikides, Constantine

    2012-01-01

    Do humans prefer the self even over their favorite other person? This question has pervaded philosophy and the social-behavioral sciences. Psychology's distinction between explicit and implicit preferences calls for a two-tiered solution. Our evolutionarily based Dissociative Self-Preference Model offers two hypotheses. Other-preferences prevail at an explicit level, because they convey caring for others, which strengthens interpersonal bonds, a major evolutionary advantage. Self-preferences, however, prevail at an implicit level, because they facilitate self-serving automatic behavior, which favors the self in life-or-death situations, also a major evolutionary advantage. We examined the data of 1,519 participants, who completed an explicit measure and one of five implicit measures of preferences for self versus favorite other. The results were consistent with the Dissociative Self-Preference Model. Explicitly, participants preferred their favorite other over the self. Implicitly, however, they preferred the self over their favorite other (be it their child, romantic partner, or best friend). Results are discussed in relation to evolutionary theorizing on self-deception. PMID:22848605

  14. A Metacognitive Approach to "Implicit" and "Explicit" Evaluations: Comment on Gawronski and Bodenhausen (2006)

    ERIC Educational Resources Information Center

    Petty, Richard E.; Brinol, Pablo

    2006-01-01

    Comments on the article by B. Gawronski and G. V. Bodenhausen (see record 2006-10465-003). A metacognitive model (MCM) is presented to describe how automatic (implicit) and deliberative (explicit) measures of attitudes respond to change attempts. The model assumes that contemporary implicit measures tap quick evaluative associations, whereas…

  15. A Watershed-based spatially-explicit demonstration of an Integrated Environmental Modeling Framework for Ecosystem Services in the Coal River Basin (WV, USA)

    EPA Science Inventory

    We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quant...

  16. USING THE ECLPSS SOFTWARE ENVIRONMENT TO BUILD A SPATIALLY EXPLICIT COMPONENT-BASED MODEL OF OZONE EFFECTS ON FOREST ECOSYSTEMS. (R827958)

    EPA Science Inventory

    We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ...

  17. A masked negative self-esteem? Implicit and explicit self-esteem in patients with Narcissistic Personality Disorder.

    PubMed

    Marissen, Marlies A E; Brouwer, Marlies E; Hiemstra, Annemarie M F; Deen, Mathijs L; Franken, Ingmar H A

    2016-08-30

    The mask model of narcissism states that the narcissistic traits of patients with Narcissistic Personality Disorder (NPD) are the result of a compensatory reaction to underlying ego fragility. This model assumes that high explicit self-esteem masks low implicit self-esteem. However, research on narcissism has predominantly focused on non-clinical participants, and data derived from patients diagnosed with NPD remain scarce. Therefore, the goal of the present study was to test the mask-model hypothesis of narcissism among patients with NPD. Male patients with NPD were compared to patients with other personality disorders and healthy participants on implicit and explicit self-esteem. NPD patients did not differ in levels of explicit and implicit self-esteem from either the psychiatric or the healthy control group. Overall, the current study found no evidence in support of the mask model of narcissism in a clinical group. This implies that it may not be relevant for clinicians to focus treatment of NPD on an underlying negative self-esteem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Studying the Structure of Condensables in Jupiter's 24° N Jet

    NASA Astrophysics Data System (ADS)

    Flom, Abigail; Sankar, Ramanakumar; Palotai, Csaba J.; Dowling, Timothy E.

    2017-10-01

    Simulations of the atmospheres of Jovian planets can be used to check our current understanding of the physics of their atmospheres. Such studies have been performed in the past, but the development of cloud microphysics models allows us to gain new insight into how clouds form and behave in areas of interest. This study conducts high-resolution cloudy simulations of the 24 degree north high-speed jet for a period of 200 days. The models were created using the Explicit Planetary Isentropic-Coordinate (EPIC) general circulation model (Dowling et al. 1998, 2006), which includes a full hydrological cycle for multiple condensable species (Palotai and Dowling 2008, Palotai et al. 2016). This builds on work presented by our group last year at DPS. The simulations were again run under various conditions in order to test which parameters lead to stable simulations. These results help describe which physical parameters can lead to stable high-speed jets and how water and ammonia behave within these features. References: [1] T. Dowling, A. Fischer, P. Gierasch, J. Harrington, R. LeBeau, and C. Santori. The Explicit Planetary Isentropic-Coordinate (EPIC) atmospheric model. Icarus, 1998. [2] T. E. Dowling, M. E. Bradley, E. Colon, J. Kramer, R. P. LeBeau, G. C. H. Lee, T. I. Mattox, R. Morales-Juberias, C. J. Palotai, V. K. Parimi, and A. P. Showman. The EPIC atmospheric model with an isentropic/terrain-following hybrid vertical coordinate. Icarus, 182:259-273, May 2006. [3] C. Palotai and T. E. Dowling. Addition of water and ammonia cloud microphysics to the EPIC model. Icarus, 2008. [4] C. J. Palotai, R. P. Le Beau, R. Shankar, A. Flom, J. Lashley, and T. McCabe. A cloud microphysics model for the gas giant planets. In AAS/Division for Planetary Sciences Meeting Abstracts, 2016.

  19. Modeling the oxidation of ebselen and other organoselenium compounds using explicit solvent networks.

    PubMed

    Bayse, Craig A; Antony, Sonia

    2009-05-14

    The oxidation of dimethylselenide, dimethyldiselenide, S-methylselenenyl-methylmercaptan, and truncated and full models of ebselen (N-phenyl-1,2-benzisoselenazol-3(2H)-one) by methyl hydrogen peroxide has been modeled using density functional theory (DFT) and solvent-assisted proton exchange (SAPE), a method of microsolvation that employs explicit solvent networks to facilitate proton transfer reactions. The calculated activation barriers for these systems were substantially lower in energy (ΔG‡ + ΔG(solv) = 13 to 26 kcal/mol) than models that neglect the participation of solvent in proton exchange. The comparison of two- and three-water SAPE networks showed a reduction in the strain in the model system but without a substantial reduction in the activation barriers. Truncating the ebselen model to N-methylisoselenazol-3(2H)-one gave a larger activation barrier than ebselen or N-methyl-1,2-benzisoselenazol-3(2H)-one but provided an efficient means of determining an initial guess for larger transition-state models. The similar barriers obtained for ebselen and Me2Se2 (ΔG‡ + ΔG(solv) = 20.65 and 20.40 kcal/mol, respectively) were consistent with experimentally determined rate constants. The activation barrier for MeSeSMe (ΔG‡ + ΔG(solv) = 21.25 kcal/mol) was similar to that of ebselen and Me2Se2 despite its significantly lower experimental rate for oxidation of an ebselen selenenyl sulfide by hydrogen peroxide relative to ebselen and ebselen diselenide. The disparity is attributed to intramolecular Se-O interactions, which decrease the nucleophilicity of the selenium center of the selenenyl sulfide.

  20. Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.

    PubMed

    Griffith, Daniel A; Peres-Neto, Pedro R

    2006-10-01

    Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to incorporate spatial predictors explicitly. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows the equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
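    To make the "eigenfunctions of spatial configuration matrices" concrete, the sketch below builds topology-based spatial filters in the spirit of the CB approach: eigenvectors of the doubly centred connectivity matrix are appended as extra predictors in an ordinary regression. Coordinates, the neighbour rule, and the response are all fabricated for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      # Fabricated site coordinates and a binary connectivity matrix (neighbours within a distance cutoff).
      coords = rng.uniform(0, 10, size=(40, 2))
      dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      W = ((dist > 0) & (dist < 2.5)).astype(float)

      # Eigenvectors of the doubly centred connectivity matrix serve as spatial predictors.
      n = len(W)
      C = np.eye(n) - np.ones((n, n)) / n
      eigval, eigvec = np.linalg.eigh(C @ W @ C)
      order = np.argsort(eigval)[::-1]
      spatial_filters = eigvec[:, order[:5]]          # keep the leading positive-eigenvalue vectors

      # Append the filters to an environmental predictor and fit by ordinary least squares.
      x_env = rng.normal(size=n)
      y = 2.0 + 1.5 * x_env + 3.0 * spatial_filters[:, 0] + rng.normal(scale=0.5, size=n)
      X = np.column_stack([np.ones(n), x_env, spatial_filters])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("estimated coefficients:", np.round(beta, 2))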

  1. Quasi-steady-state analysis of coupled flashing ratchets.

    PubMed

    Levien, Ethan; Bressloff, Paul C

    2015-10-01

    We perform a quasi-steady-state (QSS) reduction of a flashing ratchet to obtain a Brownian particle in an effective potential. The resulting system is analytically tractable and yet preserves essential dynamical features of the full model. We first use the QSS reduction to derive an explicit expression for the velocity of a simple two-state flashing ratchet. In particular, we determine the relationship between perturbations from detailed balance, which are encoded in the transition rates of the flashing ratchet, and a tilted periodic potential. We then perform a QSS analysis of a pair of elastically coupled flashing ratchets, which reduces to a Brownian particle moving in a two-dimensional vector field. We suggest that the fixed points of this vector field accurately approximate the metastable spatial locations of the coupled ratchets, which are, in general, impossible to identify from the full system.
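    A rough numerical companion to the QSS picture above: an overdamped Brownian particle simulated in an effective tilted periodic potential, with the mean drift velocity estimated from the trajectory. The potential shape, tilt, and noise strength are arbitrary stand-ins, not quantities derived from the paper's reduction.

      import numpy as np

      rng = np.random.default_rng(3)

      def force(x, barrier=1.0, tilt=0.3, period=1.0):
          """-dV/dx for V(x) = barrier*sin(2*pi*x/period) - tilt*x (a tilted periodic potential)."""
          return -barrier * (2.0 * np.pi / period) * np.cos(2.0 * np.pi * x / period) + tilt

      # Euler-Maruyama integration of the overdamped Langevin equation dx = F(x) dt + sqrt(2 D dt) * xi.
      D, dt, n_steps = 0.2, 1e-3, 200_000
      x = 0.0
      for _ in range(n_steps):
          x += force(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal()

      print(f"estimated mean velocity: {x / (n_steps * dt):.3f}")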

  2. A jellium model of a catalyst particle in carbon nanotube growth

    NASA Astrophysics Data System (ADS)

    Artyukhov, Vasilii I.; Liu, Mingjie; Penev, Evgeni S.; Yakobson, Boris I.

    2017-06-01

    We show how a jellium model can represent a catalyst particle within density-functional theory based approaches to the growth mechanism of carbon nanotubes (CNTs). The advantage of jellium is an abridged, less computationally taxing description of the multi-atom metal particle, while at the same time avoiding the uncertainty of selecting a particular atomic geometry for either a solid or an ever-changing liquid catalyst particle. A careful choice of the jellium sphere size and its electron density as a descriptive parameter allows one to calculate CNT-metal interface energies close to those of explicit, fully atomistic models. Further, we show that using jellium permits computing and comparing the formation of topological defects (sole pentagons or heptagons, the culprits of growth termination) as well as pentagon-heptagon pairs 5|7 (known as chirality-switching dislocations).

  3. The Effects of Explicit Teaching of Strategies, Second-Order Concepts, and Epistemological Underpinnings on Students' Ability to Reason Causally in History

    ERIC Educational Resources Information Center

    Stoel, Gerhard L.; van Drie, Jannet P.; van Boxtel, Carla A. M.

    2017-01-01

    This article reports an experimental study on the effects of explicit teaching on 11th grade students' ability to reason causally in history. Underpinned by the model of domain learning, explicit teaching is conceptualized as multidimensional, focusing on strategies and second-order concepts to generate and verbalize causal explanations and…

  4. Investigating the predictive validity of implicit and explicit measures of motivation on condom use, physical activity and healthy eating.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2012-01-01

    The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.

  5. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical properties. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was applied, and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was good, and the model was able to predict the type of coffee at the two particle sizes with relatively high R2 prediction values. The prediction also resulted in low bias and RMSEP values.
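    A brief sketch of the "explicit" setup described above using scikit-learn's PLSRegression on a multi-column Y (coffee type plus particle size). The spectra here are random placeholders, and the component count and the 0/1 class coding are assumptions rather than the study's settings.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)

      # Placeholder data: 220 samples x 700 wavelengths, a binary coffee type, and a particle size (um).
      n_samples, n_wavelengths = 220, 700
      X = rng.normal(size=(n_samples, n_wavelengths))
      coffee_type = rng.integers(0, 2, size=n_samples)           # 0 = non-civet, 1 = civet (assumed coding)
      particle_size = rng.choice([212.0, 500.0], size=n_samples)
      Y = np.column_stack([coffee_type, particle_size])          # PLS2: spectra in X, type + size in Y

      pls2 = PLSRegression(n_components=10)
      pls2.fit(X, Y)
      Y_hat = pls2.predict(X)
      predicted_type = (Y_hat[:, 0] > 0.5).astype(int)           # threshold the class column for discrimination
      print("apparent classification accuracy:", (predicted_type == coffee_type).mean())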

  6. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    NASA Astrophysics Data System (ADS)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

    An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows one to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.

  7. Age effects on explicit and implicit memory

    PubMed Central

    Ward, Emma V.; Berry, Christopher J.; Shanks, David R.

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942

  8. High-Order/Low-Order methods for ocean modeling

    DOE PAGES

    Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...

    2015-06-01

    In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
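    As a toy illustration of the implicit-explicit idea mentioned above, the sketch below advances a scalar ODE with a stiff linear term (standing in for the fast barotropic dynamics) treated implicitly and a slow nonlinear term treated explicitly, allowing steps far larger than the fast time scale. It is a generic first-order IMEX scheme, not the HOLO formulation itself.

      import numpy as np

      # du/dt = -lam * u + f(u): stiff linear part (fast) handled implicitly, slow part explicitly.
      lam = 1.0e3

      def slow_forcing(u):
          return np.sin(u)               # slow, nonstiff term (illustrative)

      def imex_euler(u0, dt, n_steps):
          u = u0
          for _ in range(n_steps):
              # (u_new - u) / dt = -lam * u_new + f(u)  =>  solve the linear implicit part exactly.
              u = (u + dt * slow_forcing(u)) / (1.0 + dt * lam)
          return u

      # A timestep hundreds of times larger than 1/lam remains stable with the IMEX splitting.
      print(imex_euler(u0=1.0, dt=0.5, n_steps=100))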

  9. Explicit robust schemes for implementation of a class of principal value-based constitutive models: Symbolic and numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
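    In the same spirit as the symbolic derivation described above (though using SymPy rather than MACSYMA), the sketch below differentiates a one-term Ogden-type strain-energy function with respect to the principal stretches and generates a plain numerical function for the resulting stress-like terms; the specific energy form and parameter values are illustrative, not the paper's constitutive model.

      import sympy as sp

      # Principal stretches and one-term Ogden parameters (illustrative).
      l1, l2, l3, mu, alpha = sp.symbols("lambda1 lambda2 lambda3 mu alpha", positive=True)
      W = (mu / alpha) * (l1**alpha + l2**alpha + l3**alpha - 3)

      # Derivatives of the strain energy with respect to the principal stretches.
      dW = [sp.simplify(sp.diff(W, l)) for l in (l1, l2, l3)]
      print(dW[0])   # mu*lambda1**(alpha - 1)

      # Generate a numerical function (a stand-in for the automatic FORTRAN code generation step).
      dW_fn = sp.lambdify((l1, l2, l3, mu, alpha), dW, "numpy")
      print(dW_fn(1.2, 0.95, 0.95, 1.0, 2.0))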

  10. Prediction of pilot reserve attention capacity during air-to-air target tracking

    NASA Technical Reports Server (NTRS)

    Onstott, E. D.; Faulkner, W. H.

    1977-01-01

    Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.
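    A schematic of the attention-allocation rule described above: at each step the modeled pilot attends only to the task with the highest urgency, where the vertical urgency is the error magnitude, the horizontal urgency is a nonlinear function of error and error rate, and the side task has constant urgency. The functional forms and numbers are placeholders, not the paper's calibrated model.

      import numpy as np

      def urgencies(vert_err, horiz_err, horiz_err_rate, side_task_urgency=0.4):
          """Return task urgencies; only the most urgent task receives attention this step."""
          u_vertical = abs(vert_err)
          u_horizontal = np.hypot(horiz_err, 0.5 * horiz_err_rate)   # assumed nonlinear error/error-rate measure
          return {"vertical": u_vertical, "horizontal": u_horizontal, "side task": side_task_urgency}

      # One illustrative snapshot of the allocation decision.
      tasks = urgencies(vert_err=0.2, horiz_err=0.6, horiz_err_rate=-0.8)
      attended = max(tasks, key=tasks.get)
      print(f"attend to: {attended} (urgencies: {tasks})")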

  11. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    PubMed

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but they also hold identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records, with secure pseudonym generation, full-text indexing, and a consent-to-contact process, is possible and practical using entirely free and open-source software.
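    The abstract notes that CRATE maps patient identifiers to research pseudonyms with public, secure cryptographic methods. A minimal sketch of one such keyed mapping (HMAC-SHA-256) is shown below; the key handling and identifier format are assumptions for illustration and do not reproduce CRATE's actual implementation.

      import hmac
      import hashlib

      def pseudonymise(patient_id: str, secret_key: bytes) -> str:
          """Deterministically map a patient identifier to a research identifier with a keyed hash."""
          digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
          return digest.hexdigest()[:16]   # truncated for readability; a real system may keep the full digest

      key = b"example-secret-key-kept-outside-the-research-db"   # hypothetical key management
      print(pseudonymise("NHS-1234567890", key))
      print(pseudonymise("NHS-1234567890", key))   # same input, same key -> same pseudonym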

  12. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.

  13. Innovations in individual feature history management - The significance of feature-based temporal model

    USGS Publications Warehouse

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent changes in both space and theme independently. The proposed model modifies the ISO temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationships, together with the ISO temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationships can enhance query performance on feature history by removing topological comparisons during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The results of temporal queries on individual feature histories show the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  14. Labeling and Knowing: A Reconciliation of Implicit Theory and Explicit Theory among Students with Exceptionalities

    ERIC Educational Resources Information Center

    lo, C. Owen

    2014-01-01

    Using a realist grounded theory method, this study resulted in a theoretical model and 4 propositions. As displayed in the LINK model, the labeling practice is situated in and endorsed by a social context that carries explicit theory about and educational policies regarding the labels. Taking a developmental perspective, the labeling practice…

  15. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle between wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite-difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain, including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, to compromise between efficiency (a longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A modified Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  16. The mixed impact of medical school on medical students' implicit and explicit weight bias.

    PubMed

    Phelan, Sean M; Puhl, Rebecca M; Burke, Sara E; Hardeman, Rachel; Dovidio, John F; Nelson, David B; Przedworski, Julia; Burgess, Diana J; Perry, Sylvia; Yeazel, Mark W; van Ryn, Michelle

    2015-10-01

    Health care trainees demonstrate implicit (automatic, unconscious) and explicit (conscious) bias against people from stigmatised and marginalised social groups, which can negatively influence communication and decision making. Medical schools are well positioned to intervene and reduce bias in new physicians. This study was designed to assess medical school factors that influence change in implicit and explicit bias against individuals from one stigmatised group: people with obesity. This was a prospective cohort study of medical students enrolled at 49 US medical schools randomly selected from all US medical schools within the strata of public and private schools and region. Participants were 1795 medical students surveyed at the beginning of their first year and end of their fourth year. Web-based surveys included measures of weight bias, and medical school experiences and climate. Bias change was compared with changes in bias in the general public over the same period. Linear mixed models were used to assess the impact of curriculum, contact with people with obesity, and faculty role modelling on weight bias change. Increased implicit and explicit biases were associated with less positive contact with patients with obesity and more exposure to faculty role modelling of discriminatory behaviour or negative comments about patients with obesity. Increased implicit bias was associated with training in how to deal with difficult patients. On average, implicit weight bias decreased and explicit bias increased during medical school, over a period of time in which implicit weight bias in the general public increased and explicit bias remained stable. Medical schools may reduce students' weight biases by increasing positive contact between students and patients with obesity, eliminating unprofessional role modelling by faculty members and residents, and altering curricula focused on treating difficult patients. © 2015 John Wiley & Sons Ltd.

  17. Explicit and implicit learning: The case of computer programming

    NASA Astrophysics Data System (ADS)

    Mancy, Rebecca

    The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.

  18. Cluster-Continuum Calculations of Hydration Free Energies of Anions and Group 12 Divalent Cations.

    PubMed

    Riccardi, Demian; Guo, Hao-Bo; Parks, Jerry M; Gu, Baohua; Liang, Liyuan; Smith, Jeremy C

    2013-01-08

    Understanding aqueous phase processes involving group 12 metal cations is relevant to both environmental and biological sciences. Here, quantum chemical methods and polarizable continuum models are used to compute the hydration free energies of a series of divalent group 12 metal cations (Zn(2+), Cd(2+), and Hg(2+)) together with Cu(2+) and the anions OH(-), SH(-), Cl(-), and F(-). A cluster-continuum method is employed, in which gas-phase clusters of the ion and explicit solvent molecules are immersed in a dielectric continuum. Two approaches to define the size of the solute-water cluster are compared, in which the number of explicit waters used is either held constant or determined variationally as that of the most favorable hydration free energy. Results obtained with various polarizable continuum models are also presented. Each leg of the relevant thermodynamic cycle is analyzed in detail to determine how different terms contribute to the observed mean signed error (MSE) and the standard deviation of the error (STDEV) between theory and experiment. The use of a constant number of water molecules for each set of ions is found to lead to predicted relative trends that benefit from error cancellation. Overall, the best results are obtained with MP2 and the Solvent Model D polarizable continuum model (SMD), with eight explicit water molecules for anions and 10 for the metal cations, yielding a STDEV of 2.3 kcal mol(-1) and MSE of 0.9 kcal mol(-1) between theoretical and experimental hydration free energies, which range from -72.4 kcal mol(-1) for SH(-) to -505.9 kcal mol(-1) for Cu(2+). Using B3PW91 with DFT-D3 dispersion corrections (B3PW91-D) and SMD yields a STDEV of 3.3 kcal mol(-1) and MSE of 1.6 kcal mol(-1), to which adding MP2 corrections from smaller divalent metal cation water molecule clusters yields very good agreement with the full MP2 results. Using B3PW91-D and SMD, with two explicit water molecules for anions and six for divalent metal cations, also yields reasonable agreement with experimental values, due in part to fortuitous error cancellation associated with the metal cations. Overall, the results indicate that the careful application of quantum chemical cluster-continuum methods provides valuable insight into aqueous ionic processes that depend on both local and long-range electrostatic interactions with the solvent.
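
    As a rough illustration of the cluster-continuum bookkeeping described above (not the authors' exact workflow), the hydration free energy of an ion can be assembled leg by leg from a thermodynamic cycle in which n explicit waters are first bound to the ion in the gas phase and the resulting cluster is then immersed in the dielectric continuum; standard-state and concentration corrections are omitted and all numbers are placeholders.

        # Leg-by-leg cycle sum for a hypothetical anion X (units: kcal/mol, placeholder values).
        n_waters = 8                    # explicit waters in the cluster
        dG_gas_binding = -95.0          # X(g) + n H2O(g) -> X(H2O)n(g), gas-phase binding
        dG_solv_cluster = -60.0         # continuum solvation of the whole cluster
        dG_solv_water = -6.3            # continuum solvation of a single water

        # Cycle estimate of the bare-ion hydration free energy (corrections omitted):
        dG_hyd = dG_gas_binding + dG_solv_cluster - n_waters * dG_solv_water
        print(f"estimated hydration free energy: {dG_hyd:.1f} kcal/mol")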

  19. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such cloud-scale gravity waves to be resolved explicitly. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from less than 2000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud resolving model is now at a stage where it can provide reasonably accurate statistical information of the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.

  20. Modelling tidewater glacier calving: from detailed process models to simple calving laws

    NASA Astrophysics Data System (ADS)

    Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh

    2017-04-01

    The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.

  1. Testing the cognitive catalyst model of rumination with explicit and implicit cognitive content.

    PubMed

    Sova, Christopher C; Roberts, John E

    2018-06-01

    The cognitive catalyst model posits that rumination and negative cognitive content, such as negative schema, interact to predict depressive affect. Past research has found support for this model using explicit measures of negative cognitive content such as self-report measures of trait self-esteem and dysfunctional attitudes. The present study tested whether these findings would extend to implicit measures of negative cognitive content such as implicit self-esteem, and whether effects would depend on initial mood state and history of depression. Sixty-one undergraduate students selected on the basis of depression history (27 previously depressed; 34 never depressed) completed explicit and implicit measures of negative cognitive content prior to random assignment to a rumination induction followed by a distraction induction or vice versa. Dysphoric affect was measured both before and after these inductions. Analyses revealed that explicit measures, but not implicit measures, interacted with rumination to predict change in dysphoric affect, and these interactions were further moderated by baseline levels of dysphoria. Limitations include the small nonclinical sample and use of a self-report measure of depression history. These findings suggest that rumination amplifies the association between explicit negative cognitive content and depressive affect primarily among people who are already experiencing sad mood. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Effect of explicit dimension instruction on speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd

    2015-01-01

    Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension, the pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, and instructions diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously underweighted pitch direction dimension. PMID:26542400

  3. The explicit and implicit dance in psychoanalytic change.

    PubMed

    Fosshage, James L

    2004-02-01

    How the implicit/non-declarative and explicit/declarative cognitive domains interact is centrally important in the consideration of effecting change within the psychoanalytic arena. Stern et al. (1998) declare that long-lasting change occurs in the domain of implicit relational knowledge. In the view of this author, the implicit and explicit domains are intricately intertwined in an interactive dance within a psychoanalytic process. The author views that a spirit of inquiry (Lichtenberg, Lachmann & Fosshage 2002) serves as the foundation of the psychoanalytic process. Analyst and patient strive to explore, understand and communicate and, thereby, create a 'spirit' of interaction that contributes, through gradual incremental learning, to new implicit relational knowledge. This spirit, as part of the implicit relational interaction, is a cornerstone of the analytic relationship. The 'inquiry' more directly brings explicit/declarative processing to the foreground in the joint attempt to explore and understand. The spirit of inquiry in the psychoanalytic arena highlights both the autobiographical scenarios of the explicit memory system and the mental models of the implicit memory system as each contributes to a sense of self, other, and self with other. This process facilitates the extrication and suspension of the old models, so that new models based on current relational experience can be gradually integrated into both memory systems for lasting change.

  4. Integrating remote sensing and spatially explicit epidemiological modeling

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Knox, Allyn; Bertuzzo, Enrico; Mari, Lorenzo; Bompangue, Didier; Gatto, Marino; Rinaldo, Andrea

    2015-04-01

    Spatially explicit epidemiological models are a crucial tool for the prediction of epidemiological patterns in time and space as well as for the allocation of health care resources. In addition they can provide valuable information about epidemiological processes and allow for the identification of environmental drivers of the disease spread. Most epidemiological models rely on environmental data as inputs. These data can either be measured in the field by means of conventional instruments or obtained using remote sensing techniques that measure suitable proxies of the variables of interest. The latter offer several advantages over conventional methods, including data availability, which can be an issue especially in developing countries, and spatial as well as temporal resolution of the data, which is particularly crucial for spatially explicit models. Here we present the case study of a spatially explicit, semi-mechanistic model applied to recurring cholera outbreaks in the Lake Kivu area (Democratic Republic of the Congo). The model describes the cholera incidence in eight health zones on the shore of the lake. Remotely sensed datasets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers. Human mobility and its effect on the disease spread are also taken into account. Several model configurations are tested on a data set of reported cases. The best models, accounting for different environmental drivers, and selected using the Akaike information criterion, are formally compared via cross validation. The best performing model accounts for seasonality, El Niño Southern Oscillation, precipitation and human mobility.
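
    The model-selection step described above (Akaike information criterion, followed by formal cross-validation of the best candidates) can be summarised in a few lines; the candidate names, log-likelihoods and parameter counts below are placeholders rather than the paper's fitted values.

        # AIC = 2k - 2*lnL; lower is better. Placeholder fits for candidate driver sets.
        candidates = {
            "rainfall_only":              {"lnL": -412.3, "k": 4},
            "rainfall_chlorophyll":       {"lnL": -401.8, "k": 5},
            "rainfall_chl_enso_mobility": {"lnL": -396.2, "k": 7},
        }
        aic = {name: 2 * m["k"] - 2 * m["lnL"] for name, m in candidates.items()}
        for name, value in sorted(aic.items(), key=lambda kv: kv[1]):
            print(f"{name:30s} AIC = {value:.1f}")
        print("best candidate for cross-validation:", min(aic, key=aic.get))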

  5. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madronich, Sasha

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  6. Quantum SU(2|1) supersymmetric Calogero-Moser spinning systems

    NASA Astrophysics Data System (ADS)

    Fedoruk, Sergey; Ivanov, Evgeny; Lechtenfeld, Olaf; Sidorov, Stepan

    2018-04-01

    SU(2|1) supersymmetric multi-particle quantum mechanics with additional semi-dynamical spin degrees of freedom is considered. In particular, we provide an N=4 supersymmetrization of the quantum U(2) spin Calogero-Moser model, with an intrinsic mass parameter coming from the centrally-extended superalgebra \widehat{su}(2|1). The full system admits an SU(2|1) covariant separation into the center-of-mass sector and the quotient. We derive explicit expressions for the classical and quantum SU(2|1) generators in both sectors as well as for the total system, and we determine the relevant energy spectra, degeneracies, and the sets of physical states.

  7. Surface plasmons for doped graphene

    NASA Astrophysics Data System (ADS)

    Bordag, M.; Pirozhenko, I. G.

    2015-04-01

    Within the Dirac model for the electronic excitations of graphene, we calculate the full polarization tensor with finite mass and chemical potential. It has, besides the (00)-component, a second form factor, which must be accounted for. We obtain explicit formulas for both form factors and for the reflection coefficients. Using these, we discuss the regions in the momentum-frequency plane where plasmons may exist and give numerical solutions for the plasmon dispersion relations. It turns out that plasmons exist for both transverse electric (TE) and transverse magnetic (TM) polarizations over the whole range of the ratio of mass to chemical potential, except for zero chemical potential, where only a TE plasmon exists.

  8. AGU Climate Scientists Offer Question-and-Answer Service for Media

    NASA Astrophysics Data System (ADS)

    Jackson, Stacy

    2010-03-01

    In fall 2009, AGU launched a member-driven pilot project to improve the accuracy of climate science coverage in the media and to improve public understanding of climate science. The project's goal was to increase the accessibility of climate science experts to journalists across the full spectrum of media outlets. As a supplement to the traditional one-to-one journalist-expert relationship model, the project tested the novel approach of providing a question-and-answer (Q&A) service with a pool of expert scientists and a Web-based interface with journalists. Questions were explicitly limited to climate science to maintain a nonadvocacy, nonpartisan perspective.

  9. Performance of hashed cache data migration schemes on multicomputers

    NASA Technical Reports Server (NTRS)

    Hiranandani, Seema; Saltz, Joel; Mehrotra, Piyush; Berryman, Harry

    1991-01-01

    After conducting an examination of several data-migration mechanisms which permit an explicit and controlled mapping of data to memory, a set of schemes for storage and retrieval of off-processor array elements is experimentally evaluated and modeled. All schemes considered have their basis in the use of hash tables for efficient access of nonlocal data. The techniques in question are those of hashed cache, partial enumeration, and full enumeration; in these, nonlocal data are stored in hash tables, so that the operative difference lies in the amount of memory used by each scheme and in the retrieval mechanism used for nonlocal data.
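
    The common ingredient of the schemes compared above, a hash table keyed by global array index, can be sketched in a few lines; the remote-fetch callback below is a placeholder for the actual interprocessor communication layer.

        # Minimal hashed-cache sketch: off-processor array elements are kept in a hash
        # table keyed by global index; a miss triggers a (placeholder) remote fetch.
        class HashedCache:
            def __init__(self, fetch_remote):
                self.table = {}                  # global index -> cached value
                self.fetch_remote = fetch_remote

            def get(self, global_index):
                if global_index not in self.table:               # cache miss
                    self.table[global_index] = self.fetch_remote(global_index)
                return self.table[global_index]

        # Stand-in for off-processor retrieval; real code would issue a message instead.
        cache = HashedCache(fetch_remote=lambda i: float(i) * 0.5)
        print(cache.get(1042), cache.get(1042))  # second access is served from the table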

  10. The full spectrum of AdS5/CFT4 I: representation theory and one-loop Q-system

    NASA Astrophysics Data System (ADS)

    Marboe, Christian; Volin, Dmytro

    2018-04-01

    With the formulation of the quantum spectral curve for the AdS5/CFT4 integrable system, it became potentially possible to compute its full spectrum with high efficiency. This is the first paper in a series devoted to the explicit design of such computations, with no restrictions to particular subsectors being imposed. We revisit the representation theoretical classification of possible states in the spectrum and map the symmetry multiplets to solutions of the quantum spectral curve at zero coupling. To this end it is practical to introduce a generalisation of Young diagrams to the case of non-compact representations and define algebraic Q-systems directly on these diagrams. Furthermore, we propose an algorithm to explicitly solve such Q-systems that circumvents the traditional usage of Bethe equations and simplifies the computation effort. For example, our algorithm quickly obtains explicit analytic results for all 495 multiplets that accommodate single-trace operators in N=4 SYM with classical conformal dimension up to 13/2. We plan to use these results as the seed for solving the quantum spectral curve perturbatively to high loop orders in the next paper of the series.

  11. Including resonances in the multiperipheral model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, S.S.; Snider, D.R.; Thomas, G.H.

    1973-10-01

    A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and still is simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one-channel, two-channel, three-channel, and N-channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively.

  12. Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; Ehle, Curt

    2003-01-01

    An explicit material model that uses predicted pressure in the pores of a carbon-cloth phenolic (CCP) composite has been developed. This model is intended to be used within a finite-element model to predict phenomena specific to CCP components of solid-fuel-rocket nozzles subjected to high operating temperatures and to mechanical stresses that can be great enough to cause structural failures. Phenomena that can be predicted with the help of this model include failures of specimens in restrained-thermal-growth (RTG) tests, pocketing erosion, and ply lifting.

  13. Ginzburg criterion for ionic fluids: the effect of Coulomb interactions.

    PubMed

    Patsahan, O

    2013-08-01

    The effect of the Coulomb interactions on the crossover between mean-field and Ising critical behavior in ionic fluids is studied using the Ginzburg criterion. We consider the charge-asymmetric primitive model supplemented by short-range attractive interactions in the vicinity of the gas-liquid critical point. The model without Coulomb interactions exhibiting typical Ising critical behavior is used to calibrate the Ginzburg temperature of the systems comprising electrostatic interactions. Using the collective variables method, we derive a microscopic-based effective Hamiltonian for the full model. We obtain explicit expressions for all the relevant Hamiltonian coefficients within the framework of the same approximation, i.e., the one-loop approximation. Then we consistently calculate the reduced Ginzburg temperature t(G) for both the purely Coulombic model (a restricted primitive model) and the purely nonionic model (a hard-sphere square-well model) as well as for the model parameters ranging between these two limiting cases. Contrary to the previous theoretical estimates, we obtain the reduced Ginzburg temperature for the purely Coulombic model to be about 20 times smaller than for the nonionic model. For the full model including both short-range and long-range interactions, we show that t(G) approaches the value found for the purely Coulombic model when the strength of the Coulomb interactions becomes sufficiently large. Our results suggest a key role of Coulomb interactions in the crossover behavior observed experimentally in ionic fluids as well as confirm the Ising-like criticality in the Coulomb-dominated ionic systems.
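
    For orientation, one commonly quoted textbook form of the Ginzburg criterion (the numerical prefactor depends on convention and is not necessarily the one used in the paper) reads

        \[
          |t| \gg t_G \sim \frac{1}{32\pi^{2}}
          \left( \frac{k_B}{\Delta C \, \xi_0^{3}} \right)^{2},
        \]

    where t = (T - T_c)/T_c is the reduced temperature, \Delta C the mean-field specific-heat jump per unit volume and \xi_0 the bare correlation length; mean-field behaviour is expected only for reduced temperatures well above t_G, which is the quantity computed consistently for the ionic and nonionic models above.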

  14. Effects of prompting and reinforcement of one response pattern upon imitation of a different modeled pattern

    PubMed Central

    Bondy, Andrew S.

    1982-01-01

    Twelve preschool children participated in a study of the effects of explicit training on the imitation of modeled behavior. The responses trained involved a marble-dropping pattern that differed from the modeled pattern. Training consisted of physical prompts and verbal praise during a single session. No prompts or praise were used during test periods. After operant levels of the experimental responses were measured, training either preceded or was interposed within a series of exposures to modeled behavior that differed from the trained behavior. Children who were initially exposed to a modeling session immediately imitated, whereas those children who were initially trained immediately performed the appropriate response. Children initially trained on one pattern generally continued to exhibit that pattern even after many modeling sessions. Children who first viewed the modeled response and then were exposed to explicit training of a different response reversed their response pattern from the trained response to the modeled response within a few sessions. The results suggest that under certain conditions explicit training will exert greater control over responding than immediate modeling stimuli. PMID:16812260

  15. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
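
    The non-Fourier relaxation referred to above is conventionally described by the Cattaneo-Vernotte (hyperbolic) form of the heat conduction equation, written here in its standard textbook form rather than the exact formulation used in the paper:

        \[
          \tau \, \frac{\partial^{2} T}{\partial t^{2}}
          + \frac{\partial T}{\partial t}
          = \alpha \, \nabla^{2} T ,
        \]

    where \tau is the thermal relaxation time and \alpha the thermal diffusivity, so that thermal disturbances propagate with the finite speed \sqrt{\alpha/\tau}; it is the sharp fronts associated with this finite speed that make the control of numerical dissipation offered by the alpha method attractive.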

  16. Uncertainties in SOA Formation from the Photooxidation of α-pinene

    NASA Astrophysics Data System (ADS)

    McVay, R.; Zhang, X.; Aumont, B.; Valorso, R.; Camredon, M.; La, S.; Seinfeld, J.

    2015-12-01

    Explicit chemical models such as GECKO-A (the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) enable detailed modeling of gas-phase photooxidation and secondary organic aerosol (SOA) formation. Comparison between these explicit models and chamber experiments can provide insight into processes that are missing or unknown in these models. GECKO-A is used to model seven SOA formation experiments from α-pinene photooxidation conducted at varying seed particle concentrations with varying oxidation rates. We investigate various physical and chemical processes to evaluate the extent of agreement between the experiments and the model predictions. We examine the effect of vapor wall loss on SOA formation and how the importance of this effect changes at different oxidation rates. Proposed gas-phase autoxidation mechanisms are shown to significantly affect SOA predictions. The potential effects of particle-phase dimerization and condensed-phase photolysis are investigated. We demonstrate the extent to which SOA predictions in the α-pinene photooxidation system depend on uncertainties in the chemical mechanism.

  17. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  18. Spatially explicit modelling of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Finger, F.; Bertuzzo, E.; Mari, L.; Knox, A. C.; Gatto, M.; Rinaldo, A.

    2013-12-01

    Epidemiological models can provide crucial understanding about the dynamics of infectious diseases. Possible applications range from real-time forecasting and allocation of health care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. We apply a spatially explicit model to the cholera epidemic that struck Haiti in October 2010 and is still ongoing. The dynamics of susceptibles as well as symptomatic and asymptomatic infectives are modelled at the scale of local human communities. Dissemination of Vibrio cholerae through hydrological transport and human mobility along the road network is explicitly taken into account, as well as the effect of rainfall as a driver of increasing disease incidence. The model is calibrated using a dataset of reported cholera cases. We further model the long term impact of several types of interventions on the disease dynamics by varying parameters appropriately. Key epidemiological mechanisms and parameters which affect the efficiency of treatments such as antibiotics are identified. Our results lead to conclusions about the influence of different intervention strategies on the overall epidemiological dynamics.
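
    A generic susceptible-infected-bacteria (SIB) skeleton of the kind of cholera model described above, reduced to a single community with made-up parameter values, can be integrated as below; in the spatially explicit model, hydrological transport, human mobility and rainfall enter through additional coupling and time-dependent terms.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Single-community SIB sketch (placeholder parameters, arbitrary units).
        mu, beta, K, gamma = 1e-4, 0.3, 1e5, 0.2   # demography, exposure rate, half-saturation, recovery
        mu_B, p = 0.1, 10.0                        # bacterial decay and shedding rates

        def sib(t, y):
            S, I, B = y
            force = beta * B / (K + B)             # dose-response force of infection
            dS = mu * (1.0 - S) - force * S
            dI = force * S - (gamma + mu) * I
            dB = -mu_B * B + p * I
            return [dS, dI, dB]

        sol = solve_ivp(sib, (0.0, 365.0), [0.99, 0.01, 0.0], max_step=1.0)
        print("final infected fraction:", sol.y[1, -1])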

  19. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
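
    Schematically, the augmented nonpolar model discussed above evaluates a solvation energy of the form (notation simplified from the paper)

        \[
          G_{\mathrm{np}} \;=\; \gamma A \;+\; p V
          \;+\; \bar{\rho} \int_{\Omega} u^{\mathrm{att}}(\mathbf{r}) \, d^{3}r ,
        \]

    where A is the solvent-accessible surface area, V a solute volume term motivated by scaled-particle theory, and the integral a Weeks-Chandler-Andersen-type attractive dispersion term taken over the solvent region \Omega at bulk solvent density \bar{\rho}.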

  20. Development and assessment of 30-meter pine density maps for landscape-level modeling of mountain pine beetle dynamics

    Treesearch

    Benjamin A. Crabb; James A. Powell; Barbara J. Bentz

    2012-01-01

    Forecasting spatial patterns of mountain pine beetle (MPB) population success requires spatially explicit information on host pine distribution. We developed a means of producing spatially explicit datasets of pine density at 30-m resolution using existing geospatial datasets of vegetation composition and structure. Because our ultimate goal is to model MPB population...

  1. Emergence of a coherent and cohesive swarm based on mutual anticipation

    PubMed Central

    Murakami, Hisashi; Niizato, Takayuki; Gunji, Yukio-Pegio

    2017-01-01

    Collective behavior emerging out of self-organization is one of the most striking properties of an animal group. Typically, it is hypothesized that each individual in an animal group tends to align its direction of motion with those of its neighbors. Most previous models for collective behavior assume an explicit alignment rule, by which an agent matches its velocity with that of neighbors in a certain neighborhood, to reproduce a collective order pattern by simple interactions. Recent empirical studies, however, suggest that there is no evidence for explicit matching of velocity, and that collective polarization arises from interactions other than those that follow the explicit alignment rule. We here propose a new lattice-based computational model that does not incorporate the explicit alignment rule but is based instead on mutual anticipation and asynchronous updating. Moreover, we show that this model can realize densely collective motion with high polarity. Furthermore, we focus on the behavior of a pair of individuals, and find that the turning response is drastically changed depending on the distance between two individuals rather than the relative heading, and is consistent with the empirical observations. Therefore, the present results suggest that our approach provides an alternative model for collective behavior. PMID:28406173

  2. Flory-type theories of polymer chains under different external stimuli

    NASA Astrophysics Data System (ADS)

    Budkov, Yu A.; Kiselev, M. G.

    2018-01-01

    In this Review, we present a critical analysis of various applications of the Flory-type theories to a theoretical description of the conformational behavior of single polymer chains in dilute polymer solutions under a few external stimuli. Different theoretical models of flexible polymer chains in the supercritical fluid are discussed and analysed. Different points of view on the conformational behavior of the polymer chain near the liquid-gas transition critical point of the solvent are presented. A theoretical description of the co-solvent-induced coil-globule transitions within the implicit-solvent-explicit-co-solvent models is discussed. Several explicit-solvent-explicit-co-solvent theoretical models of the coil-to-globule-to-coil transition of the polymer chain in a mixture of good solvents (co-nonsolvency) are analysed and compared with each other. Finally, a new theoretical model of the conformational behavior of the dielectric polymer chain under the external constant electric field in the dilute polymer solution with an explicit account for the many-body dipole correlations is discussed. The polymer chain collapse induced by many-body dipole correlations of monomers in the context of statistical thermodynamics of dielectric polymers is analysed.

  3. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
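
    The splitting idea compared above can be illustrated with the simplest first-order member of the IMEX family (far cruder than the additive Runge-Kutta schemes actually tested): the stiff term is stepped with backward Euler and the non-stiff term with forward Euler.

        import numpy as np

        # Toy problem du/dt = cos(u) + lam*u with a stiff linear decay term.
        lam = -1.0e3                               # stiff rate, treated implicitly
        f_explicit = lambda u: np.cos(u)           # non-stiff part, treated explicitly

        def imex_euler_step(u, dt):
            u_star = u + dt * f_explicit(u)        # explicit (forward Euler) piece
            return u_star / (1.0 - dt * lam)       # implicit (backward Euler) piece, linear solve

        u, dt = 1.0, 0.01                          # dt well beyond the explicit stability limit ~2/|lam|
        for _ in range(1000):
            u = imex_euler_step(u, dt)
        print("u at t = 10:", u)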

  4. Accurate quantification of within- and between-host HBV evolutionary rates requires explicit transmission chain modelling.

    PubMed

    Vrancken, Bram; Suchard, Marc A; Lemey, Philippe

    2017-07-01

    Analyses of virus evolution in known transmission chains have the potential to elucidate the impact of transmission dynamics on the viral evolutionary rate and its difference within and between hosts. Lin et al. (2015, Journal of Virology , 89/7: 3512-22) recently investigated the evolutionary history of hepatitis B virus in a transmission chain and postulated that the 'colonization-adaptation-transmission' model can explain the differential impact of transmission on synonymous and non-synonymous substitution rates. Here, we revisit this dataset using a full probabilistic Bayesian phylogenetic framework that adequately accounts for the non-independence of sequence data when estimating evolutionary parameters. Examination of the transmission chain data under a flexible coalescent prior reveals a general inconsistency between the estimated timings and clustering patterns and the known transmission history, highlighting the need to incorporate host transmission information in the analysis. Using an explicit genealogical transmission chain model, we find strong support for a transmission-associated decrease of the overall evolutionary rate. However, in contrast to the initially reported larger transmission effect on non-synonymous substitution rate, we find a similar decrease in both non-synonymous and synonymous substitution rates that cannot be adequately explained by the colonization-adaptation-transmission model. An alternative explanation may involve a transmission/establishment advantage of hepatitis B virus variants that have accumulated fewer within-host substitutions, perhaps by spending more time in the covalently closed circular DNA state between each round of viral replication. More generally, this study illustrates that ignoring phylogenetic relationships can lead to misleading evolutionary estimates.

  5. The Construction of Visual-spatial Situation Models in Children's Reading and Their Relation to Reading Comprehension

    PubMed Central

    Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.

    2014-01-01

    Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376

  6. High order spectral volume and spectral difference methods on unstructured grids

    NASA Astrophysics Data System (ADS)

    Kannan, Ravishekar

    The spectral volume (SV) and the spectral difference (SD) methods were developed by Wang and Liu and their collaborators for conservation laws on unstructured grids. They were introduced to achieve high-order accuracy in an efficient manner. Recently, these methods were extended to three-dimensional systems and to the Navier Stokes equations. The simplicity and robustness of these methods have made them competitive against other higher order methods such as the discontinuous Galerkin and residual distribution methods. Although explicit TVD Runge-Kutta schemes for the temporal advancement are easy to implement, they suffer from small time step limited by the Courant-Friedrichs-Lewy (CFL) condition. When the polynomial order is high or when the grid is stretched due to complex geometries or boundary layers, the convergence rate of explicit schemes slows down rapidly. Solution strategies to remedy this problem include implicit methods and multigrid methods. A novel implicit lower-upper symmetric Gauss-Seidel (LU-SGS) relaxation method is employed as an iterative smoother. It is compared to the explicit TVD Runge-Kutta smoothers. For some p-multigrid calculations, combining implicit and explicit smoothers for different p-levels is also studied. The multigrid method considered is nonlinear and uses Full Approximation Scheme (FAS). An overall speed-up factor of up to 150 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single level explicit method for the Euler equations for the 3rd order SD method. A study of viscous flux formulations was carried out for the SV method. Three formulations were used to discretize the viscous fluxes: local discontinuous Galerkin (LDG), a penalty method and the 2nd method of Bassi and Rebay. Fourier analysis revealed some interesting advantages for the penalty method. These were implemented in the Navier Stokes solver. An implicit and p-multigrid method was also implemented for the above. An overall speed-up factor of up to 1500 is obtained using a three-level p-multigrid LU-SGS approach in comparison with the single level explicit method for the Navier-Stokes equations. The SV method was also extended to turbulent flows. The RANS based SA model was used to close the Reynolds stresses. The numerical results are very promising and indicate that the approaches have great potentials for 3D flow problems.
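
    For reference, the explicit TVD (strong-stability-preserving) third-order Runge-Kutta update whose CFL-limited step motivates the implicit LU-SGS and p-multigrid work is the standard Shu-Osher scheme, sketched here for a generic semi-discrete residual L(u).

        import numpy as np

        def ssprk3_step(u, dt, L):
            """One Shu-Osher SSP-RK3 (TVD RK3) step for du/dt = L(u)."""
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

        # Toy usage: periodic 1-D linear advection with a first-order upwind residual.
        def upwind_residual(u, dx=0.01, a=1.0):
            return -a * (u - np.roll(u, 1)) / dx

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)
        for _ in range(100):
            u = ssprk3_step(u, dt=0.005, L=upwind_residual)   # CFL = a*dt/dx = 0.5
        print("peak after advection:", u.max())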

  7. Rapid Response Tools and Datasets for Post-fire Erosion Modeling: Lessons Learned from the Rock House and High Park Fires

    NASA Astrophysics Data System (ADS)

    Miller, Mary Ellen; Elliot, William E.; MacDonald, Lee H.

    2013-04-01

    Once the danger posed by an active wildfire has passed, land managers must rapidly assess the threat from post-fire runoff and erosion due to the loss of surface cover and fire-induced changes in soil properties. Increased runoff and sediment delivery are of great concern to both the public and resource managers. Post-fire assessments and proposals to mitigate these threats are typically undertaken by interdisciplinary Burned Area Emergency Response (BAER) teams. These teams are under very tight deadlines, so they often begin their analysis while the fire is still burning and typically must complete their plans within a couple of weeks. Many modeling tools and datasets have been developed over the years to assist BAER teams, but process-based, spatially explicit models are currently under-utilized relative to simpler, lumped models because they are more difficult to set up and require the preparation of spatially-explicit data layers such as digital elevation models, soils, and land cover. The difficulty of acquiring and utilizing these data layers in spatially-explicit models increases with increasing fire size. Spatially-explicit post-fire erosion modeling was attempted for a small watershed in the 1270 km2 Rock House fire in Texas, but the erosion modeling work could not be completed in time. The biggest limitation was the time required to extract the spatially explicit soils data needed to run the preferred post-fire erosion model (GeoWEPP with Disturbed WEPP parameters). The solution is to have the spatial soil, land cover, and DEM data layers prepared ahead of time, and to have a clear methodology for the BAER teams to incorporate these layers in spatially-explicit modeling interfaces like GeoWEPP. After a fire occurs the data layers can quickly be clipped to the fire perimeter. The soil and land cover parameters can then be adjusted according to the burn severity map, which is one of the first products generated for the BAER teams. Under a previous project for the U.S. Environmental Protection Agency this preparatory work was done for much of Colorado, and in June 2012 the High Park wildfire in north central Colorado burned over 340 km2. The data layers for the entire burn area were quickly assembled and the spatially explicit runoff and erosion modeling was completed in less than three days. The resulting predictions were then used by the BAER team to quantify downstream risks and delineate priority areas for different post-fire treatments. These two contrasting case studies demonstrate the feasibility and the value of preparing datasets and modeling tools ahead of time. In recognition of this, the U.S. National Aeronautics and Space Administration has agreed to fund a pilot project to demonstrate the utility of acquiring and preparing the necessary data layers for fire-prone wildlands across the western U.S. A similar modeling and data acquisition approach could be followed

  8. Explicit and implicit cognition: a preliminary test of a dual-process theory of cognitive vulnerability to depression.

    PubMed

    Haeffel, Gerald J; Abramson, Lyn Y; Brazy, Paige C; Shah, James Y; Teachman, Bethany A; Nosek, Brian A

    2007-06-01

    Two studies were conducted to test a dual-process theory of cognitive vulnerability to depression. According to this theory, implicit and explicit cognitive processes have differential effects on depressive reactions to stressful life events. Implicit processes are hypothesized to be critical in determining an individual's immediate affective reaction to stress whereas explicit cognitions are thought to be more involved in long-term depressive reactions. Consistent with hypotheses, the results of study 1 (cross-sectional; N=237) showed that implicit, but not explicit, cognitions predicted immediate affective reactions to a lab stressor. Study 2 (longitudinal; N=251) also supported the dual-process model of cognitive vulnerability to depression. Results showed that both the implicit and explicit measures interacted with life stress to predict prospective changes in depressive symptoms, respectively. However, when both implicit and explicit predictors were entered into a regression equation simultaneously, only the explicit measure interacted with stress to remain a unique predictor of depressive symptoms over the five-week prospective interval.

  9. Comparison of Damage Path Predictions for Composite Laminates by Explicit and Standard Finite Element Analysis Tools

    NASA Technical Reports Server (NTRS)

    Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.

    2006-01-01

    Splitting, ultimate failure load and the damage path in center notched composite specimens subjected to in-plane tension loading are predicted using progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used in determining intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through user written subroutines "VUMAT" and "USDFLD" respectively. A 2-D finite element model is used for predicting the intra-laminar damages. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard code show good agreement with experimental results. The importance of modeling delamination in progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.
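
    One common plane-stress statement of the Hashin-Rotem criteria used in this kind of progressive failure analysis (strength allowables and signs simplified; not the actual VUMAT/USDFLD implementation) is sketched below.

        def hashin_rotem_2d(s11, s22, t12, Xt, Xc, Yt, Yc, S):
            """Return (fiber_failed, matrix_failed) for a plane-stress ply stress state."""
            # Fiber mode: longitudinal stress checked against tensile/compressive allowables.
            fiber = (s11 / Xt >= 1.0) if s11 >= 0.0 else (-s11 / Xc >= 1.0)
            # Matrix mode: quadratic interaction of transverse normal and in-plane shear stress.
            Y = Yt if s22 >= 0.0 else Yc
            matrix = (s22 / Y) ** 2 + (t12 / S) ** 2 >= 1.0
            return fiber, matrix

        # Example ply state (MPa) with illustrative strength values:
        print(hashin_rotem_2d(s11=1200.0, s22=30.0, t12=60.0,
                              Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=200.0, S=70.0))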

  10. Group-based differences in anti-aging bias among medical students.

    PubMed

    Ruiz, Jorge G; Andrade, Allen D; Anam, Ramanakumar; Taldone, Sabrina; Karanam, Chandana; Hogue, Christie; Mintzer, Michael J

    2015-01-01

    Medical students (MS) may develop ageist attitudes early in their training that may predict their future avoidance of caring for the elderly. This study sought to determine MS' patterns of explicit and implicit anti-aging bias, their intent to practice with older people, and, using the quad model, the role of gender, race, and motivation-based differences. One hundred and three MS completed an online survey that included explicit and implicit measures. Explicit measures revealed a moderately positive perception of older people. Female medical students and those high in internal motivation showed lower anti-aging bias, and both were more likely to intend to practice with older people. Although the implicit measure revealed more negativity toward the elderly than the explicit measures, there were no group differences. However, using the quad model, the authors identified gender, race, and motivation-based differences in controlled and automatic processes involved in anti-aging bias.

  11. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna(®) Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρ(app)), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρ(app) are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρ(app) error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρ(app) from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
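
    The abstract does not give the calibration function itself, but the combination it describes, a generalized logistic term for near-range telescopic effects multiplied by a negative-exponential fall-off with range, can be sketched generically as below; the function names, parameters and the final inversion step are illustrative assumptions only.

        import numpy as np

        def range_response(r, c0, k, r0, b):
            """Illustrative instrument response: logistic near-range growth times exponential fall-off."""
            telescopic = 1.0 / (1.0 + np.exp(-k * (r - r0)))   # generalized-logistic telescopic term
            return c0 * telescopic * np.exp(-b * r)            # intensity fall-off with range

        def apparent_reflectance(intensity, r, params, background=0.0):
            """Invert the fitted response to recover an apparent-reflectance-like quantity."""
            return (intensity - background) / range_response(r, *params)

        params = (2.0e4, 1.5, 3.0, 0.05)                       # placeholder calibration constants
        print(apparent_reflectance(intensity=850.0, r=12.0, params=params))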

  12. Quantification of pathogen inactivation efficacy by free chlorine disinfection of drinking water for QMRA.

    PubMed

    Petterson, S R; Stenström, T A

    2015-09-01

    To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
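
    The central point about residence-time variability can be reproduced with a short numerical sketch: a Chick-Watson-type survival term is averaged over a residence-time distribution instead of being evaluated at the mean contact time (kinetics, distribution and parameter values here are placeholders, not those of the paper).

        import numpy as np

        # Chick-Watson-style inactivation: log10 reduction = k * C * t.
        k_ct = 0.5            # pathogen-specific rate constant, L/(mg*min), placeholder
        C = 0.5               # free chlorine residual, mg/L, placeholder

        # Residence-time distribution of the contactor (gamma-distributed, mean ~30 min, placeholder).
        rng = np.random.default_rng(1)
        t = rng.gamma(shape=4.0, scale=7.5, size=100_000)

        surviving_fraction = np.mean(10.0 ** (-k_ct * C * t))   # survival averaged over the RTD
        lr_effective = -np.log10(surviving_fraction)
        lr_at_mean_t = k_ct * C * t.mean()
        print(f"effective LR = {lr_effective:.2f}, LR evaluated at the mean time = {lr_at_mean_t:.2f}")

    Because short-circuiting parcels dominate the surviving fraction, the effective log reduction is far smaller than the value computed from the mean residence time, which is exactly the behaviour highlighted above.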

  13. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar

    PubMed Central

    Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael

    2016-01-01

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126

  14. Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojtanowicz, A.K.; Kuru, E.

    1993-12-01

    An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for static balance of forces acting on a single PDC cutter and is based on assumed similarity between bit and cutter. The model is fully explicit with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, accuracy of the model's predictions of PDC bit performance is limited primarily by imprecision of bit-dull evaluation. The verification study is based upon the reported laboratory drilling and field drilling tests as well as field data collected by the authors.

  15. A different time and place test of ArcHSI: A spatially explicit habitat model for elk in the Black Hills

    Treesearch

    Mark A. Rumble; Lakhdar Benkobi; R. Scott Gamo

    2007-01-01

    We tested predictions of the spatially explicit ArcHSI habitat model for elk. The distribution of elk relative to proximity of forage and cover differed from that predicted. Elk used areas near primary roads similar to that predicted by the model, but elk were farther from secondary roads. Elk used areas categorized as good (> 0.7), fair (> 0.42 to 0.7), and poor...

  16. Pulsar distances and the galactic distribution of free electrons

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.; Cordes, J. M.

    1993-01-01

    The present quantitative model for Galactic free electron distribution abandons the assumption of axisymmetry and explicitly incorporates spiral arms; their shapes and locations are derived from existing radio and optical observations of H II regions. The Gum Nebula's dispersion-measure contributions are also explicitly modeled. Adjustable quantities are calibrated by reference to three different types of data. The new model is estimated to furnish distance estimates to known pulsars that are accurate to about 25 percent.

  17. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power law-truncated Lévy distributions and show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.

  18. An improved Cauchy number approach for predicting the drag and reconfiguration of flexible vegetation

    NASA Astrophysics Data System (ADS)

    Whittaker, Peter; Wilson, Catherine A. M. E.; Aberle, Jochen

    2015-09-01

    An improved model to describe the drag and reconfiguration of flexible riparian vegetation is proposed. The key improvement over previous models is the use of a refined 'vegetative' Cauchy number to explicitly determine the magnitude and rate of the vegetation's reconfiguration. After being derived from dimensional consideration, the model is applied to two experimental data sets. The first contains high-resolution drag force and physical property measurements for twenty-one foliated and defoliated full-scale trees, including specimens of Alnus glutinosa, Populus nigra and Salix alba. The second data set is independent and of a different scale, consisting of drag force and physical property measurements for natural and artificial branches of willow and poplar, under partially and fully submerged flow conditions. Good agreement between the measured and predicted drag forces is observed for both data sets, especially when compared to a more typical 'rigid' approximation, where the effects of reconfiguration are neglected.
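
    For illustration, the sketch below shows the general shape of a Cauchy-number-based reconfiguration correction to a rigid drag law. The Cauchy-number definition, the Vogel-type exponent psi, and all parameter values are assumptions made for this example; they are not the calibrated formulation of the study above.

```python
import numpy as np

def drag_force(u, rho=1000.0, cd_rigid=1.0, area=0.5,
               stiffness_EI=5.0, length=1.0, psi=-0.8):
    """Rigid drag 0.5*rho*Cd*A*u**2 scaled by a reconfiguration factor.

    The factor depends on a 'vegetative' Cauchy number Ca, the ratio of the
    fluid dynamic load to the plant's elastic restoring force; psi < 0 makes
    drag grow more slowly than u**2 once the plant begins to bend.
    All parameter values here are illustrative, not measured properties.
    """
    f_rigid = 0.5 * rho * cd_rigid * area * u ** 2
    ca = rho * u ** 2 * area * length ** 3 / stiffness_EI  # hypothetical Ca definition
    reconfiguration = ca ** (psi / 2.0) if ca > 1.0 else 1.0
    return f_rigid * reconfiguration

# Beyond Ca = 1 the force grows roughly like u**(2 + psi) instead of u**2.
for u in (0.2, 0.5, 1.0, 2.0):
    print(f"u = {u:.1f} m/s -> F = {drag_force(u):.1f} N")
```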

  19. Continuum Fatigue Damage Modeling for Use in Life Extending Control

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1994-01-01

    This paper develops a simplified continuum (continuous with respect to time, stress, etc.) fatigue damage model for use in Life Extending Control (LEC) studies. The work is based on zero-mean-stress local strain cyclic damage modeling. New nonlinear explicit equation forms of cyclic damage in terms of stress amplitude are derived to facilitate the continuum modeling. Stress-based continuum models are derived. An extension to plastic strain-strain rate models is also presented. Application of these models to LEC is considered. Progress toward a nonzero-mean-stress continuum model is presented, and new nonlinear explicit equation forms in terms of stress amplitude are derived for this case as well.

  20. Importance of spatial autocorrelation in modeling bird distributions at a continental scale

    USGS Publications Warehouse

    Bahn, V.; O'Connor, R.J.; Krohn, W.B.

    2006-01-01

    Spatial autocorrelation in species' distributions has been recognized as inflating the probability of a type I error in hypothesis tests, causing biases in variable selection, and violating the assumption of independence of error terms in models such as correlation or regression. However, it remains unclear whether these problems occur at all spatial resolutions and extents, and under which conditions spatially explicit modeling techniques are superior. Our goal was to determine whether spatial models were superior at large extents and across many different species. In addition, we investigated the importance of purely spatial effects in distribution patterns relative to the variation that could be explained through environmental conditions. We studied distribution patterns of 108 bird species in the conterminous United States using ten years of data from the Breeding Bird Survey. We compared the performance of spatially explicit regression models with non-spatial regression models using Akaike's information criterion. In addition, we partitioned the variance in species distributions into an environmental, a pure spatial and a shared component. The spatially explicit conditional autoregressive regression models strongly outperformed the ordinary least squares regression models. In addition, partialling out the spatial component underlying the species' distributions showed that an average of 17% of the explained variation could be attributed to purely spatial effects independent of the spatial autocorrelation induced by the underlying environmental variables. We concluded that location in the range and neighborhood play an important role in the distribution of species. Spatially explicit models are expected to yield better predictions especially for mobile species such as birds, even in coarse-grained models with a large extent. © Ecography.

  1. LES with and without explicit filtering: comparison and assessment of various models

    NASA Astrophysics Data System (ADS)

    Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele

    2000-11-01

    The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
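
    For reference, the standard textbook forms of the three models named above are sketched below; the filter-width factor and the Smagorinsky constant are generic and need not match the specific formulations tested in this study.

```latex
% Tensor-diffusivity (Leonard/Clark) reconstruction for a filter of width \Delta
% (summation over the repeated index k is implied):
\[
\tau_{ij} \;\approx\; \frac{\Delta^2}{12}\,
  \frac{\partial \bar{u}_i}{\partial x_k}\,
  \frac{\partial \bar{u}_j}{\partial x_k}
\]
% Scale-similarity (Bardina) model, using a second application of the filter:
\[
\tau_{ij} \;\approx\; \overline{\bar{u}_i\,\bar{u}_j} \;-\; \bar{\bar{u}}_i\,\bar{\bar{u}}_j
\]
% Smagorinsky effective-viscosity model for the unreconstructable (dissipative) part:
\[
\tau_{ij}^{\,d} \;\approx\; -2\,(C_s \Delta)^2\,|\bar{S}|\,\bar{S}_{ij},
\qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
\]
```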

  2. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program, and a special-purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
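
    As a minimal illustration of the stability trade-off described above (not of the SPAR, MITAS, or GEAR codes themselves), the sketch below integrates a stiff 1-D conduction system with a forward-Euler (explicit) and a backward-Euler (implicit) step; the mesh, material, and step sizes are arbitrary assumptions.

```python
import numpy as np

# A 1-D rod with fixed zero-temperature ends, discretized into n nodes, gives
# the stiff linear system dT/dt = A @ T; the eigenvalues grow like -4*alpha/dx**2
# as the mesh is refined, which is what limits explicit time steps.
n, alpha, dx, dt, steps = 50, 1e-4, 0.01, 1.0, 60
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * alpha / dx**2
T0 = np.linspace(300.0, 600.0, n)              # initial temperature field [K]

# Explicit (forward Euler): cheap per step, but stable only if
# dt <= dx**2/(2*alpha) = 0.5 s here; the 1.0 s step used below diverges.
T_exp = T0.copy()
for _ in range(steps):
    T_exp = T_exp + dt * (A @ T_exp)

# Implicit (backward Euler): one linear solve per step, stable for any dt.
T_imp = T0.copy()
I = np.eye(n)
for _ in range(steps):
    T_imp = np.linalg.solve(I - dt * A, T_imp)

print("explicit max |T|:", np.max(np.abs(T_exp)))   # grows without bound
print("implicit max |T|:", np.max(np.abs(T_imp)))   # stays bounded
```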

  3. The Importance of Explicitly Representing Soil Carbon with Depth over the Permafrost Region in Earth System Models: Implications for Atmospheric Carbon Dynamics at Multiple Temporal Scales between 1960 and 2300.

    NASA Astrophysics Data System (ADS)

    McGuire, A. D.

    2014-12-01

    We conducted an assessment of changes in permafrost area and carbon storage simulated by process-based models between 1960 and 2300. The models participating in this comparison were those that had joined the model integration team of the Vulnerability of Permafrost Carbon Research Coordination Network (see http://www.biology.ufl.edu/permafrostcarbon/). Each of the models in this comparison conducted simulations over the permafrost land region in the Northern Hemisphere driven by CCSM4-simulated climate for the RCP 4.5 and 8.5 scenarios. Among the models, the area of permafrost (defined as the area for which active layer thickness was less than 3 m) ranged between 13.2 and 20.0 million km². Between 1960 and 2300, models indicated losses of permafrost area of between 5.1 and 6.0 million km² for RCP 4.5 and between 7.1 and 15.2 million km² for RCP 8.5. Among the models, the density of soil carbon storage in 1960 ranged between 13 and 42 thousand g C m⁻²; models that explicitly represented carbon with depth had estimates greater than 27 thousand g C m⁻². For the RCP 4.5 scenario, changes in soil carbon between 1960 and 2300 ranged from losses of 32 Pg C to gains of 58 Pg C, and models that explicitly represent soil carbon with depth simulated losses or lower gains of soil carbon in comparison with those that did not. For the RCP 8.5 scenario, changes in soil carbon between 1960 and 2300 ranged from losses of 642 Pg C to gains of 66 Pg C; models that represent soil carbon explicitly with depth all simulated losses, while those that do not all simulated gains. These results indicate that there are substantial differences in the responses of carbon dynamics between models that do and do not explicitly represent soil carbon with depth in the permafrost region. We present analyses of the implications of these differences for atmospheric carbon dynamics at multiple temporal scales between 1960 and 2300.

  4. A Dual-Process Approach to the Role of Mother's Implicit and Explicit Attitudes toward Their Child in Parenting Models

    ERIC Educational Resources Information Center

    Sturge-Apple, Melissa L.; Rogge, Ronald D.; Skibo, Michael A.; Peltz, Jack S.; Suor, Jennifer H.

    2015-01-01

    Extending dual process frameworks of cognition to a novel domain, the present study examined how mothers' explicit and implicit attitudes about their child may operate in models of parenting. To assess implicit attitudes, two separate studies were conducted using the same child-focused Go/No-go Association Task (GNAT-Child). In Study 1, model…

  5. Calabi-Yau structures on categories of matrix factorizations

    NASA Astrophysics Data System (ADS)

    Shklyarov, Dmytro

    2017-09-01

    Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.

  6. Does Teaching Students How to Explicitly Model the Causal Structure of Systems Improve Their Understanding of These Systems?

    ERIC Educational Resources Information Center

    Jensen, Eva

    2014-01-01

    If students really understood the systems they study, they would be able to tell how changes in the system would affect a result. This demands that the students understand the mechanisms that drive the system's behaviour. The study investigates potential merits of learning how to explicitly model the causal structure of systems. The approach and…

  7. An Explicit Algorithm for the Simulation of Fluid Flow through Porous Media

    NASA Astrophysics Data System (ADS)

    Trapeznikova, Marina; Churbanova, Natalia; Lyupa, Anastasiya

    2018-02-01

    The work deals with the development of an original mathematical model of porous medium flow constructed by analogy with the quasigasdynamic system of equations and allowing implementation via explicit numerical methods. The model is generalized to the case of multiphase multicomponent fluid and takes into account possible heat sources. The proposed approach is verified by a number of test predictions.

  8. Development and Validation of Spatially Explicit Habitat Models for Cavity-nesting Birds in Fishlake National Forest, Utah

    Treesearch

    Randall A., Jr. Schultz; Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino

    2005-01-01

    The ability of USDA Forest Service Forest Inventory and Analysis (FIA) generated spatial products to increase the predictive accuracy of spatially explicit, macroscale habitat models was examined for nest-site selection by cavity-nesting birds in Fishlake National Forest, Utah. One FIA-derived variable (percent basal area of aspen trees) was significant in the habitat...

  9. Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems

    NASA Astrophysics Data System (ADS)

    Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin

    2018-02-01

    We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].

  10. Full-Scale Crash Test and Finite Element Simulation of a Composite Prototype Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; Lyle, Karen H.

    2003-01-01

    A full-scale crash test of a prototype composite helicopter was performed at the Impact Dynamics Research Facility at NASA Langley Research Center in 1999 to obtain data for validation of a finite element crash simulation. The helicopter was the flight test article built by Sikorsky Aircraft during the Advanced Composite Airframe Program (ACAP). The composite helicopter was designed to meet the stringent Military Standard (MIL-STD-1290A) crashworthiness criteria and was outfitted with two crew and two troop seats and four anthropomorphic dummies. The test was performed at 38-ft/s vertical and 32.5-ft/s horizontal velocity onto a rigid surface. An existing modal-vibration model of the Sikorsky ACAP helicopter was converted into a model suitable for crash simulation. A two-stage modeling approach was implemented and an external user-defined subroutine was developed to represent the complex landing gear response. The crash simulation was executed with a nonlinear, explicit transient dynamic finite element code. Predictions of structural deformation and failure, the sequence of events, and the dynamic response of the airframe structure were generated and the numerical results were correlated with the experimental data to validate the simulation. The test results, the model development, and the test-analysis correlation are described.

  11. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model and the capabilities of the toolkit, and discusses its evolution.

  12. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
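
    As a schematic of the informal-likelihood idea (not the authors' cholera model or their actual calibration method), the sketch below calibrates a toy multi-site epidemic model by rejection sampling on an informal goodness-of-fit score; the model equations, score, priors, and parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(beta, gamma, n_sites=5, n_weeks=30):
    """Toy spatially distributed SIR-like weekly case counts (a stand-in for
    a real spatially explicit epidemic model; parameters are illustrative)."""
    cases = np.zeros((n_sites, n_weeks))
    s, i = np.full(n_sites, 1e4), np.full(n_sites, 10.0)
    for t in range(n_weeks):
        new = beta * s * i / 1e4
        i = i + new - gamma * i
        s = s - new
        cases[:, t] = new
    return cases

def informal_fit(sim, obs):
    """Informal goodness of fit: mean normalized RMSE over all site
    time series, rather than a formal likelihood."""
    rmse = np.sqrt(np.mean((sim - obs) ** 2, axis=1))
    return np.mean(rmse / (obs.max(axis=1) + 1e-9))

# 'Observed' data generated from known parameters, with multiplicative noise.
obs = simulate(0.9, 0.5) * rng.lognormal(0.0, 0.2, size=(5, 30))

# Rejection sampling: keep the parameter draws whose informal fit is best.
draws = rng.uniform([0.3, 0.1], [1.5, 0.9], size=(5000, 2))
scores = np.array([informal_fit(simulate(b, g), obs) for b, g in draws])
posterior = draws[scores < np.quantile(scores, 0.01)]
print("accepted:", len(posterior),
      "beta ~", posterior[:, 0].mean().round(2),
      "gamma ~", posterior[:, 1].mean().round(2))
```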

  13. Structure of Alzheimer's 10-35 β peptide from replica-exchange molecular dynamics simulations in explicit water

    NASA Astrophysics Data System (ADS)

    Baumketner, Andriy; Shea, Joan-Emma

    2006-03-01

    We report a replica-exchange molecular dynamics study of the 10-35 fragment of Alzheimer's disease amyloid β peptide, Aβ10-35, in aqueous solution. This fragment was previously seen [J. Str. Biol. 130 (2000) 130] to possess all the most important amyloidogenic properties characteristic of full-length Aβ peptides. Our simulations attempted to fold Aβ10-35 from first principles. The peptide was modeled using the all-atom OPLS/AA force field in conjunction with the TIP3P explicit solvent model. A total of 72 replicas were considered and simulated over 40 ns of total time, including 5 ns of initial equilibration. We find that Aβ10-35 does not possess any unique folded state, a 3D structure of predominant population, under normal temperature and pressure. Rather, this peptide exists as a mixture of collapsed globular states that remain in rapid dynamic equilibrium with each other. This conformational ensemble is seen to be dominated by random coil and bend structures with insignificant presence of α-helical or β-sheet structure. We find that, overall, the 3D structure of Aβ10-35 is shaped by salt bridges formed between oppositely charged residues. Of all possible salt bridges, K28-D23 was seen to have the highest formation probability, being formed more than 60% of the time.

  14. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    USGS Publications Warehouse

    Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.

    2015-01-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.

  15. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
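
    The sketch below reproduces the flavour of this comparison on a toy single-bucket model (not the conceptual model used in the study): a first-order, explicit, fixed-step Euler update versus an adaptive-step explicit Runge-Kutta integration with error control. The parameter values and the rainfall forcing are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy lumped rainfall-runoff model: storage S [mm] with smooth rainfall
# pulses P(t) [mm/day] and a nonlinear outflow Q = k * S**a.
k, a = 0.8, 1.6
P = lambda t: 20.0 * np.exp(-(((t % 20.0) - 2.0) ** 2))   # a pulse every 20 days

def dSdt(t, S):
    return P(t) - k * np.maximum(S, 0.0) ** a

# (1) First-order, explicit, fixed-step (daily) Euler: computationally cheap,
# but with this step the nonlinear outflow can even drive storage negative,
# the kind of numerical artifact discussed above.
t_end, dt = 60.0, 1.0
S_euler, S = [5.0], 5.0
for t in np.arange(0.0, t_end, dt):
    S = S + dt * dSdt(t, S)
    S_euler.append(S)

# (2) Adaptive-step explicit Runge-Kutta with error control.
sol = solve_ivp(dSdt, (0.0, t_end), [5.0], rtol=1e-6, atol=1e-8,
                t_eval=np.arange(0.0, t_end + dt, dt))

err = np.max(np.abs(np.array(S_euler) - sol.y[0]))
print(f"max |fixed-step Euler - adaptive RK| over 60 days: {err:.2f} mm")
```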

  16. Explicit Instruction Elements in Core Reading Programs

    ERIC Educational Resources Information Center

    Child, Angela R.

    2012-01-01

    Classroom teachers are provided instructional recommendations for teaching reading from their adopted core reading programs (CRPs). Explicit instruction elements or what is also called instructional moves, including direct explanation, modeling, guided practice, independent practice, discussion, feedback, and monitoring, were examined within CRP…

  17. Late positive potential to explicit sexual images associated with the number of sexual intercourse partners

    PubMed Central

    Steele, Vaughn R.; Staley, Cameron; Sabatinelli, Dean

    2015-01-01

    Risky sexual behaviors typically occur when a person is sexually motivated by potent, sexual reward cues. Yet, individual differences in sensitivity to sexual cues have not been examined with respect to sexual risk behaviors. A greater responsiveness to sexual cues might provide greater motivation for a person to act sexually; a lower responsiveness to sexual cues might lead a person to seek more intense, novel, possibly risky, sexual acts. In this study, event-related potentials were recorded in 64 men and women while they viewed a series of emotional, including explicit sexual, photographs. The motivational salience of the sexual cues was varied by including more and less explicit sexual images. Indeed, the more explicit sexual stimuli resulted in enhanced late positive potentials (LPP) relative to the less explicit sexual images. Participants with fewer sexual intercourse partners in the last year had reduced LPP amplitude to the less explicit sexual images than the more explicit sexual images, whereas participants with more partners responded similarly to the more and less explicit sexual images. This pattern of results is consistent with a greater responsivity model. Those who engage in more sexual behaviors consistent with risk are also more responsive to less explicit sexual cues. PMID:24526189

  18. Assessment of implicit health attitudes: a multitrait-multimethod approach and a comparison between patients with hypochondriasis and patients with anxiety disorders.

    PubMed

    Weck, Florian; Höfling, Volkmar

    2015-01-01

    Two adaptations of the Implicit Association Task were used to assess implicit anxiety (IAT-Anxiety) and implicit health attitudes (IAT-Hypochondriasis) in patients with hypochondriasis (n = 58) and anxiety patients (n = 71). Explicit anxieties and health attitudes were assessed using questionnaires. The analysis of several multitrait-multimethod models indicated that the low correlation between explicit and implicit measures of health attitudes is due to the substantial methodological differences between the IAT and the self-report questionnaire. Patients with hypochondriasis displayed significantly more dysfunctional explicit and implicit health attitudes than anxiety patients, but no differences were found regarding explicit and implicit anxieties. The study demonstrates the specificity of explicit and implicit dysfunctional health attitudes among patients with hypochondriasis.

  19. Assessment of multireference approaches to explicitly correlated full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart

    2016-08-07

    The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback to the approach is that, being based on an expansion of Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness, for accurate atomization energies in small basis sets, reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.

  20. Free energy landscape of protein folding in water: explicit vs. implicit solvent.

    PubMed

    Zhou, Ruhong

    2003-11-01

    The Generalized Born (GB) continuum solvent model is arguably the most widely used implicit solvent model in protein folding and protein structure prediction simulations; however, it remains an open question how well the model behaves in these large-scale simulations. The current study uses the beta-hairpin from the C-terminus of protein G as an example to explore the folding free energy landscape with various GB models, and the results are compared to explicit solvent simulations and experiments. All free energy landscapes are obtained from extensive conformation space sampling with a highly parallel replica exchange method. Because solvation model parameters are strongly coupled with force fields, five different force field/solvation model combinations are examined and compared in this study, namely the explicit solvent model OPLSAA/SPC, and the implicit solvent models OPLSAA/SGB (Surface GB), AMBER94/GBSA (GB with Solvent Accessible Surface Area), AMBER96/GBSA, and AMBER99/GBSA. Surprisingly, we find that the free energy landscapes from implicit solvent models are quite different from that of the explicit solvent model. Except for AMBER96/GBSA, all other implicit solvent models find that the lowest free energy state is not the native state. All implicit solvent models show erroneous salt-bridge effects between charged residues, particularly the OPLSAA/SGB model, where the overly strong salt-bridge effect results in an overweighting of a non-native structure with one hydrophobic residue, F52, expelled from the hydrophobic core in order to make better salt bridges. On the other hand, both the AMBER94/GBSA and AMBER99/GBSA models turn the beta-hairpin into an alpha-helix, and the alpha-helical content is much higher than the previously reported alpha-helical content in an explicit solvent simulation with AMBER94 (AMBER94/TIP3P). Only AMBER96/GBSA shows a reasonable free energy landscape, in which the lowest free energy structure is the native one, despite an erroneous salt bridge between D47 and K50. Detailed results on free energy contour maps, lowest free energy structures, distribution of native contacts, alpha-helical content during the folding process, NOE comparison with NMR, and temperature dependences are reported and discussed for all five models. Copyright 2003 Wiley-Liss, Inc.

  1. Reduced cognitive capacity impairs the malleability of older adults' negative attitudes to stigmatized individuals.

    PubMed

    Krendl, Anne C

    2018-05-21

    Although engaging explicit regulatory strategies may reduce negative bias toward outgroup members, these strategies are cognitively demanding and thus may not be effective for older adults (OA), who have reduced cognitive resources. The current study therefore examines whether individual differences in cognitive capacity disrupt OAs' ability to explicitly regulate their bias toward stigmatized individuals. Young adults (YA) and OA were instructed to explicitly regulate their negative bias toward stigmatized individuals by using an explicit reappraisal strategy. Regulatory success was assessed as a function of age and individual differences in cognitive capacity (Experiment 1). In Experiment 2, the role of executive function in implementing cognitive reappraisal strategies was examined by using a divided attention manipulation. Results from Experiment 1 revealed that individual differences in OAs' cognitive capacity disrupted their ability to regulate their negative emotional response to stigma. In Experiment 2, it was found that dividing attention in YA significantly reduced their regulatory success as compared to YAs' regulatory capacity in the full attention condition. As expected, dividing YAs' attention made their performance similar to that of OA with relatively preserved cognitive capacity. Together, the results from this study demonstrated that individual differences in cognitive capacity predicted OAs' ability to explicitly regulate their negative bias toward a range of stigmatized individuals.

  2. Explicit and implicit springback simulation in sheet metal forming using fully coupled ductile damage and distortional hardening model

    NASA Astrophysics Data System (ADS)

    Yetna n'jock, M.; Houssem, B.; Labergere, C.; Saanouni, K.; Zhenming, Y.

    2018-05-01

    Springback is an important phenomenon which accompanies the forming of metallic sheets, especially for high strength materials. A quantitative prediction of springback becomes very important for newly developed materials with high mechanical characteristics. In this work, a numerical methodology is developed to quantify this undesirable phenomenon. This methodology is based on the use of both the explicit and implicit finite element solvers of Abaqus®. The most important ingredient of this methodology is the use of a highly predictive mechanical model. A thermodynamically consistent, non-associative and fully anisotropic elastoplastic constitutive model, strongly coupled with isotropic ductile damage and accounting for distortional hardening, is used. An algorithm for local integration of the complete set of constitutive equations is developed. This algorithm uses the rotated frame formulation (RFF) to ensure the incremental objectivity of the model in the framework of finite strains. It is implemented in both the explicit (Abaqus/Explicit®) and implicit (Abaqus/Standard®) solvers of Abaqus® through the user routines VUMAT and UMAT, respectively. The implicit solver of Abaqus® has been used to study springback, as it is generally a quasi-static unloading. In order to compare the methods' efficiency, the explicit dynamic relaxation method proposed by Rayleigh has also been used for springback prediction. The results obtained for the U draw/bending benchmark are studied, discussed and compared with experimental results as reference. Finally, the purpose of this work is to evaluate the reliability of the different methods in efficiently predicting springback in sheet metal forming.

  3. Free energy landscapes of small peptides in an implicit solvent model determined by force-biased multicanonical molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Watanabe, Yukihisa S.; Kim, Jae Gil; Fukunishi, Yoshifumi; Nakamura, Haruki

    2004-12-01

    In order to investigate whether the implicit solvent (GB/SA) model could reproduce the free energy landscapes of peptides, the potentials of mean force (PMFs) of eight tripeptides were examined and compared with the PMFs obtained with an explicit water model. The force-biased multicanonical molecular dynamics method was used for enhanced conformational sampling. Consequently, the GB/SA model reproduced almost all the global and local minima in the PMFs observed with the explicit water model. However, the GB/SA model overestimated the frequency of structures that are stabilized by intra-peptide hydrogen bonds.

  4. A neurocomputational theory of how explicit learning bootstraps early procedural learning.

    PubMed

    Paul, Erick J; Ashby, F Gregory

    2013-01-01

    It is widely accepted that human learning and memory is mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. In order to investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could be from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly by the explicit system's control of motor responses through basal ganglia-mediated loops.

  5. A watershed-based spatially-explicit demonstration of an integrated environmental modeling framework for ecosystem services in the Coal River Basin (WV, USA)

    Treesearch

    John M. Johnston; Mahion C. Barber; Kurt Wolfe; Mike Galvin; Mike Cyterski; Rajbir Parmar; Luis Suarez

    2016-01-01

    We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quantity, habitat suitability for aquatic biota, fish biomasses, population densities, ...

  6. Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2006-09-01

    tanks is presented. The semi-discrete combined solid and fluid equations of motion are integrated using a time-accurate parallel explicit solver...Incompressible fluid flow in a moving/deforming container, including accurate modeling of the free-surface, turbulence, and viscous effects ...paper, a single computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of

  7. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  8. Electroweak bremsstrahlung for wino-like Dark Matter annihilations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Comelli, Denis; Simone, Andrea De

    2012-06-01

    If the Dark Matter is the neutral Majorana component of a multiplet which is charged under the electroweak interactions of the Standard Model, its main annihilation channel is into W⁺W⁻, while the annihilation into light fermions is helicity suppressed. As pointed out recently, the radiation of gauge bosons from the initial state of the annihilation lifts the suppression and opens up an s-wave contribution to the cross section. We perform the full tree-level calculation of Dark Matter annihilations, including electroweak bremsstrahlung, in the context of an explicit model corresponding to the supersymmetric wino. We find that the fermion channel can become as important as the di-boson one. This result has significant implications for the predictions of the fluxes of particles originating from Dark Matter annihilations.

  9. Multilayer-MCTDH approach to the energy transfer dynamics in the LH2 antenna complex

    NASA Astrophysics Data System (ADS)

    Shibl, Mohamed F.; Schulze, Jan; Al-Marri, Mohammed J.; Kühn, Oliver

    2017-09-01

    The multilayer multiconfiguration time-dependent Hartree method is used to study the coupled exciton-vibrational dynamics in a high-dimensional nonameric model of the LH2 antenna complex of purple bacteria. The exciton-vibrational coupling is parametrized within the Huang-Rhys model according to phonon and intramolecular vibrational modes derived from an experimental bacteriochlorophyll spectral density. In contrast to reduced density matrix approaches, the Schrödinger equation is solved explicitly, giving access to the full wave function. This facilitates an unbiased analysis in terms of the coupled dynamics of excitonic and vibrational degrees of freedom. For the present system, we identify spectator modes for the B800 to B800 transfer and we find a non-additive effect of phonon and intramolecular vibrational modes on the B800 to B850 exciton transfer.

  10. Predictions for the Dirac CP-violating phase from sum rules

    NASA Astrophysics Data System (ADS)

    Delgadillo, Luis A.; Everett, Lisa L.; Ramos, Raymundo; Stuart, Alexander J.

    2018-05-01

    We explore the implications of recent results relating the Dirac CP-violating phase to predicted and measured leptonic mixing angles within a standard set of theoretical scenarios in which charged lepton corrections are responsible for generating a nonzero value of the reactor mixing angle. We employ a full set of leptonic sum rules as required by the unitarity of the lepton mixing matrix, which can be reduced to predictions for the observable mixing angles and the Dirac CP-violating phase in terms of model parameters. These sum rules are investigated within a given set of theoretical scenarios for the neutrino sector diagonalization matrix for several known classes of charged lepton corrections. The results provide explicit maps of the allowed model parameter space within each given scenario and assumed form of charged lepton perturbations.

  11. A spatially explicit hydro-ecological modeling framework (BEPS-TerrainLab V2.0): Model description and test in a boreal ecosystem in Eastern North America

    NASA Astrophysics Data System (ADS)

    Govind, Ajit; Chen, Jing Ming; Margolis, Hank; Ju, Weimin; Sonnentag, Oliver; Giasson, Marc-André

    2009-04-01

    A spatially explicit, process-based hydro-ecological model, BEPS-TerrainLab V2.0, was developed to improve the representation of ecophysiological, hydro-ecological and biogeochemical processes of boreal ecosystems in a tightly coupled manner. Several processes unique to boreal ecosystems were implemented, including sub-surface lateral water fluxes, stratification of vegetation into distinct layers for explicit ecophysiological representation, inclusion of novel spatial upscaling strategies and biogeochemical processes. To account for preferential water fluxes common in humid boreal ecosystems, a novel scheme was introduced based on laboratory analyses. Leaf-scale ecophysiological processes were upscaled to canopy scale by explicitly considering leaf physiological conditions as affected by light and water stress. The modified model was tested with 2 years of continuous measurements taken at the Eastern Old Black Spruce Site of the Fluxnet-Canada Research Network located in a humid boreal watershed in eastern Canada. Comparison of the simulated and measured ET, water-table depth (WTD), volumetric soil water content (VSWC) and gross primary productivity (GPP) revealed that BEPS-TerrainLab V2.0 simulates hydro-ecological processes with reasonable accuracy. The model was able to explain 83% of the ET variability, 92% of the GPP variability and 72% of the WTD dynamics. The model suggests that in humid ecosystems such as eastern North American boreal watersheds, topographically driven sub-surface baseflow is the main mechanism of soil water partitioning, which significantly affects the local-scale hydrological conditions.

  12. Analysis of Lightning-induced Impulse Magnetic Fields in the Building with an Insulated Down Conductor

    NASA Astrophysics Data System (ADS)

    Du, Patrick Y.; Zhou, Qi-Bin

    This paper presents an analysis of lightning-induced magnetic fields in a building. The building of concern is protected by a lightning protection system with an insulated down conductor. A system model for the metallic structure of the building is first constructed using a circuit approach. The circuit model of the insulated down conductor is discussed extensively, and explicit expressions of the circuit parameters are presented. The system model was verified experimentally in the laboratory. The modeling approach is applied to analyze the impulse magnetic fields in a full-scale building during a direct lightning strike. It is found that the impulse magnetic field is significantly high near the down conductor. The field is attenuated if the down conductor is moved to a column in the building. The field can be reduced further if the down conductor is housed in an earthed metal pipe. Recommendations for protecting critical equipment against lightning-induced magnetic fields are also provided in the paper.

  13. Group velocity of discrete-time quantum walks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempf, A.; Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Portugal, R.

    2009-05-15

    We show that certain types of quantum walks can be modeled as waves that propagate in a medium with phase and group velocities that are explicitly calculable. Since the group and phase velocities indicate how fast wave packets can propagate causally, we propose the use of these wave velocities in our definition for the hitting time of quantum walks. Our definition of hitting time has the advantage that it requires neither the specification of a walker's initial condition nor of an arrival probability threshold. We give full details for the case of quantum walks on the Cayley graphs of Abelian groups. This includes the special cases of quantum walks on the line and on hypercubes.

  14. General theories of linear gravitational perturbations to a Schwarzschild black hole

    NASA Astrophysics Data System (ADS)

    Tattersall, Oliver J.; Ferreira, Pedro G.; Lagos, Macarena

    2018-02-01

    We use the covariant formulation proposed by Tattersall, Lagos, and Ferreira [Phys. Rev. D 96, 064011 (2017), 10.1103/PhysRevD.96.064011] to analyze the structure of linear perturbations about a spherically symmetric background in different families of gravity theories, and hence study how quasinormal modes of perturbed black holes may be affected by modifications to general relativity. We restrict ourselves to single-tensor, scalar-tensor and vector-tensor diffeomorphism-invariant gravity models in a Schwarzschild black hole background. We show explicitly the full covariant form of the quadratic actions in such cases, which allow us to then analyze odd parity (axial) and even parity (polar) perturbations simultaneously in a straightforward manner.

  15. Improved method for calculating neoclassical transport coefficients in the banana regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp

    The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated over the full range of aspect ratio by the improved method. Some neoclassical transport coefficients for intermediate aspect ratios are found to deviate appreciably from those obtained by the conventional moment method. The differences between the transport coefficients obtained with these two methods are up to about 20%.

  16. Floquet Engineering of Correlated Tunneling in the Bose-Hubbard Model with Ultracold Atoms.

    PubMed

    Meinert, F; Mark, M J; Lauber, K; Daley, A J; Nägerl, H-C

    2016-05-20

    We report on the experimental implementation of tunable occupation-dependent tunneling in a Bose-Hubbard system of ultracold atoms via time-periodic modulation of the on-site interaction energy. The tunneling rate is inferred from a time-resolved measurement of the lattice site occupation after a quantum quench. We demonstrate coherent control of the tunneling dynamics in the correlated many-body system, including full suppression of tunneling as predicted within the framework of Floquet theory. We find that the tunneling rate explicitly depends on the atom number difference in neighboring lattice sites. Our results may open up ways to realize artificial gauge fields that feature density dependence with ultracold atoms.

  17. Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G

    2018-04-18

    We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.

  18. Explicit solutions of normal form of driven oscillatory systems in entrainment bands

    NASA Astrophysics Data System (ADS)

    Tsarouhas, George E.; Ross, John

    1988-11-01

    As in a prior article (Ref. 1), we consider an oscillatory dissipative system driven by external sinusoidal perturbations of given amplitude Q and frequency ω. The kinetic equations are transformed to normal form and solved for small Q near a Hopf bifurcation to oscillations in the autonomous system. Whereas before we chose irrational ratios of the frequency ω_n of the autonomous system to ω, with quasiperiodic response of the system to the perturbation, we now choose rational coprime ratios, with periodic response (entrainment). The dissipative system has either two variables or is adequately described by two variables near the bifurcation. We obtain explicit solutions and develop these in detail for ω_n/ω = 1:1, 1:2, 2:1, 1:3, and 3:1. We choose a specific dissipative model (Brusselator) and test the theory by comparison with full numerical solutions. The analytic solutions of the theory give an excellent approximation for the autonomous system near the bifurcation. The theoretically predicted and calculated entrainment bands agree very well for small Q in the vicinity of the bifurcation (small μ); deviations increase with increasing Q and μ. The theory is applicable to one or two external periodic perturbations.
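
    For orientation, the generic lowest-order normal form for such a driven two-variable system near a Hopf bifurcation is sketched below; the coefficients and the 1:1 forcing term are illustrative and are not the specific expressions derived in the paper.

```latex
% Generic normal form near a Hopf bifurcation (parameter \mu) with weak
% sinusoidal forcing of amplitude Q and frequency \omega:
\[
\dot z \;=\; (\mu + i\omega_n)\,z \;-\; (g_r + i g_i)\,|z|^2 z \;+\; Q\,e^{i\omega t},
\qquad z \in \mathbb{C}.
\]
% For 1:1 entrainment one seeks locked solutions z(t) = R\,e^{i(\omega t + \phi)}
% with constant R and \phi; the range of \omega over which such stable solutions
% exist is the entrainment band, whose width grows with Q.
```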

  19. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
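
    The sketch below illustrates, on a synthetic signal rather than the LS-DYNA output, the standard signal-processing steps mentioned above (a spectrogram and an envelope spectrum that exposes the defect repetition frequency); the sampling rate, defect frequency and resonance frequency are assumed values.

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

# Synthetic stand-in for the bearing signal: periodic impacts at an assumed
# outer-race defect frequency exciting a decaying high-frequency resonance.
fs, dur = 50_000, 1.0                      # sampling rate [Hz], duration [s]
t = np.arange(0, dur, 1 / fs)
bpfo, f_res = 87.0, 4_000.0                # assumed defect and resonance freqs [Hz]
impacts = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
ringdown = np.exp(-2_000 * t[:500]) * np.sin(2 * np.pi * f_res * t[:500])
x = (np.convolve(impacts, ringdown, mode="same")
     + 0.05 * np.random.default_rng(1).standard_normal(t.size))

# Time-frequency view (spectrogram), as used above to separate low- and
# high-frequency signatures of the unloading and reloading events.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024)
print("spectrogram shape (freq bins x time bins):", Sxx.shape)

# Envelope spectrum: demodulate the resonance to expose the defect
# repetition frequency and its harmonics.
envelope = np.abs(hilbert(x))
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("strongest envelope-spectrum line near", freqs[np.argmax(env_spec[1:]) + 1], "Hz")
```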

  20. A three dimensional multigrid multiblock multistage time stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.

    1991-01-01

    A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
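
    A minimal sketch of an explicit multistage scheme with local time stepping, reduced to 1D scalar advection with an upwind (Roe-type) flux; the stage coefficients and wave-speed field are illustrative, not those of the paper.

        # Minimal sketch: a 3-stage explicit Runge-Kutta pseudo-time iteration with local time
        # steps drives a 1D scalar advection problem with spatially varying wave speed to steady
        # state, using a simple upwind flux.
        import numpy as np

        nx, cfl = 200, 0.8
        x = np.linspace(0.0, 1.0, nx)
        dx = x[1] - x[0]
        a = 1.0 + 0.8 * np.sin(np.pi * x)          # local wave speed (assumed, all positive)
        u = np.zeros(nx)
        u_in = 1.0                                 # inflow boundary value
        alphas = (0.6, 0.6, 1.0)                   # multistage coefficients (illustrative)

        def residual(u):
            ul = np.concatenate(([u_in], u[:-1]))  # upwind neighbour for a > 0
            return -a * (u - ul) / dx

        dt_local = cfl * dx / np.abs(a)            # local time stepping: one dt per cell
        for it in range(2000):
            u0 = u.copy()
            for alpha in alphas:                   # explicit multistage update
                u = u0 + alpha * dt_local * residual(u)
            if np.max(np.abs(residual(u))) < 1e-10:
                break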

  1. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge computational resource requirement, particularly when an explicit solvent model is implemented. In the previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.
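
    For reference, the replica-exchange step common to REMD variants like the one above can be sketched as follows (the temperature ladder and energies are placeholders; the velocity-scaling and hybrid-solvent details are not modeled here):

        # Minimal sketch of the temperature replica-exchange move: neighbouring temperature
        # slots i and i+1 swap configurations with the standard Metropolis probability
        # p = min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
        import numpy as np

        kB = 0.0019872041                                # kcal/(mol K)
        temps = np.array([300.0, 320.0, 342.0, 366.0])   # replica temperature ladder (assumed)

        def attempt_swaps(energies, rng):
            """Return perm, where perm[k] is the replica occupying temperature slot k."""
            perm = np.arange(len(temps))
            for i in range(len(temps) - 1):
                bi, bj = 1.0 / (kB * temps[i]), 1.0 / (kB * temps[i + 1])
                ei, ej = energies[perm[i]], energies[perm[i + 1]]
                if rng.random() < min(1.0, np.exp((bi - bj) * (ei - ej))):
                    perm[i], perm[i + 1] = perm[i + 1], perm[i]
            return perm

        rng = np.random.default_rng(0)
        print(attempt_swaps(np.array([-1200.0, -1195.0, -1188.0, -1181.0]), rng))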

  2. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    PubMed Central

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  3. The Development of a new Numerical Modelling Approach for Naturally Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Pine, R. J.; Coggan, J. S.; Flynn, Z. N.; Elmo, D.

    2006-11-01

    An approach for modelling fractured rock masses has been developed which has two main objectives: to maximise the quality of representation of the geometry of existing rock jointing and to use this within a loading model which takes full account of this style of jointing. Initially the work has been applied to the modelling of mine pillars and data from the Middleton Mine in the UK has been used as a case example. However, the general approach is applicable to all aspects of rock mass behaviour including the stress conditions found in hangingwalls, tunnels, block caving, and slopes. The rock mass fracture representation was based on a combination of explicit mapping of rock faces and the synthesis of this data into a three-dimensional model, based on the use of the FracMan computer model suite. Two-dimensional cross sections from this model were imported into the finite element computer model, ELFEN, for loading simulation. The ELFEN constitutive model for fracture simulation includes the Rotating Crack, and Rankine material models, in which fracturing is controlled by tensile strength and fracture energy parameters. For tension/compression stress states, the model is complemented with a capped Mohr-Coulomb criterion in which the softening response is coupled to the tensile model. Fracturing due to dilation is accommodated by introducing an explicit coupling between the inelastic strain accrued by the Mohr-Coulomb yield surface and the anisotropic degradation of the mutually orthogonal tensile yield surfaces of the rotating crack model. Pillars have been simulated with widths of 2.8, 7 and 14 m and a height of 7 m (the Middleton Mine pillars are typically 14 m wide and 7 m high). The evolution of the pillar failure under progressive loading through fracture extension and creation of new fractures is presented, and pillar capacities and stiffnesses are compared with empirical models. The agreement between the models is promising and the new model provides useful insights into the influence of pre-existing fractures. Further work is needed to consider the effects of three-dimensional loading and other boundary condition problems.

  4. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning.

    PubMed

    McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A

    2015-07-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. Copyright © 2015 the authors 0270-6474/15/359568-12$15.00/0.
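
    The two-state model referenced above (Smith et al., 2006) can be written in a few lines; the parameter values here are illustrative only.

        # Minimal simulation of the two-state model: a fast process (learns quickly, forgets
        # quickly) and a slow process (learns slowly, retains well) both update from the same
        # error, and their sum produces the canonical learning curve.
        import numpy as np

        Af, Bf = 0.92, 0.20        # fast process: low retention, high learning rate (assumed)
        As, Bs = 0.996, 0.02       # slow process: high retention, low learning rate (assumed)

        n_trials = 300
        perturb = np.ones(n_trials)            # constant perturbation (normalized)
        perturb[200:] = 0.0                    # washout phase
        xf = xs = 0.0
        net = np.zeros(n_trials)
        for n in range(n_trials):
            x = xf + xs                        # total adaptation expressed on trial n
            e = perturb[n] - x                 # error driving both processes
            xf = Af * xf + Bf * e
            xs = As * xs + Bs * e
            net[n] = x
        # In the account above, the fast state maps onto explicit learning and the slow
        # state onto implicit learning.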

  5. Investigating the predictive validity of implicit and explicit measures of motivation in problem-solving behavioural tasks.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2013-09-01

    Research into the effects of individuals' autonomous motivation on behaviour has traditionally adopted explicit measures and self-reported outcome assessment. Recently, there has been increased interest in the effects of implicit motivational processes underlying behaviour from a self-determination theory (SDT) perspective. The aim of the present research was to provide support for the predictive validity of an implicit measure of autonomous motivation on behavioural persistence on two objectively measurable tasks. SDT and a dual-systems model were adopted as frameworks to explain the unique effects offered by explicit and implicit autonomous motivational constructs on behavioural persistence. In both studies, implicit autonomous motivation significantly predicted unique variance in time spent on each task. Several explicit measures of autonomous motivation also significantly predicted persistence. Results provide support for the proposed model and the inclusion of implicit measures in research on motivated behaviour. In addition, implicit measures of autonomous motivation appear to be better suited to explaining variance in behaviours that are more spontaneous or unplanned. Future implications for research examining implicit motivation from dual-systems models and SDT approaches are outlined. © 2012 The British Psychological Society.

  6. Modeling Active Aging and Explicit Memory: An Empirical Study.

    PubMed

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  7. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning

    PubMed Central

    Bond, Krista M.; Taylor, Jordan A.

    2015-01-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. PMID:26134640

  8. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.

  9. Assessment of an Explicit Algebraic Reynolds Stress Model

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2005-01-01

    This study assesses an explicit algebraic Reynolds stress turbulence model in the three-dimensional Reynolds-averaged Navier-Stokes (RANS) solver, ISAAC (Integrated Solution Algorithm for Arbitrary Configurations). Additionally, it compares solutions for two select configurations between ISAAC and the RANS solver PAB3D. This study compares results with either direct numerical simulation data, experimental data, or empirical models for several different geometries with compressible, separated, and high Reynolds number flows. In general, the turbulence model matched data or followed experimental trends well, and for the selected configurations, the computational results of ISAAC closely matched those of PAB3D using the same turbulence model.

  10. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  11. Characterizing Aeroelastic Systems Using Eigenanalysis, Explicitly Retaining The Aerodynamic Degrees of Freedom

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Dowell, Earl H.

    2001-01-01

    Discrete time aeroelastic models with explicitly retained aerodynamic modes have been generated employing a time marching vortex lattice aerodynamic model. This paper presents analytical results from eigenanalysis of these models. The potential of these models to calculate the behavior of modes that represent damped system motion (noncritical modes) in addition to the simple harmonic modes is explored. A typical section with only structural freedom in pitch is examined. The eigenvalues are examined and compared to experimental data. Issues regarding the convergence of the solution with regard to refining the aerodynamic discretization are investigated. Eigenvector behavior is examined; the eigenvector associated with a particular eigenvalue can be viewed as the set of modal participation factors for that particular mode. For the present formulation of the equations of motion, the vorticity for each aerodynamic element appears explicitly as an element of each eigenvector in addition to the structural dynamic generalized coordinates. Thus, modal participation of the aerodynamic degrees of freedom can be assessed in addition to participation of structural degrees of freedom.

  12. Quantum mechanical force field for hydrogen fluoride with explicit electronic polarization.

    PubMed

    Mazack, Michael J M; Gao, Jiali

    2014-05-28

    The explicit polarization (X-Pol) theory is a fragment-based quantum chemical method that explicitly models the internal electronic polarization and intermolecular interactions of a chemical system. X-Pol theory provides a framework to construct a quantum mechanical force field, which we have extended to liquid hydrogen fluoride (HF) in this work. The parameterization, called XPHF, is built upon the same formalism introduced for the XP3P model of liquid water, which is based on the polarized molecular orbital (PMO) semiempirical quantum chemistry method and the dipole-preserving polarization consistent point charge model. We introduce a fluorine parameter set for PMO, and find good agreement for various gas-phase results of small HF clusters compared to experiments and ab initio calculations at the M06-2X/MG3S level of theory. In addition, the XPHF model shows reasonable agreement with experiments for a variety of structural and thermodynamic properties in the liquid state, including radial distribution functions, interaction energies, diffusion coefficients, and densities at various state points.

  13. Blast and the Consequences on Traumatic Brain Injury-Multiscale Mechanical Modeling of Brain

    DTIC Science & Technology

    2011-02-17

    LS-DYNA, an explicit FE code, has been employed to simulate the air blast as a multi-material fluid-structure interaction problem using a 3-D head model. The report also covers a biomechanics study of influencing parameters for the brain under impact, including the impact of cerebrospinal fluid.

  14. Traveling waves in a spring-block chain sliding down a slope

    NASA Astrophysics Data System (ADS)

    Morales, J. E.; James, G.; Tonnelier, A.

    2017-07-01

    Traveling waves are studied in a spring slider-block model. We explicitly construct front waves (kinks) for a piecewise-linear spinodal friction force. Pulse waves are obtained as the matching of two traveling fronts with identical speeds. Explicit formulas are obtained for the wavespeed and the wave form in the anticontinuum limit. The link with localized waves in a Burridge-Knopoff model of an earthquake fault is briefly discussed.

  15. Biomass and fire dynamics in a temperate forest-grassland mosaic: Integrating multi-species herbivory, climate, and fire with the FireBGCv2/GrazeBGC system

    Treesearch

    Robert A. Riggs; Robert E. Keane; Norm Cimon; Rachel Cook; Lisa Holsinger; John Cook; Timothy DelCurto; L. Scott Baggett; Donald Justice; David Powell; Martin Vavra; Bridgett Naylor

    2015-01-01

    Landscape fire succession models (LFSMs) predict spatially-explicit interactions between vegetation succession and disturbance, but these models have yet to fully integrate ungulate herbivory as a driver of their processes. We modified a complex LFSM, FireBGCv2, to include a multi-species herbivory module, GrazeBGC. The system is novel in that it explicitly...

  16. Traveling waves in a spring-block chain sliding down a slope.

    PubMed

    Morales, J E; James, G; Tonnelier, A

    2017-07-01

    Traveling waves are studied in a spring slider-block model. We explicitly construct front waves (kinks) for a piecewise-linear spinodal friction force. Pulse waves are obtained as the matching of two traveling fronts with identical speeds. Explicit formulas are obtained for the wavespeed and the wave form in the anticontinuum limit. The link with localized waves in a Burridge-Knopoff model of an earthquake fault is briefly discussed.

  17. Spin-orbit splitted excited states using explicitly-correlated equation-of-motion coupled-cluster singles and doubles eigenvectors

    NASA Astrophysics Data System (ADS)

    Bokhan, Denis; Trubnikov, Dmitrii N.; Perera, Ajith; Bartlett, Rodney J.

    2018-04-01

    An explicitly-correlated method for the calculation of excited states with spin-orbit couplings has been formulated and implemented. The developed approach utilizes the left and right eigenvectors of the equation-of-motion coupled-cluster model, which is based on the linearly approximated explicitly correlated coupled-cluster singles and doubles [CCSD(F12)] method. The spin-orbit interactions are introduced by using the spin-orbit mean field (SOMF) approximation of the Breit-Pauli Hamiltonian. Numerical tests for several atoms and molecules show good agreement between the explicitly-correlated results and the corresponding values calculated in the complete basis set (CBS) limit; highly accurate excitation energies can be obtained already at the triple-ζ level.

  18. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
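
    A toy illustration of the difference between simple (fixed-point) iteration and Newton iteration discussed above, using one implicit-Euler step of the logistic equation; the step sizes and tolerances are assumed.

        # Minimal sketch: solve one implicit-Euler step g(v) = v - u - dt*v*(1 - v) = 0 for the
        # logistic ODE u' = u(1 - u). For large dt the fixed-point (simple) iteration can fail
        # to converge, while Newton's method solves the implicit equation but may land on a
        # spurious root, distorting the computed dynamics.
        def f(u):
            return u * (1.0 - u)

        def implicit_euler_step(u, dt, solver):
            g = lambda v: v - u - dt * f(v)
            dg = lambda v: 1.0 - dt * (1.0 - 2.0 * v)
            v = u                                    # start iterations from the previous step
            for _ in range(100):
                v_new = u + dt * f(v) if solver == "simple" else v - g(v) / dg(v)
                if abs(v_new - v) < 1e-12:
                    return v_new, True
                v = v_new
            return v, False                          # iteration did not converge

        for dt in (0.5, 2.5):
            for solver in ("simple", "newton"):
                u, ok = 0.2, True
                for _ in range(50):
                    u, conv = implicit_euler_step(u, dt, solver)
                    ok = ok and conv
                print(f"dt={dt:3.1f} {solver:6s} all steps converged={ok} u_final={u:+.4f}")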

  19. Incompressible spectral-element method: Derivation of equations

    NASA Technical Reports Server (NTRS)

    Deanna, Russell G.

    1993-01-01

    A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.
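
    A schematic of a splitting of this type, written as a semi-discrete scheme (an illustrative reconstruction from the description above, with assumed notation, not the report's exact discrete equations):

        \begin{align*}
        \hat{\mathbf u} &= \mathbf u^{n} - \Delta t\,(\mathbf u^{n}\!\cdot\!\nabla)\mathbf u^{n}
          && \text{(explicit nonlinear step)}\\
        \nabla^{2}p^{n+1} &= \frac{1}{\Delta t}\,\nabla\!\cdot\!\left(\hat{\mathbf u} + \Delta t\,\nu\nabla^{2}\mathbf u^{n}\right)
          && \text{(pressure step: lagged viscous term added before the solve)}\\
        \left(\mathbf I - \tfrac{\Delta t\,\nu}{2}\nabla^{2}\right)\mathbf u^{n+1}
          &= \hat{\mathbf u} - \Delta t\,\nabla p^{n+1} + \tfrac{\Delta t\,\nu}{2}\nabla^{2}\mathbf u^{n}
          && \text{(Crank--Nicolson viscous step: a Helmholtz problem)}
        \end{align*}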

  20. Heavy-light mesons in chiral AdS/QCD

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Zahed, Ismail

    2017-06-01

    We discuss a minimal holographic model for the description of heavy-light and light mesons with chiral symmetry, defined in a slab of AdS space. The model consists of a pair of chiral Yang-Mills and tachyon fields with specific boundary conditions that break spontaneously chiral symmetry in the infrared. The heavy-light spectrum and decay constants are evaluated explicitly. In the heavy mass limit the model exhibits both heavy-quark and chiral symmetry and allows for the explicit derivation of the one-pion axial couplings to the heavy-light mesons.

  1. Nonminimally coupled massive scalar field in a 2D black hole: Exactly solvable model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, V.; Zelnikov, A.

    2001-06-15

    We study a nonminimal massive scalar field in the background of a two-dimensional black hole spacetime. We consider the black hole which is the solution of the 2D dilaton gravity derived from string-theoretical models. We find an explicit solution in a closed form for all modes and the Green function of the scalar field with an arbitrary mass and a nonminimal coupling to the curvature. Greybody factors, the Hawking radiation, and ⟨φ²⟩^ren are calculated explicitly for this exactly solvable model.

  2. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  3. Implicit and explicit weight bias in a national sample of 4,732 medical students: the medical student CHANGES study.

    PubMed

    Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-04-01

    To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among 1st year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.

  4. Different Mechanisms of Soil Microbial Response to Global Change Result in Different Outcomes in the MIMICS-CN Model

    NASA Astrophysics Data System (ADS)

    Kyker-Snowman, E.; Wieder, W. R.; Grandy, S.

    2017-12-01

    Microbial-explicit models of soil carbon (C) and nitrogen (N) cycling have improved upon simulations of C and N stocks and flows at site-to-global scales relative to traditional first-order linear models. However, the response of microbial-explicit soil models to global change factors depends upon which parameters and processes in a model are altered by those factors. We used the MIcrobial-MIneral Carbon Stabilization Model with coupled N cycling (MIMICS-CN) to compare modeled responses to changes in temperature and plant inputs at two previously-modeled sites (Harvard Forest and Kellogg Biological Station). We spun the model up to equilibrium, applied each perturbation, and evaluated 15 years of post-perturbation C and N pools and fluxes. To model the effect of increasing temperatures, we independently examined the impact of decreasing microbial C use efficiency (CUE), increasing the rate of microbial turnover, and increasing Michaelis-Menten kinetic rates of litter decomposition, plus several combinations of the three. For plant inputs, we ran simulations with stepwise increases in metabolic litter, structural litter, whole litter (structural and metabolic), or labile soil C. The cumulative change in soil C or N varied in both sign and magnitude across simulations. For example, increasing kinetic rates of litter decomposition resulted in net releases of both C and N from soil pools, while decreasing CUE produced short-term increases in respiration but long-term accumulation of C in litter pools and shifts in soil C:N as microbial demand for C increased and biomass declined. Given that soil N cycling constrains the response of plant productivity to global change and that soils generate a large amount of uncertainty in current earth system models, microbial-explicit models are a critical opportunity to advance the modeled representation of soils. However, microbial-explicit models must be improved by experiments to isolate the physiological and stoichiometric parameters of soil microbes that shift under global change.
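
    A highly reduced, hypothetical stand-in for a microbial-explicit soil carbon model (not MIMICS-CN itself) makes the perturbation logic above concrete; the parameter values are assumed.

        # Minimal sketch: substrate C is decomposed by microbial biomass B via Michaelis-Menten
        # kinetics; the perturbations discussed above (lower CUE, faster turnover, faster
        # kinetics) are applied by changing the corresponding parameter and re-running.
        import numpy as np
        from scipy.integrate import solve_ivp

        def soil_model(t, y, inputs, vmax, km, cue, turnover):
            C, B = y                              # substrate C and microbial biomass B
            uptake = vmax * B * C / (km + C)      # Michaelis-Menten decomposition
            dC = inputs + turnover * B - uptake   # dead biomass recycles to substrate
            dB = cue * uptake - turnover * B      # growth = CUE * uptake
            return [dC, dB]

        base = {"inputs": 1.0, "vmax": 0.8, "km": 200.0, "cue": 0.45, "turnover": 0.02}
        lower_cue = dict(base, cue=0.35)          # e.g. warming represented as a CUE decline

        for label, p in (("baseline", base), ("lower CUE", lower_cue)):
            sol = solve_ivp(soil_model, (0.0, 5000.0), [300.0, 5.0],
                            args=(p["inputs"], p["vmax"], p["km"], p["cue"], p["turnover"]),
                            rtol=1e-8)
            print(label, "final substrate C:", round(sol.y[0, -1], 1),
                  "final biomass:", round(sol.y[1, -1], 2))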

  5. Modelling explicit fracture of nuclear fuel pellets using peridynamics

    NASA Astrophysics Data System (ADS)

    Mella, R.; Wenman, M. R.

    2015-12-01

    Three dimensional models of explicit cracking of nuclear fuel pellets for a variety of power ratings have been explored with peridynamics, a non-local, mesh free, fracture mechanics method. These models were implemented in the explicitly integrated molecular dynamics code LAMMPS, which was modified to include thermal strains in solid bodies. The models of fuel fracture, during initial power transients, are shown to correlate with the mean number of cracks observed on the inner and outer edges of the pellet, by experimental post irradiation examination of fuel, for power ratings of 10 and 15 W g-1 UO2. The models of the pellet show the ability to predict expected features such as the mid-height pellet crack, the correct number of radial cracks and initiation and coalescence of radial cracks. This work presents a modelling alternative to empirical fracture data found in many fuel performance codes and requires just one parameter of fracture strain. Weibull distributions of crack numbers were fitted to both numerical and experimental data using maximum likelihood estimation so that statistical comparison could be made. The findings show P-values of less than 0.5% suggesting an excellent agreement between model and experimental distributions.
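
    The Weibull maximum-likelihood step described above can be sketched with SciPy on synthetic crack-count data (the data and the two-sample comparison below are illustrative, not the authors'):

        # Minimal sketch: fit Weibull distributions to modelled and observed crack counts by
        # maximum likelihood and compare the two samples.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        cracks_model = rng.weibull(4.0, size=40) * 6.0        # hypothetical crack counts per pellet
        cracks_experiment = rng.weibull(3.6, size=40) * 6.3

        shape_m, _, scale_m = stats.weibull_min.fit(cracks_model, floc=0)
        shape_e, _, scale_e = stats.weibull_min.fit(cracks_experiment, floc=0)
        print(f"model:      shape={shape_m:.2f} scale={scale_m:.2f}")
        print(f"experiment: shape={shape_e:.2f} scale={scale_e:.2f}")
        print(stats.ks_2samp(cracks_model, cracks_experiment))   # one possible distribution comparison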

  6. Aerosol-cloud interactions in a multi-scale modeling framework

    NASA Astrophysics Data System (ADS)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds and precipitation in a cloud-resolving model (CRM) embedded in each GCM grid column. In the MMF version of the community atmospheric model version 5 (CAM5), aerosol processes are treated with a parameterization called Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach treats clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, we propose here a new aerosol treatment in the MMF: Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. Further, we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than ECPP simulations, because of the more efficient vertical transport from the surface to the higher atmosphere but the less efficient wet removal. We also found that the cloud droplet number concentrations differ between the two simulations due to the difference in the cloud droplet lifetime. Next, we will explore how the ECEP treatment affects the anthropogenic aerosol forcing, particularly the aerosol indirect forcing, by comparing present-day and pre-industrial simulations.

  7. ORILAM, a three-moment lognormal aerosol scheme for mesoscale atmospheric model: Online coupling into the Meso-NH-C model and validation on the Escompte campaign

    NASA Astrophysics Data System (ADS)

    Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert

    2005-09-01

    Classical aerosol schemes use either a sectional (bin) or lognormal approach. Each approach has particular capabilities and interests: the sectional approach is able to describe every kind of distribution, whereas the lognormal one assumes the form of the distribution and therefore requires a smaller number of explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled into three-dimensional mesoscale or chemistry-transport (CTM) models. This paper presents the concept and hypotheses of a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is to keep the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution are explicit) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of one lognormal mode is lower than that of two bins of the sectional approach.
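
    For a lognormal mode, the predicted moments m0, m3 and m6 determine the number concentration, median radius and geometric standard deviation; a short sketch of that bookkeeping (with assumed values) is:

        # Minimal sketch of three-moment lognormal bookkeeping: the k-th radial moment of a
        # lognormal mode is M_k = N * r_g**k * exp(k**2 * ln(sigma_g)**2 / 2), so m0, m3 and m6
        # fix N, r_g and sigma_g.
        import numpy as np

        def lognormal_params(m0, m3, m6):
            ln2sigma = np.log(m6 * m0 / m3**2) / 9.0
            sigma_g = np.exp(np.sqrt(ln2sigma))
            r_g = (m3 / (m0 * np.exp(4.5 * ln2sigma))) ** (1.0 / 3.0)
            return m0, r_g, sigma_g

        # round-trip check with assumed values: N = 1e9 m^-3, r_g = 0.05 um, sigma_g = 1.8
        N, r_g, sigma_g = 1.0e9, 0.05, 1.8
        moments = [N * r_g**k * np.exp(k**2 * np.log(sigma_g)**2 / 2.0) for k in (0, 3, 6)]
        print(lognormal_params(*moments))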

  8. Moving forward socio-economically focused models of deforestation.

    PubMed

    Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno

    2017-09-01

    Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevance of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study aims to propose a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted based on socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in high deforestation areas, suggesting a better integration of socio-economic processes in the models. Whilst considering deforestation as a purely geographical process contributes to the creation of conservative models unable to effectively assess changes in the socio-economic and political contexts influencing deforestation trends, this explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.

  9. A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)

    EPA Science Inventory

    Abstract

    Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...

  10. New explicit global asymptotic stability criteria for higher order difference equations

    NASA Astrophysics Data System (ADS)

    El-Morshedy, Hassan A.

    2007-12-01

    New explicit sufficient conditions for the asymptotic stability of the zero solution of higher order difference equations are obtained. These criteria can be applied to autonomous and nonautonomous equations. The celebrated Clark asymptotic stability criterion is improved. Also, applications to models from mathematical biology and macroeconomics are given.

  11. Explicit Processing Demands Reveal Language Modality-Specific Organization of Working Memory

    ERIC Educational Resources Information Center

    Rudner, Mary; Ronnberg, Jerker

    2008-01-01

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable…

  12. Feasibility of Explicit Instruction in Adult Basic Education: Instructor-Learner Interaction Patterns

    ERIC Educational Resources Information Center

    Mellard, Daryl; Scanlon, David

    2006-01-01

    A strategic instruction model introduced into adult basic education classrooms yields insight into the feasibility of using direct and explicit instruction with adults with learning disabilities or other cognitive barriers to learning. Ecobehavioral assessment was used to describe and compare instructor-learner interaction patterns during learning…

  13. Through the Immune Looking Glass: A Model for Brain Memory Strategies

    PubMed Central

    Sánchez-Ramón, Silvia; Faure, Florence

    2016-01-01

    The immune system (IS) and the central nervous system (CNS) are complex cognitive networks involved in defining the identity (self) of the individual through recognition and memory processes that enable one to anticipate responses to stimuli. Brain memory has traditionally been classified as either implicit or explicit on psychological and anatomical grounds, reminiscent of the evolutionarily based division between innate and adaptive IS responses. Beyond the multineuronal networks of the CNS, we propose a theoretical model of brain memory integrating the CNS as a whole. This is achieved by analogical reasoning between the operational rules of recognition and memory processes in both systems, coupled to an evolutionary analysis. In this new model, the hippocampus is no longer specifically ascribed to explicit memory but rather becomes both part of the innate (implicit) memory system and a tight controller of the explicit memory system. Like the antigen-presenting cells of the IS, the hippocampus would integrate transient and pseudo-specific (i.e., danger-fear) memories and would drive the formation of long-term and highly specific or explicit memories (e.g., the taste of Proust’s madeleine cake) by the more complex and, evolutionarily speaking, more recent neocortex. Experimental and clinical evidence is provided to support the model. We believe that the singularity of this model’s approach could help to gain a better understanding of the mechanisms operating in brain memory strategies from a large-scale network perspective. PMID:26869886

  14. Global motions exhibited by proteins in micro- to milliseconds simulations concur with anisotropic network model predictions

    NASA Astrophysics Data System (ADS)

    Gur, M.; Zomot, E.; Bahar, I.

    2013-09-01

    The Anton supercomputing technology recently developed for efficient molecular dynamics simulations permits us to examine micro- to milli-second events at full atomic resolution for proteins in explicit water and lipid bilayer. It also permits us to investigate to what extent the collective motions predicted by network models (that have found broad use in molecular biophysics) agree with those exhibited by full-atomic long simulations. The present study focuses on Anton trajectories generated for two systems: the bovine pancreatic trypsin inhibitor, and an archaeal aspartate transporter, GltPh. The former, a thoroughly studied system, helps benchmark the method of comparative analysis, and the latter provides new insights into the mechanism of function of glutamate transporters. The principal modes of motion derived from both simulations closely overlap with those predicted for each system by the anisotropic network model (ANM). Notably, the ANM modes define the collective mechanisms, or the pathways on conformational energy landscape, that underlie the passage between the crystal structure and substates visited in simulations. In particular, the lowest frequency ANM modes facilitate the conversion between the most probable substates, lending support to the view that easy access to functional substates is a robust determinant of evolutionarily selected native contact topology.
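
    A minimal ANM sketch of the kind of comparison described above: build the Hessian from C-alpha coordinates, take the low-frequency eigenvectors, and measure their overlap with a displacement vector. The coordinates, cutoff and spring constant below are placeholders.

        # Minimal anisotropic network model (ANM) sketch: pairwise Hookean springs within a
        # cutoff define a 3N x 3N Hessian; its low-frequency eigenvectors are the collective
        # modes compared against, e.g., the displacement toward a simulation substate.
        import numpy as np

        def anm_modes(coords, cutoff=15.0, gamma=1.0, n_modes=10):
            n = len(coords)
            hess = np.zeros((3 * n, 3 * n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = coords[j] - coords[i]
                    r2 = d @ d
                    if r2 > cutoff**2:
                        continue
                    block = -gamma * np.outer(d, d) / r2
                    hess[3*i:3*i+3, 3*j:3*j+3] = block
                    hess[3*j:3*j+3, 3*i:3*i+3] = block
                    hess[3*i:3*i+3, 3*i:3*i+3] -= block
                    hess[3*j:3*j+3, 3*j:3*j+3] -= block
            vals, vecs = np.linalg.eigh(hess)
            return vals[6:6 + n_modes], vecs[:, 6:6 + n_modes]   # skip 6 rigid-body modes

        rng = np.random.default_rng(0)
        xyz_a = rng.uniform(0, 30, size=(60, 3))                 # placeholder "structure"
        xyz_b = xyz_a + 0.5 * rng.standard_normal(xyz_a.shape)   # placeholder "substate"
        freqs, modes = anm_modes(xyz_a)
        dx = (xyz_b - xyz_a).ravel()
        overlaps = np.abs(modes.T @ dx) / np.linalg.norm(dx)     # eigenvectors are unit norm
        print(np.round(overlaps, 3))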

  15. Copper and zinc removal from roof runoff: from research to full-scale adsorber systems.

    PubMed

    Steiner, M; Boller, M

    2006-01-01

    Large, uncoated copper and zinc roofs cause environmental problems if their runoff infiltrates into the ground or is discharged into receiving waters. Since source control is not always feasible, barrier systems for efficient copper and zinc removal are recommended in Switzerland. During the last few years, research was carried out to test the performance of GIH-calcite adsorber filters as a barrier system. Adsorption and mass transport processes were assessed and described in a mathematical model. However, this model is not suitable for practical design, because it does not give explicit access to design parameters such as adsorber diameter and adsorber bed depth. Therefore, an easy-to-use design guideline for GIH-calcite adsorber systems, mainly based on the mathematical model, was developed for practitioners such as engineers. The core of this guideline is the design of the depth of the GIH-calcite adsorber layer. The depth is calculated by adding up the GIH depth for sorption equilibrium and the depth of the mass transfer zone (MTZ). Additionally, the arrangement of other adsorber system components such as particle separation and retention volume was considered in the guideline. Investigations of a full-scale adsorber confirm the successful application of this newly developed design guideline in practice.

  16. Constant pH Molecular Dynamics of Proteins in Explicit Solvent with Proton Tautomerism

    PubMed Central

    Goh, Garrett B.; Hulbert, Benjamin S.; Zhou, Huiqing; Brooks, Charles L.

    2015-01-01

    pH is a ubiquitous regulator of biological activity, including protein folding, protein-protein interactions and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMDMSλD). In the CPHMDMSλD framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMDMSλD simulations model realistic pH-dependent properties of proteins such as hen egg white lysozyme (HEWL), the binding domain of 2-oxoglutarate dehydrogenase (BBL) and the N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMDMSλD framework for nucleic acids, accurate modeling of the pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. PMID:24375620

  17. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
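
    A generic varying-intercept example of the partial pooling described above (notation assumed, not the paper's specific models):

        \[
        y_{ij} \sim \mathcal{N}\!\left(\alpha_{j} + \beta\, x_{ij},\ \sigma_y^{2}\right),
        \qquad
        \alpha_{j} \sim \mathcal{N}\!\left(\gamma_{0} + \gamma_{1} z_{j},\ \sigma_\alpha^{2}\right),
        \]

    where i indexes observations within group j, x is an observation-level predictor, and z is a group-level predictor that lets covariates enter at a coarser spatiotemporal scale. The between-group variance sigma_alpha controls the degree of pooling: as it approaches zero the model collapses to complete pooling, and as it grows large it approaches separate, unpooled group fits.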

  18. Comparison of multi-fluid moment models with particle-in-cell simulations of collisionless magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Liang, E-mail: liang.wang@unh.edu; Germaschewski, K.; Hakim, Ammar H.

    2015-01-15

    We introduce an extensible multi-fluid moment model in the context of collisionless magnetic reconnection. This model evolves the full Maxwell equations and, simultaneously, moments of the Vlasov-Maxwell equation for each species in the plasma. Effects like electron inertia and the pressure gradient are self-consistently embedded in the resulting multi-fluid moment equations, without the need to explicitly solve a generalized Ohm's law. Two limits of the multi-fluid moment model are discussed, namely, the five-moment limit that evolves a scalar pressure for each species and the ten-moment limit that evolves the full anisotropic, non-gyrotropic pressure tensor for each species. We first demonstrate analytically and numerically that the five-moment model reduces to the widely used Hall magnetohydrodynamics (Hall MHD) model under the assumptions of vanishing electron inertia, infinite speed of light, and quasi-neutrality. Then, we compare ten-moment and fully kinetic particle-in-cell (PIC) simulations of a large-scale Harris sheet reconnection problem, where the ten-moment equations are closed with a local linear collisionless approximation for the heat flux. The ten-moment simulation gives reasonable agreement with the PIC results regarding the structures and magnitudes of the electron flows, the polarities and magnitudes of elements of the electron pressure tensor, and the decomposition of the generalized Ohm's law. Possible ways to improve the simple local closure towards a nonlocal fully three-dimensional closure are also discussed.

  19. Towards a more efficient and robust representation of subsurface hydrological processes in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.

    2017-12-01

    Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of our developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness") to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while deviations from the full 3D model remain small for these processes.

  20. Using data to inform soil microbial carbon model structure and parameters

    NASA Astrophysics Data System (ADS)

    Hagerty, S. B.; Schimel, J.

    2016-12-01

    There is increasing consensus that explicitly representing microbial mechanisms in soil carbon models can improve model predictions of future soil carbon stocks. However, which microbial mechanisms must be represented in these new models, and how, remains under debate. One of the major challenges in developing microbially explicit soil carbon models is that there is little data available to validate model structure. Empirical studies of microbial mechanisms often fail to capture the full range of microbial processes, from the cellular processes that occur within minutes to hours of substrate consumption to community turnover, which may occur over weeks or longer. We added isotopically labeled 14C-glucose to soil incubated in the lab and traced its movement into the microbial biomass, carbon dioxide, and the K2SO4-extractable carbon pool. We measured the concentration of 14C in each of these pools at 1, 3, 6, 24, and 72 hours and at 7, 14, and 21 days. We used these data to compare fits among models that match our conceptual understanding of microbial carbon transformations and to estimate microbial parameters that control the fate of soil carbon. Over 90% of the added glucose was consumed within the first hour after addition, and the concentration of the label was highest in biomass at this time. After the first hour, the label in biomass declined, with the rate at which the label moved out of the biomass slowing after 24 hours; because of this, models representing the microbial biomass as two pools fit best. Recovery of the label decreased with incubation time, from nearly 80% in the first hour to 67% after three weeks, indicating that carbon moves into unextractable pools in the soil, likely as microbial products and necromass sorb to soil particles, and that these mechanisms must be represented in microbial models. This data-fitting exercise demonstrates how isotopic data can be useful in validating model structure and estimating microbial model parameters. Future studies can apply this inverse modeling approach to compare the response of microbial parameters to changes in environmental conditions.
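
    The inverse-modelling step can be sketched as a least-squares fit of a hypothetical two-pool biomass-label model to a synthetic time series (the pool structure, data, and rate constants below are assumptions, not the study's):

        # Minimal sketch: labelled C in biomass is modelled as a fast pool that transfers to a
        # slow pool, both losing label over time, and the rate constants are estimated by least
        # squares against the observed time series. The closed form assumes
        # k_fast + transfer != k_slow.
        import numpy as np
        from scipy.optimize import least_squares

        t_obs = np.array([1, 3, 6, 24, 72, 168, 336, 504], dtype=float)    # hours
        c14_obs = np.array([78, 70, 63, 48, 36, 30, 26, 23], dtype=float)  # % of label in biomass (synthetic)
        fast0 = 80.0                                                       # assumed initial label in biomass (%)

        def biomass_label(params, t):
            k_fast, k_slow, transfer = params
            fast = fast0 * np.exp(-(k_fast + transfer) * t)
            slow = fast0 * transfer / (k_fast + transfer - k_slow) * (
                np.exp(-k_slow * t) - np.exp(-(k_fast + transfer) * t))
            return fast + slow

        def residuals(params):
            return biomass_label(params, t_obs) - c14_obs

        fit = least_squares(residuals, x0=[0.05, 0.001, 0.01], bounds=(0, 1))
        print("k_fast, k_slow, transfer =", np.round(fit.x, 4))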

  1. Rapid computational identification of the targets of protein kinase inhibitors.

    PubMed

    Rockey, William M; Elcock, Adrian H

    2005-06-16

    We describe a method for rapidly computing the relative affinities of an inhibitor for all individual members of a family of homologous receptors. The approach, implemented in a new program, SCR, models inhibitor-receptor interactions in full atomic detail with an empirical energy function and includes an explicit account of flexibility in homology-modeled receptors through sampling of libraries of side chain rotamers. SCR's general utility was demonstrated by application to seven different protein kinase inhibitors: for each inhibitor, relative binding affinities with panels of approximately 20 protein kinases were computed and compared with experimental data. For five of the inhibitors (SB203580, purvalanol B, imatinib, H89, and hymenialdisine), SCR provided excellent reproduction of the experimental trends and, importantly, was capable of identifying the targets of inhibitors even when they belonged to different kinase families. The method's performance in a predictive setting was demonstrated by performing separate training and testing applications, and its key assumptions were tested by comparison with a number of alternative approaches employing the ligand-docking program AutoDock (Morris et al. J. Comput. Chem. 1998, 19, 1639-1662). These comparison tests included using AutoDock in nondocking and docking modes and performing energy minimizations of inhibitor-kinase complexes with the molecular mechanics code GROMACS (Berendsen et al. Comput. Phys. Commun. 1995, 91, 43-56). It was found that a surprisingly important aspect of SCR's approach is its assumption that the inhibitor be modeled in the same orientation for each kinase: although this assumption is in some respects unrealistic, calculations that used apparently more realistic approaches produced clearly inferior results. Finally, as a large-scale application of the method, SB203580, purvalanol B, and imatinib were screened against an almost full complement of 493 human protein kinases using SCR in order to identify potential new targets; the predicted targets of SB203580 were compared with those identified in recent proteomics-based experiments. These kinome-wide screens, performed within a day on a small cluster of PCs, indicate that explicit computation of inhibitor-receptor binding affinities has the potential to promote rapid discovery of new therapeutic targets for existing inhibitors.

  2. Challenges and strategies for effectively teaching the nature of science: A qualitative case study

    NASA Astrophysics Data System (ADS)

    Koehler, Catherine M.

    This year-long, qualitative case study examines two experienced high school biology teachers as they facilitated nature of science (NOS) understandings in their classrooms. This study explored three research questions: (1) In what ways do experienced teachers' conceptions of NOS evolve over one full year as a result of participating in a course that explicitly addresses NOS teaching and learning? (2) In what ways do experienced teachers' pedagogical practices evolve over one full year as a result of participating in a course that explicitly addresses NOS teaching and learning? and (3) What are the challenges facing experienced teachers in their attempts to implement NOS understandings in their high school science classrooms? This study was conducted in two parts. In Part I (fall 2004 semester), the participants were enrolled in a graduate course titled Teaching the Nature of Science, where they were (1) introduced to NOS, (2) introduced to a strategy, the Model for Teaching NOS (MTNOS), which helped them facilitate teaching NOS understandings through inquiry-based activities, and (3) engaged in "real" science activities that reinforced their conceptions of NOS. In Part II (spring 2005 semester), classroom observations were made to uncover how these teachers implemented inquiry-based activities emphasizing NOS understanding in their classrooms. Their conceptions of NOS were measured using the Views of the Nature of Science questionnaire. Results demonstrated that each teacher's conceptions of NOS shifted slightly during the course of the study, but, for one, this was not a permanent shift. Over the year, one teacher's pedagogical practices changed to include inquiry-based lessons using MTNOS; the other, although very amenable to using prepared inquiry-based lessons, did not change her pedagogical practices. Both reported similar challenges while facilitating NOS understanding. The most significant challenges included (1) time management, (2) the perception that NOS was a content area, and (3) using an inquiry-based model in their classrooms. This study describes a curricular and pedagogical model for lesson planning and implementation of inquiry-based activities that promotes NOS understandings in the classroom. It defines the challenges encountered while fostering these understandings, and suggests that NOS needs to be integrated across the educational life span of all students.

  3. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
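
    The following sketch is not the paper's diffusion-wavelet construction; it only illustrates the underlying idea that the graph Laplacian of a linear homopolymer (beads as vertices, bonds as edges) supplies a system-specific hierarchy of coarse collective coordinates. All names, and the use of plain Laplacian eigenvectors instead of diffusion wavelets, are simplifying assumptions.

```python
# Illustrative sketch (not the paper's diffusion-wavelet algorithm): build the
# graph Laplacian of a linear homopolymer whose bonds define the graph edges,
# and use its low-frequency eigenvectors as progressively coarser collective
# coordinates. Diffusion wavelets would instead compress powers of a diffusion
# operator such as T = I - L/lambda_max, but the Laplacian spectrum conveys the idea.
import numpy as np

def path_graph_laplacian(n_beads):
    """Laplacian L = D - A of a linear chain with n_beads vertices."""
    A = np.zeros((n_beads, n_beads))
    idx = np.arange(n_beads - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def coarse_modes(n_beads, n_modes):
    """Return the n_modes slowest (smallest-eigenvalue) Laplacian eigenvectors."""
    evals, evecs = np.linalg.eigh(path_graph_laplacian(n_beads))
    return evecs[:, :n_modes]          # columns: constant mode, then long-wavelength modes

# Project bead positions (n_beads, 3) onto the coarse modes to obtain a
# reduced, system-specific set of degrees of freedom.
positions = np.cumsum(np.random.randn(64, 3), axis=0)   # toy random-walk polymer
cg_coords = coarse_modes(64, 8).T @ positions            # (8, 3) coarse coordinates
print(cg_coords.shape)
```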

  4. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than observed.

  5. Medical School Factors Associated with Changes in Implicit and Explicit Bias Against Gay and Lesbian People among 3492 Graduating Medical Students.

    PubMed

    Phelan, Sean M; Burke, Sara E; Hardeman, Rachel R; White, Richard O; Przedworski, Julia; Dovidio, John F; Perry, Sylvia P; Plankey, Michael; A Cunningham, Brooke; Finstad, Deborah; W Yeazel, Mark; van Ryn, Michelle

    2017-11-01

    Implicit and explicit bias among providers can influence the quality of healthcare. Efforts to address sexual orientation bias in new physicians are hampered by a lack of knowledge of school factors that influence bias among students. To determine whether medical school curriculum, role modeling, diversity climate, and contact with sexual minorities predict bias among graduating students against gay and lesbian people. Prospective cohort study. A sample of 4732 first-year medical students was recruited from a stratified random sample of 49 US medical schools in the fall of 2010 (81% response; 55% of eligible), of which 94.5% (4473) identified as heterosexual. Seventy-eight percent of baseline respondents (3492) completed a follow-up survey in their final semester (spring 2014). Medical school predictors included formal curriculum, role modeling, diversity climate, and contact with sexual minorities. Outcomes were year 4 implicit and explicit bias against gay men and lesbian women, adjusted for bias at year 1. In multivariate models, lower explicit bias against gay men and lesbian women was associated with more favorable contact with LGBT faculty, residents, students, and patients, and perceived skill and preparedness for providing care to LGBT patients. Greater explicit bias against lesbian women was associated with discrimination reported by sexual minority students (b = 1.43 [0.16, 2.71]; p = 0.03). Lower implicit sexual orientation bias was associated with more frequent contact with LGBT faculty, residents, students, and patients (b = -0.04 [-0.07, -0.01]; p = 0.008). Greater implicit bias was associated with more faculty role modeling of discriminatory behavior (b = 0.34 [0.11, 0.57]; p = 0.004). Medical schools may reduce bias against sexual minority patients by reducing negative role modeling, improving the diversity climate, and improving student preparedness to care for this population.

  6. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
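
    As a point of reference for the multiple-image method mentioned above, the classical single-image (Friedman-type) approximation for a charge inside a spherical dielectric cavity can be written in a few lines; the paper's method uses a more accurate multiple-image expansion, so the sketch below is only meant to show the geometry and scaling of the image construction.

```python
# Sketch of the classical single-image (Friedman-type) approximation that
# underlies multiple-image reaction-field methods: a charge q at distance s
# from the center of a cavity of radius a (permittivity eps_in inside,
# eps_out outside) gets an image charge at the Kelvin point a**2/s.
# The paper uses a more accurate multiple-image expansion; this one-image
# version only illustrates the geometry and scaling.
import numpy as np

def single_image_charge(q, r_source, a, eps_in=1.0, eps_out=80.0):
    """Return (q_image, r_image) for a charge q at position r_source (|r_source| < a)."""
    s = np.linalg.norm(r_source)
    gamma = (eps_out - eps_in) / (eps_out + eps_in)
    q_im = -gamma * (a / s) * q                        # image magnitude
    r_im = (a ** 2 / s ** 2) * np.asarray(r_source)    # image position along the same ray
    return q_im, r_im

def reaction_potential(r_eval, q, r_source, a, eps_in=1.0, eps_out=80.0):
    """Approximate reaction-field potential at r_eval due to the image charge (Gaussian units)."""
    q_im, r_im = single_image_charge(q, r_source, a, eps_in, eps_out)
    return q_im / (eps_in * np.linalg.norm(np.asarray(r_eval) - r_im))

# Example: unit charge 10 A from the center of a 20 A cavity, evaluated at the center.
print(reaction_potential([0.0, 0.0, 0.0], 1.0, [10.0, 0.0, 0.0], 20.0))
```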

  7. Dynamical simulation priors for human motion tracking.

    PubMed

    Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke

    2013-01-01

    We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.

  8. Quantum state engineering in hybrid open quantum systems

    NASA Astrophysics Data System (ADS)

    Joshi, Chaitanya; Larson, Jonas; Spiller, Timothy P.

    2016-04-01

    We investigate the possibility of generating nonclassical states in light-matter coupled noisy quantum systems, namely, the anisotropic Rabi and Dicke models. In these hybrid quantum systems, a competing influence of coherent internal dynamics and environment-induced dissipation drives the system into nonequilibrium steady states (NESSs). Explicitly, for the anisotropic Rabi model, the steady state is given by an incoherent mixture of two states of opposite parities, but as each parity state displays light-matter entanglement, we also find that the full state is entangled. Furthermore, as a natural extension of the anisotropic Rabi model to an infinite spin subsystem, we next explore the NESS of the anisotropic Dicke model. The NESS of this linearized Dicke model is also an inseparable state of light and matter. With the aim of enriching the dynamics beyond the sustainable entanglement found for the NESS of these hybrid quantum systems, we also propose an all-optical feedback strategy for quantum state protection and for establishing quantum control in these systems. Our present work further elucidates the relevance of such hybrid open quantum systems for potential applications in quantum architectures.
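
    A minimal numerical sketch of how such a nonequilibrium steady state can be obtained with QuTiP is given below, assuming photon loss and qubit decay as the only dissipation channels; the Hamiltonian parameters and dissipation rates are illustrative and are not taken from the paper.

```python
# Minimal QuTiP sketch: steady state of an anisotropic Rabi model with photon
# loss (rate kappa) and qubit decay (rate gamma). Parameters are illustrative.
from qutip import destroy, qeye, sigmaz, sigmam, sigmap, tensor, steadystate, expect

N = 20                                  # photon Fock-space truncation
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), sigmam())
sp = tensor(qeye(N), sigmap())
sz = tensor(qeye(N), sigmaz())

wc, wq = 1.0, 1.0                       # cavity and qubit frequencies
g1, g2 = 0.3, 0.1                       # rotating / counter-rotating couplings (anisotropy)
kappa, gamma = 0.05, 0.02               # dissipation rates

H = (wc * a.dag() * a + 0.5 * wq * sz
     + g1 * (a.dag() * sm + a * sp)     # rotating (Jaynes-Cummings) terms
     + g2 * (a.dag() * sp + a * sm))    # counter-rotating terms

c_ops = [kappa ** 0.5 * a, gamma ** 0.5 * sm]
rho_ss = steadystate(H, c_ops)

print("steady-state photon number:", expect(a.dag() * a, rho_ss))
print("steady-state <sigma_z>:", expect(sz, rho_ss))
```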

  9. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straight-forward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Impact of charge transport on current–voltage characteristics and power-conversion efficiency of organic solar cells

    PubMed Central

    Würfel, Uli; Neher, Dieter; Spies, Annika; Albrecht, Steve

    2015-01-01

    This work elucidates the impact of charge transport on the photovoltaic properties of organic solar cells. Here we show that the analysis of current–voltage curves of organic solar cells under illumination with the Shockley equation results in values for ideality factor, photocurrent and parallel resistance, which lack physical meaning. Drift-diffusion simulations for a wide range of charge-carrier mobilities and illumination intensities reveal significant carrier accumulation caused by poor transport properties, which is not included in the Shockley equation. As a consequence, the separation of the quasi Fermi levels in the organic photoactive layer (internal voltage) differs substantially from the external voltage for almost all conditions. We present a new analytical model, which considers carrier transport explicitly. The model shows excellent agreement with full drift-diffusion simulations over a wide range of mobilities and illumination intensities, making it suitable for realistic efficiency predictions for organic solar cells. PMID:25907581
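
    For context, the Shockley-equation description whose limitations are discussed above can be written down and evaluated directly; the sketch below computes an illuminated J-V curve and the derived figures of merit for purely illustrative parameter values.

```python
# Sketch of the standard Shockley-equation description of an illuminated solar
# cell, J(V) = J0*(exp(qV/(n*kB*T)) - 1) - Jph, and the figures of merit derived
# from it. The abstract's point is that fitted J0, n and Jph lose physical
# meaning when transport is poor; the numbers below are purely illustrative.
import numpy as np

q, kB, T = 1.602e-19, 1.381e-23, 300.0
Vt = kB * T / q                         # thermal voltage (~25.9 mV)

def shockley_current(V, J0=1e-7, n=1.2, Jph=200.0):
    """Current density in A/m^2 (negative = power-generating quadrant)."""
    return J0 * (np.exp(V / (n * Vt)) - 1.0) - Jph

V = np.linspace(0.0, 0.8, 2000)
J = shockley_current(V)

Jsc = -shockley_current(0.0)            # short-circuit current density
Voc = V[np.argmin(np.abs(J))]           # open-circuit voltage (numerical)
P = -J * V                              # generated power density
Pmax = P.max()
FF = Pmax / (Jsc * Voc)                 # fill factor
PCE = Pmax / 1000.0                     # efficiency at 1 sun (1000 W/m^2)

print(f"Jsc = {Jsc:.1f} A/m^2, Voc = {Voc:.3f} V, FF = {FF:.2f}, PCE = {PCE:.2%}")
```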

  11. Theory of resonant x-ray emission spectra in compounds with localized f electrons

    NASA Astrophysics Data System (ADS)

    Kolorenč, Jindřich

    2018-05-01

    I discuss a theoretical description of the resonant x-ray emission spectroscopy (RXES) that is based on the Anderson impurity model. The parameters entering the model are determined from material-specific LDA+DMFT calculations. The theory is applicable across the whole f series, not only in the limits of nearly empty (La, Ce) or nearly full (Yb) valence f shell. Its performance is illustrated on the pressure-enhanced intermediate valency of elemental praseodymium. The obtained results are compared to the usual interpretation of RXES, which assumes that the spectrum is a superposition of several signals, each corresponding to one configuration of the 4f shell. The present theory simplifies to such superposition only if nearly all effects of hybridization of the 4f shell with the surrounding states are neglected. Although the assumption of negligible hybridization sounds reasonable for lanthanides, the explicit calculations show that it substantially distorts the analysis of the RXES data.

  12. Computing the Absorption and Emission Spectra of 5-Methylcytidine in Different Solvents: A Test-Case for Different Solvation Models.

    PubMed

    Martínez-Fernández, L; Pepino, A J; Segarra-Martí, J; Banyasz, A; Garavelli, M; Improta, R

    2016-09-13

    The optical spectra of 5-methylcytidine in three different solvents (tetrahydrofuran, acetonitrile, and water) are measured, showing that both the absorption and the emission maximum in water are significantly blue-shifted (0.08 eV). The absorption spectra are simulated based on CAM-B3LYP/TD-DFT calculations, including solvent effects with three different approaches: (i) a hybrid implicit/explicit full quantum mechanical approach, (ii) a mixed QM/MM static approach, and (iii) a QM/MM method exploiting the structures issuing from classical molecular dynamics simulations. Ab initio molecular dynamics simulations based on CAM-B3LYP functionals have also been performed. The adopted approaches all reproduce the main features of the experimental spectra, giving insight into the chemical-physical effects responsible for the solvent shifts in the spectra of 5-methylcytidine and providing the basis for discussing advantages and limitations of the adopted solvation models.

  13. Simulations of carbon fiber composite delamination tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, G

    2007-10-25

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being used in the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing these elements to represent the bond failures that occur during penetration events was not established.
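
    The decohesive (cohesive-zone) elements referred to above are commonly formulated around a bilinear traction-separation law; the generic sketch below illustrates such a law and its relation to the mode I fracture toughness. The parameter values are illustrative and are not the BMS 8-212 material inputs used in the study.

```python
# Generic bilinear traction-separation (cohesive) law of the kind used by
# decohesive elements for interlaminar failure: linear loading up to a peak
# traction t_max at separation d0, then linear softening to zero at df.
# The area under the curve equals the mode I fracture toughness G_Ic.
# Parameters here are illustrative, not the values from the study.
import numpy as np

def bilinear_traction(delta, t_max=60.0e6, G_Ic=500.0):
    """Traction (Pa) vs opening separation delta (m) for a bilinear cohesive law."""
    df = 2.0 * G_Ic / t_max             # final separation from the toughness
    d0 = df / 50.0                      # separation at peak traction (stiffness choice)
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= d0,
                 t_max * delta / d0,                    # elastic branch
                 t_max * (df - delta) / (df - d0))      # softening branch
    return np.clip(t, 0.0, None)                        # fully failed beyond df

# Sanity check: integrating the law recovers G_Ic (~500 J/m^2).
d = np.linspace(0.0, 2.0 * 500.0 / 60.0e6, 10001)
print(np.trapz(bilinear_traction(d), d))
```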

  14. Scalable free energy calculation of proteins via multiscale essential sampling

    NASA Astrophysics Data System (ADS)

    Moritsugu, Kei; Terada, Tohru; Kidera, Akinori

    2010-12-01

    A multiscale simulation method, "multiscale essential sampling (MSES)," is proposed for calculating free energy surface of proteins in a sizable dimensional space with good scalability. In MSES, the configurational sampling of a full-dimensional model is enhanced by coupling with the accelerated dynamics of the essential degrees of freedom. Applying the Hamiltonian exchange method to MSES can remove the biasing potential from the coupling term, deriving the free energy surface of the essential degrees of freedom. The form of the coupling term ensures good scalability in the Hamiltonian exchange. As a test application, the free energy surface of the folding process of a miniprotein, chignolin, was calculated in the continuum solvent model. Results agreed with the free energy surface derived from the multicanonical simulation. Significantly improved scalability with the MSES method was clearly shown in the free energy calculation of chignolin in explicit solvent, which was achieved without increasing the number of replicas in the Hamiltonian exchange.
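
    The Hamiltonian (replica) exchange step mentioned above relies on a Metropolis acceptance test between pairs of replicas; the sketch below shows the textbook criterion in its generic form, not the MSES-specific coupling-term implementation.

```python
# Generic Metropolis acceptance test for a Hamiltonian-exchange (Hamiltonian
# replica exchange) swap between replicas i and j at a common inverse
# temperature beta. H_i and H_j are the two Hamiltonians (callables on a
# configuration); x_i and x_j are the current configurations. This is the
# textbook criterion, not the MSES-specific coupling-term implementation.
import math, random

def accept_hamiltonian_swap(H_i, H_j, x_i, x_j, beta):
    delta = beta * ((H_i(x_j) + H_j(x_i)) - (H_i(x_i) + H_j(x_j)))
    return delta <= 0.0 or random.random() < math.exp(-delta)

# Example with two harmonic "Hamiltonians" differing in stiffness.
H_soft = lambda x: 0.5 * 1.0 * x * x
H_stiff = lambda x: 0.5 * 4.0 * x * x
print(accept_hamiltonian_swap(H_soft, H_stiff, x_i=1.2, x_j=0.3, beta=1.0))
```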

  15. Age-related differences in brain activity during implicit and explicit processing of fearful facial expressions.

    PubMed

    Zsoldos, Isabella; Cousin, Emilie; Klein-Koerkamp, Yanica; Pichat, Cédric; Hot, Pascal

    2016-11-01

    Age-related differences in neural correlates underlying implicit and explicit emotion processing are unclear. Within the framework of the Frontoamygdalar Age-related Differences in Emotion model (St Jacques et al., 2009), our objectives were to examine the behavioral and neural modifications that occur with age for both processes. During explicit and implicit processing of fearful faces, we expected to observe less amygdala activity in older adults (OA) than in younger adults (YA), associated with poorer recognition performance in the explicit task, and more frontal activity during implicit processing, suggesting compensation. At a behavioral level, explicit recognition of fearful faces was impaired in OA compared with YA. We did not observe any cerebral differences between OA and YA during the implicit task, whereas in the explicit task, OA recruited more frontal, parietal, temporal, occipital, and cingulate areas. Our findings suggest that automatic processing of emotion may be preserved during aging, whereas deliberate processing is impaired. Additional neural recruitment in OA did not appear to compensate for their behavioral deficits. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. New Interoperable Tools to Facilitate Decision-Making to Support Community Sustainability

    EPA Science Inventory

    Communities, regional planning authorities, regulatory agencies, and other decision-making bodies do not currently have adequate access to spatially explicit information crucial to making decisions that allow them to consider a full accounting of the costs, benefits, and trade-of...

  17. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating either the one-dimensional shallow-water equations or the three-dimensional primitive equations. Since the technique is explicit and uses only two time levels, it conserves computer and programming resources.
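
    A one-dimensional illustration of the mixed scheme described above is sketched below: forward (Euler) differencing of the time tendency, an upstream difference for the advective term, and a central difference for the remaining term, taken here to be diffusion. The grid, parameters, and scalar test equation are illustrative rather than the mesoscale system of the paper.

```python
# 1D illustration of the mixed explicit scheme: forward-in-time tendency,
# upstream (upwind) advection, central differencing for the remaining term
# (taken here to be diffusion). Stability requires the usual CFL-type limits.
import numpy as np

nx, dx, dt = 200, 1.0e3, 2.0          # grid points, spacing (m), time step (s)
u, K = 10.0, 50.0                     # advection speed (m/s), diffusivity (m^2/s)
assert abs(u) * dt / dx <= 1.0 and 2.0 * K * dt / dx ** 2 <= 1.0  # conditional stability

x = np.arange(nx) * dx
phi = np.exp(-((x - 50.0e3) / 10.0e3) ** 2)   # initial Gaussian tracer

for _ in range(1000):
    # upstream (backward) difference for u > 0; use a forward difference if u < 0
    adv = -u * (phi - np.roll(phi, 1)) / dx
    # central difference for the diffusion term
    dif = K * (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx ** 2
    phi = phi + dt * (adv + dif)               # forward (Euler) time step

print("tracer max after integration:", phi.max())
```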

  18. Initialization and assimilation of cloud and rainwater in a regional model

    NASA Technical Reports Server (NTRS)

    Raymond, William H.; Olson, William S.

    1990-01-01

    The initialization and assimilation of cloud and rainwater quantities in a mesoscale regional model were examined. Forecasts of explicit cloud and rainwater are made using conservation equations. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. These physical processes, some of which are parameterized, represent source and sink terms in the conservation equations. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed.
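
    The source and sink terms listed above are commonly parameterized with Kessler-type warm-rain formulas; the sketch below advances cloud water and rainwater with such schematic terms. The coefficients follow commonly quoted Kessler (1969) defaults, the fallout term is a crude constant-rate sink, and condensation/evaporation are omitted, so this is illustrative only and not the parameterization used in the paper.

```python
# Schematic Kessler-type warm-rain source/sink terms of the sort referred to
# above (autoconversion of cloud water to rain, accretion of cloud water by
# rain, and removal of rain by fallout), advanced with a forward time step.
# Coefficients follow commonly quoted Kessler (1969) defaults and are
# illustrative only; condensation and evaporation are omitted here.
import numpy as np

def kessler_step(qc, qr, dt, k1=1.0e-3, qc0=5.0e-4, k2=2.2, fallout_rate=1.0e-3):
    """One explicit update of cloud water qc and rainwater qr (kg/kg)."""
    auto = k1 * np.maximum(qc - qc0, 0.0)               # autoconversion: cloud -> rain
    accr = k2 * qc * np.maximum(qr, 0.0) ** 0.875       # accretion: cloud collected by rain
    fall = fallout_rate * qr                            # crude sink standing in for sedimentation
    qc_new = np.maximum(qc + dt * (-auto - accr), 0.0)
    qr_new = np.maximum(qr + dt * (auto + accr - fall), 0.0)
    return qc_new, qr_new

qc, qr = 1.2e-3, 1.0e-5
for _ in range(600):                                    # 10 minutes with dt = 1 s
    qc, qr = kessler_step(qc, qr, dt=1.0)
print(f"qc = {qc:.2e} kg/kg, qr = {qr:.2e} kg/kg")
```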

  19. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  20. Toward an Optimal Pedagogy for Teamwork.

    PubMed

    Earnest, Mark A; Williams, Jason; Aagaard, Eva M

    2017-10-01

    Teamwork and collaboration are increasingly listed as core competencies for undergraduate health professions education. Despite the clear mandate for teamwork training, the optimal method for providing that training is much less certain. In this Perspective, the authors propose a three-level classification of pedagogical approaches to teamwork training based on the presence of two key learning factors: interdependent work and explicit training in teamwork. In this classification framework, level 1 (minimal team learning) is where learners work in small groups but neither of the key learning factors is present. Level 2 (implicit team learning) engages learners in interdependent learning activities but does not include an explicit focus on teamwork. Level 3 (explicit team learning) creates environments where teams work interdependently toward common goals and are given explicit instruction and practice in teamwork. The authors provide examples that demonstrate each level. They then propose that the third level of team learning, explicit team learning, represents a best practice approach in teaching teamwork, highlighting their experience with an explicit team learning course at the University of Colorado Anschutz Medical Campus. Finally, they discuss several challenges to implementing explicit team-learning-based curricula: the lack of a common teamwork model on which to anchor such a curriculum; the question of whether the knowledge, skills, and attitudes acquired during training would be transferable to the authentic clinical environment; and effectively evaluating the impact of explicit team learning.

  1. Best Practices for Crash Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.

    2002-01-01

    Aviation safety can be greatly enhanced by the expeditious use of computer simulations of crash impact. Unlike automotive impact testing, which is now routine, experimental crash tests of even small aircraft are expensive and complex due to the high cost of the aircraft and the myriad of crash impact conditions that must be considered. Ultimately, the goal is to utilize full-scale crash simulations of aircraft for design evaluation and certification. The objective of this publication is to describe "best practices" for modeling aircraft impact using explicit nonlinear dynamic finite element codes such as LS-DYNA, DYNA3D, and MSC.Dytran. Although "best practices" is somewhat relative, it is hoped that the authors' experience will help others to avoid some of the common pitfalls in modeling that are not documented in one single publication. In addition, a discussion of experimental data analysis, digital filtering, and test-analysis correlation is provided. Finally, some examples of aircraft crash simulations are described in several appendices following the main report.
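
    The digital filtering mentioned above is typically a low-pass filter applied to accelerometer traces before test-analysis correlation; the sketch below uses a zero-phase Butterworth filter as a simple stand-in for the channel-frequency-class (CFC) filters commonly prescribed for impact test data, with an idealized crash pulse and illustrative cutoff and sample rate.

```python
# Minimal sketch of low-pass filtering of a crash-test acceleration trace
# before test-analysis correlation, using a zero-phase Butterworth filter.
# Channel-frequency-class (CFC) filters such as those in SAE J211 are the
# usual practice; the 4th-order Butterworth below is a simple stand-in and
# the cutoff and sample rate are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10000.0                          # sample rate (Hz)
t = np.arange(0.0, 0.2, 1.0 / fs)     # 200 ms record
pulse = 30.0 * np.exp(-((t - 0.05) / 0.01) ** 2)          # idealized crash pulse (g)
raw = pulse + 5.0 * np.sin(2 * np.pi * 2000.0 * t)        # add high-frequency ringing

def lowpass(signal, cutoff_hz, fs, order=4):
    b, a = butter(order, cutoff_hz / (0.5 * fs), btype="low")
    return filtfilt(b, a, signal)     # zero-phase filtering (no time shift)

filtered = lowpass(raw, cutoff_hz=100.0, fs=fs)
print("peak raw = %.1f g, peak filtered = %.1f g" % (raw.max(), filtered.max()))
```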

  2. Temperature-Dependent Implicit-Solvent Model of Polyethylene Glycol in Aqueous Solution.

    PubMed

    Chudoba, Richard; Heyda, Jan; Dzubiella, Joachim

    2017-12-12

    A temperature (T)-dependent coarse-grained (CG) Hamiltonian of polyethylene glycol/oxide (PEG/PEO) in aqueous solution is reported to be used in implicit-solvent material models in a wide temperature (i.e., solvent quality) range. The T-dependent nonbonded CG interactions are derived from a combined "bottom-up" and "top-down" approach. The pair potentials calculated from atomistic replica-exchange molecular dynamics simulations in combination with the iterative Boltzmann inversion are postrefined by benchmarking to experimental data of the radius of gyration. For better handling and a fully continuous transferability in T-space, the pair potentials are conveniently truncated and mapped to an analytic formula with three structural parameters expressed as explicit continuous functions of T. It is then demonstrated that this model without further adjustments successfully reproduces other experimentally known key thermodynamic properties of semidilute PEG solutions such as the full equation of state (i.e., T-dependent osmotic pressure) for various chain lengths as well as their cloud point (or collapse) temperature.

  3. Spatially explicit shallow landslide susceptibility mapping over large areas

    Treesearch

    Dino Bellugi; William E. Dietrich; Jonathan Stock; Jim McKean; Brian Kazian; Paul Hargrove

    2011-01-01

    Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event and to do so...

  4. Evaluating spatially explicit burn probabilities for strategic fire management planning

    Treesearch

    C. Miller; M.-A. Parisien; A. A. Ager; M. A. Finney

    2008-01-01

    Spatially explicit information on the probability of burning is necessary for virtually all strategic fire and fuels management planning activities, including conducting wildland fire risk assessments, optimizing fuel treatments, and prevention planning. Predictive models providing a reliable estimate of the annual likelihood of fire at each point on the landscape have...

  5. Beyond the Sponge Model: Encouraging Students' Questioning Skills in Abnormal Psychology.

    ERIC Educational Resources Information Center

    Keeley, Stuart M.; Ali, Rahan; Gebing, Tracy

    1998-01-01

    Argues that educators should provide students with explicit training in asking critical questions. Describes a training strategy taught in abnormal psychology courses at Bowling Green State University (Ohio). Based on a pre- and post-test, results support the promise of using explicit questioning training in promoting the evaluative aspects of…

  6. A Conceptual Model for the Design and Delivery of Explicit Thinking Skills Instruction

    ERIC Educational Resources Information Center

    Kassem, Cherrie L.

    2005-01-01

    Developing student thinking skills is an important goal for most educators. However, due to time constraints and weighty content standards, thinking skills instruction is often embedded in subject matter, implicit and incidental. For best results, thinking skills instruction requires a systematic design and explicit teaching strategies. The…

  7. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to Multi-D EM PIC: Vlasov-Darwin model (review and motivation for Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.

  8. The Full Scope of Family Physicians' Work Is Not Reflected by Current Procedural Terminology Codes.

    PubMed

    Young, Richard A; Burge, Sandy; Kumar, Kaparaboyna Ashok; Wilson, Jocelyn

    2017-01-01

    The purpose of this study was to characterize the content of family physician (FP) clinic encounters, and to count the number of visits in which the FPs addressed issues not explicitly reportable by 99211 to 99215 and 99354 Current Procedural Terminology (CPT) codes with current reimbursement methods and based on examples provided in the CPT manual. The data collection instrument was modeled on the National Ambulatory Medical Care Survey. Trained assistants directly observed every other FP-patient encounter and recorded every patient concern, issue addressed by the physician (including care barriers related to health care systems and social determinants), and treatment ordered in clinics affiliated with 10 residencies of the Residency Research Network of Texas. A visit was deemed to include physician work that was not explicitly reportable if the number or nature of issues addressed exceeded the definitions or examples for 99205/99215 or 99214 + 99354 or a preventive service code, included the physician addressing health care system or social determinant issues, or included the care of a family member. In 982 physician-patient encounters, patients raised 517 different reasons for visit (total, 5278; mean, 5.4 per visit; range, 1 to 16) and the FPs addressed 509 different issues (total issues, 3587; mean, 3.7 per visit; range, 1 to 10). FPs managed 425 different medications, 18 supplements, and 11 devices. A mean of 3.9 chronic medications were continued per visit (range, 0 to 21) and 4.6 total medications were managed (range, 0 to 22). In 592 (60.3%) of the visits the FPs did work that was not explicitly reportable with available CPT codes: 582 (59.3%) addressed more numerous issues than explicitly reportable, 64 (6.5%) addressed system barriers, and 13 (1.3%) addressed concerns for other family members. FPs perform cognitive work that is not explicitly reportable in a majority of their patient encounters, either because the number of issues addressed exceeds the CPT example number of diagnoses per code or because of the type of problems addressed, which has implications for the care of complex multi-morbid patients and the growth of the primary care workforce. To address these limitations, either the CPT codes and their associated rules should be updated to reflect the realities of family physicians' practices or new billing and coding approaches should be developed. © Copyright 2017 by the American Board of Family Medicine.

  9. Ramsey Interference in One-Dimensional Systems: The Full Distribution Function of Fringe Contrast as a Probe of Many-Body Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Takuya; Pielawa, Susanne; Demler, Eugene

    2010-06-25

    We theoretically analyze Ramsey interference experiments in one-dimensional quasicondensates and obtain explicit expressions for the time evolution of full distribution functions of fringe contrast. We show that distribution functions contain unique signatures of the many-body mechanism of decoherence. We argue that Ramsey interference experiments provide a powerful tool for analyzing strongly correlated nature of 1D interacting systems.

  10. Dynamic modelling of solids in a full-scale activated sludge plant preceded by CEPT as a preliminary step for micropollutant removal modelling.

    PubMed

    Baalbaki, Zeina; Torfs, Elena; Maere, Thomas; Yargeau, Viviane; Vanrolleghem, Peter A

    2017-04-01

    The presence of micropollutants in the environment has triggered research on quantifying and predicting their fate in wastewater treatment plants (WWTPs). Since the removal of micropollutants is highly related to conventional pollutant removal and affected by hydraulics, aeration, biomass composition and solids concentration, the fate of these conventional pollutants and characteristics must be well predicted before tackling models to predict the fate of micropollutants. In light of this, the current paper presents the dynamic modelling of conventional pollutants undergoing activated sludge treatment using a limited set of additional daily composite data besides the routine data collected at a WWTP over one year. Results showed that, as a basis for modelling the removal of micropollutants, the Bürger-Diehl settler model captured the actual effluent total suspended solids (TSS) concentrations more effectively than the Takács model by explicitly modelling the overflow boundary. Results also demonstrated that particular attention must be given to characterizing incoming TSS to obtain a representative solids balance in the presence of a chemically enhanced primary treatment, which is key to predicting the fate of micropollutants.

  11. Structure and dynamics of human vimentin intermediate filament dimer and tetramer in explicit and implicit solvent models.

    PubMed

    Qin, Zhao; Buehler, Markus J

    2011-01-01

    Intermediate filaments, in addition to microtubules and microfilaments, are one of the three major components of the cytoskeleton in eukaryotic cells, and play an important role in mechanotransduction as well as in providing mechanical stability to cells at large stretch. The molecular structures, mechanical and dynamical properties of the intermediate filament basic building blocks, the dimer and the tetramer, however, have remained elusive due to persistent experimental challenges owing to the large size and fibrillar geometry of this protein. We have recently reported an atomistic-level model of the human vimentin dimer and tetramer, obtained through a bottom-up approach based on structural optimization via molecular simulation based on an implicit solvent model (Qin et al. in PLoS ONE 2009 4(10):e7294, 9). Here we present extensive simulations and structural analyses of the model based on ultra large-scale atomistic-level simulations in an explicit solvent model, with system sizes exceeding 500,000 atoms and simulations carried out at 20 ns time-scales. We report a detailed comparison of the structural and dynamical behavior of this large biomolecular model with implicit and explicit solvent models. Our simulations confirm the stability of the molecular model and provide insight into the dynamical properties of the dimer and tetramer. Specifically, our simulations reveal a heterogeneous distribution of the bending stiffness along the molecular axis with the formation of rather soft and highly flexible hinge-like regions defined by non-alpha-helical linker domains. We report a comparison of Ramachandran maps and the solvent accessible surface area between implicit and explicit solvent models, and compute the persistence length of the dimer and tetramer structure of vimentin intermediate filaments for various subdomains of the protein. Our simulations provide detailed insight into the dynamical properties of the vimentin dimer and tetramer intermediate filament building blocks, which may guide the development of novel coarse-grained models of intermediate filaments, and could also help in understanding assembly mechanisms.

  12. Rapid Response Tools and Datasets for Post-fire Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Miller, Mary Ellen; MacDonald, Lee H.; Billmire, Michael; Elliot, William J.; Robichaud, Pete R.

    2016-04-01

    Rapid response is critical following natural disasters. Flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies after moderate and high severity wildfires. The problem is that mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fires, runoff, and erosion risks also are highly heterogeneous in space, so there is an urgent need for a rapid, spatially-explicit assessment. Past post-fire modeling efforts have usually relied on lumped, conceptual models because of the lack of readily available, spatially-explicit data layers on the key controls of topography, vegetation type, climate, and soil characteristics. The purpose of this project is to develop a set of spatially-explicit data layers for use in process-based models such as WEPP, and to make these data layers freely available. The resulting interactive online modeling database (http://geodjango.mtri.org/geowepp/) is now operational and publically available for 17 western states in the USA. After a fire, users only need to upload a soil burn severity map, and this is combined with the pre-existing data layers to generate the model inputs needed for spatially explicit models such as GeoWEPP (Renschler, 2003). The development of this online database has allowed us to predict post-fire erosion and various remediation scenarios in just 1-7 days for six fires ranging in size from 4-540 km2. These initial successes have stimulated efforts to further improve the spatial extent and amount of data, and add functionality to support the USGS debris flow model, batch processing for Disturbed WEPP (Elliot et al., 2004) and ERMiT (Robichaud et al., 2007), and to support erosion modeling for other land uses, such as agriculture or mining. The design and techniques used to create the database and the modeling interface are readily repeatable for any area or country that has the necessary topography, climate, soil, and land cover datasets.

  13. Asymptotic approximations to posterior distributions via conditional moment equations

    USGS Publications Warehouse

    Yee, J.L.; Johnson, W.O.; Samaniego, F.J.

    2002-01-01

    We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
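
    A toy illustration of the fixed-point idea: when the conditional means are linear in the other block's mean, the joint asymptotic means solve a pair of simultaneous equations that can be found by direct iteration. The bivariate example below is ours and is not one of the paper's two illustrations.

```python
# Toy illustration of solving conditional-mean fixed-point equations.
# For a bivariate normal, E[x1 | x2] = a1 + b1*x2 and E[x2 | x1] = a2 + b2*x1;
# the joint means (m1, m2) satisfy m1 = a1 + b1*m2 and m2 = a2 + b2*m1.
# This example is ours and stands in for the paper's data-augmentation settings.
def conditional_mean_fixed_point(a1, b1, a2, b2, tol=1e-12, max_iter=10_000):
    m1, m2 = 0.0, 0.0
    for _ in range(max_iter):
        m1_new = a1 + b1 * m2
        m2_new = a2 + b2 * m1_new
        if abs(m1_new - m1) + abs(m2_new - m2) < tol:
            return m1_new, m2_new
        m1, m2 = m1_new, m2_new
    raise RuntimeError("fixed-point iteration did not converge (|b1*b2| must be < 1)")

# Contractive example: converges to the analytic solution m1 = (a1 + b1*a2)/(1 - b1*b2).
m1, m2 = conditional_mean_fixed_point(a1=1.0, b1=0.4, a2=-0.5, b2=0.3)
print(m1, m2)
```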

  14. Random close packing in protein cores

    NASA Astrophysics Data System (ADS)

    Ohern, Corey

    Shortly after the determination of the first protein x-ray crystal structures, researchers analyzed their cores and reported packing fractions ϕ ~ 0.75, a value similar to that of close-packed equal-sized spheres. A limitation of these analyses was the use of 'extended atom' models, rather than the more physically accurate 'explicit hydrogen' model. The validity of using the explicit hydrogen model is proved by its ability to predict the side chain dihedral angle distributions observed in proteins. We employ the explicit hydrogen model to calculate the packing fraction of the cores of over 200 high resolution protein structures. We find that these protein cores have ϕ ~ 0.55, which is comparable to random close packing of non-spherical particles. This result provides a deeper understanding of the physical basis of protein structure that will enable predictions of the effects of amino acid mutations and design of new functional proteins. We gratefully acknowledge the support of the Raymond and Beverly Sackler Institute for Biological, Physical, and Engineering Sciences, National Library of Medicine training grant T15LM00705628 (J.C.G.), and National Science Foundation DMR-1307712 (L.R.).

  15. Using spatially explicit surveillance models to provide confidence in the eradication of an invasive ant

    PubMed Central

    Ward, Darren F.; Anderson, Dean P.; Barron, Mandy C.

    2016-01-01

    Effective detection plays an important role in the surveillance and management of invasive species. Invasive ants are very difficult to eradicate and are prone to imperfect detection because of their small size and cryptic nature. Here we demonstrate the use of spatially explicit surveillance models to estimate the probability that Argentine ants (Linepithema humile) have been eradicated from an offshore island site, given their absence across four surveys and three surveillance methods, conducted since ant control was applied. The probability of eradication increased sharply as each survey was conducted. Using all surveys and surveillance methods combined, the overall median probability of eradication of Argentine ants was 0.96. There was a high level of confidence in this result, with a high Credible Interval Value of 0.87. Our results demonstrate the value of spatially explicit surveillance models for the likelihood of eradication of Argentine ants. We argue that such models are vital to give confidence in eradication programs, especially from highly valued conservation areas such as offshore islands. PMID:27721491
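
    In its simplest non-spatial form, the surveillance calculation reduces to a Bayesian update of the probability of eradication after each survey with no detections; the sketch below shows that update for assumed per-survey detection sensitivities. The spatially explicit structure and the actual parameter values of the study are not reproduced.

```python
# Simplest (non-spatial) form of the surveillance calculation: Bayesian update
# of P(eradicated) after each survey that finds no ants, given an assumed
# probability of detecting the ants with that survey method if they were
# present (the "surveillance sensitivity"). Sensitivities and prior below are
# illustrative; the study's models are spatially explicit versions of this idea.
def update_eradication_probability(prior, survey_sensitivities):
    p = prior
    for se in survey_sensitivities:
        # P(erad | no detection) = P(erad) / (P(erad) + (1 - P(erad)) * (1 - Se))
        p = p / (p + (1.0 - p) * (1.0 - se))
    return p

prior = 0.5                                  # agnostic prior after control was applied
sensitivities = [0.6, 0.7, 0.5, 0.65]        # one assumed value per survey
print(update_eradication_probability(prior, sensitivities))   # ~0.98
```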

  16. Systems Modeling at Multiple Levels of Regulation: Linking Systems and Genetic Networks to Spatially Explicit Plant Populations

    PubMed Central

    Kitchen, James L.; Allaby, Robin G.

    2013-01-01

    Selection and adaptation of individuals to their underlying environments are highly dynamical processes, encompassing interactions between the individual and its seasonally changing environment, synergistic or antagonistic interactions between individuals and interactions amongst the regulatory genes within the individual. Plants are useful organisms to study within systems modeling because their sedentary nature simplifies interactions between individuals and the environment, and many important plant processes such as germination or flowering are dependent on annual cycles which can be disrupted by climate behavior. Sedentism makes plants relevant candidates for spatially explicit modeling that is tied in with dynamical environments. We propose that in order to fully understand the complexities behind plant adaptation, a system that couples aspects from systems biology with population and landscape genetics is required. A suitable system could be represented by spatially explicit individual-based models where the virtual individuals are located within time-variable heterogeneous environments and contain mutable regulatory gene networks. These networks could directly interact with the environment, and should provide a useful approach to studying plant adaptation. PMID:27137364

  17. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440

  18. A Single-System Model Predicts Recognition Memory and Repetition Priming in Amnesia

    PubMed Central

    Kessels, Roy P.C.; Wester, Arie J.; Shanks, David R.

    2014-01-01

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the overall magnitude of the priming effect (the identification advantage for old vs new items) was greater than that for items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than a selective explicit memory system deficit. PMID:25122896

  19. Molecular modelling of protein-protein/protein-solvent interactions

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler

    The inner workings of individual cells are based on intricate networks of protein-protein interactions. However, each of these individual protein interactions requires a complex physical interaction between proteins and their aqueous environment at the atomic scale. In this thesis, molecular dynamics simulations are used in three theoretical studies to gain insight at the atomic scale about protein hydration, protein structure and tubulin-tubulin (protein-protein) interactions, as found in microtubules. Also presented, in a fourth project, is a molecular model of solvation coupled with the Amber molecular modelling package, to facilitate further studies without the need for explicitly modelled water. Basic properties of a minimally solvated protein were calculated through an extended study of myoglobin hydration with explicit solvent, directly investigating water and protein polarization. Results indicate a close correlation between polarization of both water and protein and the onset of protein function. The methodology of explicit solvent molecular dynamics was further used to study tubulin and microtubules. Extensive conformational sampling of the carboxy-terminal tails of β-tubulin was performed via replica exchange molecular dynamics, allowing the characterisation of the flexibility, secondary structure and binding domains of the C-terminal tails through statistical analysis methods. Mechanical properties of tubulin and microtubules were calculated with adaptive biasing force molecular dynamics. The function of the M-loop in microtubule stability was demonstrated in these simulations. The flexibility of this loop allowed constant contacts between the protofilaments to be maintained during simulations while the smooth deformation provided a spring-like restoring force. Additionally, the free energy profile between the straight and bent tubulin configurations was calculated to test the proposed conformational change in tubulin, thought to cause microtubule destabilization. No conformational change was observed but a nucleotide dependent 'softening' of the interaction was found instead, suggesting that an entropic force in a microtubule configuration could be the mechanism of microtubule collapse. Finally, to overcome much of the computational costs associated with explicit solvent calculations, a new combination of molecular dynamics with the 3D-reference interaction site model (3D-RISM) of solvation was integrated into the Amber molecular dynamics package. Our implementation of 3D-RISM shows excellent agreement with explicit solvent free energy calculations. Several optimisation techniques, including a new multiple time step method, provide a nearly 100-fold performance increase, giving similar computational performance to explicit solvent.

  20. Spectral wave dissipation by submerged aquatic vegetation in a back-barrier estuary

    USGS Publications Warehouse

    Nowacki, Daniel J.; Beudin, Alexis; Ganju, Neil K.

    2017-01-01

    Submerged aquatic vegetation is generally thought to attenuate waves, but this interaction remains poorly characterized in shallow-water field settings with locally generated wind waves. Better quantification of wave–vegetation interaction can provide insight to morphodynamic changes in a variety of environments and also is relevant to the planning of nature-based coastal protection measures. Toward that end, an instrumented transect was deployed across a Zostera marina (common eelgrass) meadow in Chincoteague Bay, Maryland/Virginia, U.S.A., to characterize wind-wave transformation within the vegetated region. Field observations revealed wave-height reduction, wave-period transformation, and wave-energy dissipation with distance into the meadow, and the data informed and calibrated a spectral wave model of the study area. The field observations and model results agreed well when local wind forcing and vegetation-induced drag were included in the model, either explicitly as rigid vegetation elements or implicitly as large bed-roughness values. Mean modeled parameters were similar for both the explicit and implicit approaches, but the spectral performance of the explicit approach was poor compared to the implicit approach. The explicit approach over-predicted low-frequency energy within the meadow because the vegetation scheme determines dissipation using mean wavenumber and frequency, in contrast to the bed-friction formulations, which dissipate energy in a variable fashion across frequency bands. Regardless of the vegetation scheme used, vegetation was the most important component of wave dissipation within much of the study area. These results help to quantify the influence of submerged aquatic vegetation on wave dynamics in future model parameterizations, field efforts, and coastal-protection measures.

  1. Attention and perceptual implicit memory: effects of selective versus divided attention and number of visual objects.

    PubMed

    Mulligan, Neil W

    2002-08-01

    Extant research presents conflicting results on whether manipulations of attention during encoding affect perceptual priming. Two suggested mediating factors are type of manipulation (selective vs divided) and whether attention is manipulated across multiple objects or within a single object. Words printed in different colors (Experiment 1) or flanked by colored blocks (Experiment 2) were presented at encoding. In the full-attention condition, participants always read the word; in the unattended condition, they always identified the color; and in the divided-attention conditions, participants attended to both word identity and color. Perceptual priming was assessed with perceptual identification and explicit memory with recognition. Relative to the full-attention condition, attending to color always reduced priming. Dividing attention between word identity and color, however, only disrupted priming when these attributes were presented as multiple objects (Experiment 2) but not when they were dimensions of a common object (Experiment 1). On the explicit test, manipulations of attention always affected recognition accuracy.

  2. Do You See What I See? Exploring the Consequences of Luminosity Limits in Black Hole-Galaxy Evolution Studies

    NASA Astrophysics Data System (ADS)

    Jones, Mackenzie L.; Hickox, Ryan C.; Mutch, Simon J.; Croton, Darren J.; Ptak, Andrew F.; DiPompeo, Michael A.

    2017-07-01

    In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yield samples with different AGN host galaxy properties.
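
    The two selection thresholds compared above can be made concrete with a mock population: a luminosity cut keeps sources above a fixed luminosity, while an Eddington-ratio cut keeps sources above a fixed fraction of L_Edd ≈ 1.26 × 10^38 (M_BH/M_sun) erg/s, which scales with black-hole mass. The mass and Eddington-ratio distributions in the sketch below are illustrative, not those of the model in the paper.

```python
# Contrast an AGN luminosity cut with an Eddington-ratio cut on a mock black
# hole population. L_Edd ~= 1.26e38 * (M_BH / M_sun) erg/s; a fixed-luminosity
# threshold preferentially drops low-mass black holes, whereas a fixed
# Eddington-ratio threshold keeps growing black holes of all masses.
# The mass and Eddington-ratio distributions below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
log_mbh = rng.uniform(6.0, 9.0, n)                 # log10(M_BH / M_sun)
log_lambda = rng.normal(-2.0, 1.0, n)              # log10(Eddington ratio)

L_edd = 1.26e38 * 10.0 ** log_mbh                  # erg/s
L_bol = 10.0 ** log_lambda * L_edd                 # bolometric luminosity

lum_selected = L_bol > 1e44                        # fixed-luminosity cut
edd_selected = 10.0 ** log_lambda > 0.01           # fixed Eddington-ratio cut

print("luminosity cut:      %5.1f%% kept, median log M_BH = %.2f"
      % (100 * lum_selected.mean(), np.median(log_mbh[lum_selected])))
print("Eddington-ratio cut: %5.1f%% kept, median log M_BH = %.2f"
      % (100 * edd_selected.mean(), np.median(log_mbh[edd_selected])))
```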

  3. Computational statistics using the Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-09-01

    This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of the Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical details and download information are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
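
    To make the idea of tempered MCMC for multimodal posteriors concrete, here is a minimal parallel-tempering sketch on a toy bimodal density. It is not BIE code; the temperatures, step sizes, and target are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_post(x):
        """Toy bimodal posterior: equal mixture of N(-3, 1) and N(+3, 1)."""
        return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

    betas = np.array([1.0, 0.3])           # inverse temperatures (cold, hot)
    x = np.zeros(len(betas))               # one walker per temperature
    samples = []
    for step in range(20_000):
        for i, b in enumerate(betas):      # Metropolis update within each chain
            prop = x[i] + rng.normal(0.0, 1.0)
            if np.log(rng.random()) < b * (log_post(prop) - log_post(x[i])):
                x[i] = prop
        # propose swapping states between the cold and hot chains
        if np.log(rng.random()) < (betas[0] - betas[1]) * (log_post(x[1]) - log_post(x[0])):
            x[0], x[1] = x[1], x[0]
        samples.append(x[0])

    print(np.mean(np.array(samples) > 0.0))  # both modes visited (close to 0.5)
    ```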

  4. Explicit processing demands reveal language modality-specific organization of working memory.

    PubMed

    Rudner, Mary; Rönnberg, Jerker

    2008-01-01

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval is not sensitive to temporal organization in DS. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.

  5. Classification of NLO operators for composite Higgs models

    NASA Astrophysics Data System (ADS)

    Alanne, Tommi; Bizot, Nicolas; Cacciapaglia, Giacomo; Sannino, Francesco

    2018-04-01

    We provide a general classification of template operators, up to next-to-leading order, that appear in chiral perturbation theories based on the two flavor patterns of spontaneous symmetry breaking SU(N_F)/Sp(N_F) and SU(N_F)/SO(N_F). All possible explicit-breaking sources parametrized by spurions transforming in the fundamental and in the two-index representations of the flavor symmetry are included. While our general framework can be applied to any model of strong dynamics, we specialize to composite-Higgs models, where the main explicit breaking sources are a current mass, the gauging of flavor symmetries, and the Yukawa couplings (for the top). For the top, we consider both bilinear couplings and linear ones à la partial compositeness. Our templates provide a basis for lattice calculations in specific models. As a special example, we consider the SU(4)/Sp(4) ≅ SO(6)/SO(5) pattern, which corresponds to the minimal fundamental composite-Higgs model. We further revisit issues related to the misalignment of the vacuum. In particular, we shed light on the physical properties of the singlet η, showing that it cannot develop a vacuum expectation value without explicit CP violation in the underlying theory.

  6. Random close packing in protein cores

    NASA Astrophysics Data System (ADS)

    Gaines, Jennifer C.; Smith, W. Wendell; Regan, Lynne; O'Hern, Corey S.

    2016-03-01

    Shortly after the determination of the first protein x-ray crystal structures, researchers analyzed their cores and reported packing fractions ϕ ≈ 0.75, a value that is similar to close packing of equal-sized spheres. A limitation of these analyses was the use of extended atom models, rather than the more physically accurate explicit hydrogen model. The validity of the explicit hydrogen model was proved in our previous studies by its ability to predict the side chain dihedral angle distributions observed in proteins. In contrast, the extended atom model is not able to recapitulate the side chain dihedral angle distributions, and gives rise to large atomic clashes at side chain dihedral angle combinations that are highly probable in protein crystal structures. Here, we employ the explicit hydrogen model to calculate the packing fraction of the cores of over 200 high-resolution protein structures. We find that these protein cores have ϕ ≈ 0.56, which is similar to results obtained from simulations of random packings of individual amino acids. This result provides a deeper understanding of the physical basis of protein structure that will enable predictions of the effects of amino acid mutations to protein cores and interfaces of known structure.
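
    A packing fraction like the ϕ ≈ 0.56 quoted above can be estimated by Monte Carlo sampling of the volume covered by a union of spheres. The sketch below does this for a toy set of spheres with random centers and a single radius; it uses no actual protein coordinates or atomic radii.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def packing_fraction(centers, radii, box, n_samples=50_000):
        """Monte Carlo estimate of the fraction of a cubic box [0, box]^3
        covered by a union of spheres (toy geometry, not protein atoms)."""
        pts = rng.uniform(0.0, box, size=(n_samples, 3))
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        covered = (d2 <= radii[None, :] ** 2).any(axis=1)
        return covered.mean()

    centers = rng.uniform(0.0, 10.0, size=(30, 3))   # hypothetical "atoms"
    radii = np.full(30, 1.5)
    print(packing_fraction(centers, radii, box=10.0))
    ```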

  7. A Geographically Explicit Genetic Model of Worldwide Human-Settlement History

    PubMed Central

    Liu, Hua; Prugnolle, Franck; Manica, Andrea; Balloux, François

    2006-01-01

    Currently available genetic and archaeological evidence is generally interpreted as supportive of a recent single origin of modern humans in East Africa. However, this is where the near consensus on human settlement history ends, and considerable uncertainty clouds any more detailed aspect of human colonization history. Here, we present a dynamic genetic model of human settlement history coupled with explicit geographical distances from East Africa, the likely origin of modern humans. We search for the best-supported parameter space by fitting our analytical prediction to genetic data that are based on 52 human populations analyzed at 783 autosomal microsatellite markers. This framework allows us to jointly estimate the key parameters of the expansion of modern humans. Our best estimates suggest an initial expansion of modern humans ∼56,000 years ago from a small founding population of ∼1,000 effective individuals. Our model further points to high growth rates in newly colonized habitats. The general fit of the model with the data is excellent. This suggests that coupling analytical genetic models with explicit demography and geography provides a powerful tool for making inferences on human-settlement history. PMID:16826514

  8. Random close packing in protein cores.

    PubMed

    Gaines, Jennifer C; Smith, W Wendell; Regan, Lynne; O'Hern, Corey S

    2016-03-01

    Shortly after the determination of the first protein x-ray crystal structures, researchers analyzed their cores and reported packing fractions ϕ ≈ 0.75, a value that is similar to close packing of equal-sized spheres. A limitation of these analyses was the use of extended atom models, rather than the more physically accurate explicit hydrogen model. The validity of the explicit hydrogen model was proved in our previous studies by its ability to predict the side chain dihedral angle distributions observed in proteins. In contrast, the extended atom model is not able to recapitulate the side chain dihedral angle distributions, and gives rise to large atomic clashes at side chain dihedral angle combinations that are highly probable in protein crystal structures. Here, we employ the explicit hydrogen model to calculate the packing fraction of the cores of over 200 high-resolution protein structures. We find that these protein cores have ϕ ≈ 0.56, which is similar to results obtained from simulations of random packings of individual amino acids. This result provides a deeper understanding of the physical basis of protein structure that will enable predictions of the effects of amino acid mutations to protein cores and interfaces of known structure.

  9. Constant pH molecular dynamics of proteins in explicit solvent with proton tautomerism.

    PubMed

    Goh, Garrett B; Hulbert, Benjamin S; Zhou, Huiqing; Brooks, Charles L

    2014-07-01

    pH is a ubiquitous regulator of biological activity, including protein-folding, protein-protein interactions, and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMD(MSλD)). In the CPHMD(MSλD) framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMD(MSλD) simulations model realistic pH-dependent properties of proteins such as the Hen-Egg White Lysozyme (HEWL), binding domain of 2-oxoglutarate dehydrogenase (BBL) and N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMD(MSλD) framework for nucleic acids, accurate modeling of the pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. © 2013 Wiley Periodicals, Inc.
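
    The pKa comparisons mentioned above typically come from protonation-state populations sampled at several pH values. The sketch below shows the standard Henderson-Hasselbalch back-calculation from such populations; the populations listed are hypothetical, and real CPHMD analyses usually fit a Hill-type titration curve instead.

    ```python
    import numpy as np

    def pka_from_populations(ph_values, frac_protonated):
        """Estimate pKa at each simulation pH via the Henderson-Hasselbalch
        relation, pKa = pH + log10(f_prot / (1 - f_prot))."""
        ph = np.asarray(ph_values, dtype=float)
        f = np.asarray(frac_protonated, dtype=float)
        return ph + np.log10(f / (1.0 - f))

    # hypothetical protonated-state populations from titration windows
    print(pka_from_populations([3.0, 4.0, 5.0], [0.91, 0.52, 0.09]))
    ```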

  10. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H− source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H− ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  11. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  12. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion The resulting criteria showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction and provide simple algorithms for use in clinical practice. PMID:16512893
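
    A linear priority score of the kind described above can be expressed as a weighted sum of the most influential variables. The sketch below is purely illustrative: the weights, the 0-1 normalization, and the variable names are placeholders, not the panel's published scoring system.

    ```python
    def priority_score(preop_va, visual_function, anticipated_gain):
        """Toy additive priority score on a 0-100 scale.
        Inputs are normalized to 0-1 (higher = better acuity / function /
        larger expected postoperative benefit); weights are placeholders."""
        w_va, w_func, w_gain = 0.40, 0.35, 0.25
        score = (w_va * (1.0 - preop_va)            # worse preoperative acuity -> higher priority
                 + w_func * (1.0 - visual_function)  # worse visual function -> higher priority
                 + w_gain * anticipated_gain)        # larger expected benefit -> higher priority
        return round(100.0 * score, 1)

    print(priority_score(preop_va=0.2, visual_function=0.3, anticipated_gain=0.8))
    ```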

  13. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  14. Assessment of the GECKO-A modeling tool using chamber observations for C12 alkanes

    NASA Astrophysics Data System (ADS)

    Aumont, B.; La, S.; Ouzebidour, F.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J. M.; Hodzic, A.; Madronich, S.; Yee, L. D.; Loza, C. L.; Craven, J. S.; Zhang, X.; Seinfeld, J.

    2013-12-01

    Secondary Organic Aerosol (SOA) production and ageing is the result of atmospheric oxidation processes leading to the progressive formation of organic species with higher oxidation state and lower volatility. Explicit chemical mechanisms reflect our understanding of these multigenerational oxidation steps. Major uncertainties remain concerning the processes leading to SOA formation, and the development, assessment, and improvement of such explicit schemes is therefore a key issue. The development of explicit mechanisms to describe the oxidation of long-chain hydrocarbons is, however, a challenge. Indeed, explicit oxidation schemes involve a large number of reactions and secondary organic species, far exceeding the size of chemical schemes that can be written manually. The chemical mechanism generator GECKO-A (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is a computer program designed to overcome this difficulty. GECKO-A generates gas phase oxidation schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In this study, we examine the ability of the generated schemes to explain SOA formation observed in the Caltech Environmental Chambers from various C12 alkane isomers and under high NOx and low NOx conditions. First results show that the model overestimates both the SOA yields and the O/C ratios. Various sensitivity tests are performed to explore processes that might be responsible for these disagreements.

  15. Elucidating the interaction between light competition and herbivore feeding patterns using functional–structural plant modelling

    PubMed Central

    de Vries, Jorad; Poelman, Erik H; Anten, Niels; Evers, Jochem B

    2018-01-01

    Abstract Background and Aims Plants usually compete with neighbouring plants for resources such as light as well as defend themselves against herbivorous insects. This requires investment of limiting resources, resulting in optimal resource distribution patterns and trade-offs between growth- and defence-related traits. A plant’s competitive success is determined by the spatial distribution of its resources in the canopy. The spatial distribution of herbivory in the canopy in turn differs between herbivore species as the level of herbivore specialization determines their response to the distribution of resources and defences in the canopy. Here, we investigated to what extent competition for light affects plant susceptibility to herbivores with different feeding preferences. Methods To quantify interactions between herbivory and competition, we developed and evaluated a 3-D spatially explicit functional–structural plant model for Brassica nigra that mechanistically simulates competition in a dynamic light environment, and also explicitly models leaf area removal by herbivores with different feeding preferences. With this novel approach, we can quantitatively explore the extent to which herbivore feeding location and light competition interact in their effect on plant performance. Key Results Our results indicate that there is indeed a strong interaction between levels of plant–plant competition and herbivore feeding preference. When plants did not compete, herbivory had relatively small effects irrespective of feeding preference. Conversely, when plants competed, herbivores with a preference for young leaves had a strong negative effect on the competitiveness and subsequent performance of the plant, whereas herbivores with a preference for old leaves did not. Conclusions Our study predicts how plant susceptibility to herbivory depends on the composition of the herbivore community and the level of plant competition, and highlights the importance of considering the full range of dynamics in plant–plant–herbivore interactions. PMID:29373660

  16. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Treesearch

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...

  17. The discrepancy between implicit and explicit attitudes in predicting disinhibited eating.

    PubMed

    Goldstein, Stephanie P; Forman, Evan M; Meiran, Nachshon; Herbert, James D; Juarascio, Adrienne S; Butryn, Meghan L

    2014-01-01

    Disinhibited eating (i.e., the tendency to overeat, despite intentions not to do so, in the presence of palatable foods or other cues such as emotional stress) is strongly linked with obesity and appears to be associated with both implicit (automatic) and explicit (deliberative) food attitudes. Prior research suggests that a large discrepancy between implicit and explicit food attitudes may contribute to greater levels of disinhibited eating; however, this theory has not been directly tested. The current study examined whether the discrepancy between implicit and explicit attitudes towards chocolate could predict both lab-based and self-reported disinhibited eating of chocolate. Results revealed that, whereas neither implicit nor explicit attitudes alone predicted disinhibited eating, absolute attitude discrepancy positively predicted chocolate consumption. Impulsivity moderated this effect, such that discrepancy was less predictive of disinhibited eating for those who exhibited lower levels of impulsivity. The results align with the meta-cognitive model to indicate that attitude discrepancy may be involved in overeating.

  18. How does subsurface retain and release stored water? An explicit estimation of young water fraction and mean transit time

    NASA Astrophysics Data System (ADS)

    Ameli, Ali; McDonnell, Jeffrey; Laudon, Hjalmar; Bishop, Kevin

    2017-04-01

    The stable isotopes of water have served science well as hydrological tracers, demonstrating that there is often a large component of "old" water in stream runoff. It has been more problematic to define the full transit time distribution of that stream water. Non-linear mixing of previous precipitation signals that are stored for extended periods and travel slowly through the subsurface before reaching the stream results in a large range of possible transit times. It is difficult to find tracers that can represent this, especially if all one has are data on the precipitation input and the stream runoff. In this paper, we explicitly characterize this "old water" displacement using a novel quasi-steady physically based flow and transport model in the well-studied S-Transect hillslope in Sweden, where the concentration of hydrological tracers in the subsurface and stream has been measured. We explore how the subsurface conductivity profile impacts the characteristics of old-water displacement, and then test these scenarios against the observed dynamics of conservative hydrological tracers in both the stream and the subsurface. This work explores the efficiency of convolution-based approaches in the estimation of the stream "young water" fraction and time-variant mean transit times. We also suggest how celerity and velocity differ with landscape structure.
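
    For orientation, the convolution-based estimates mentioned above are often reduced to closed forms when an exponential transit-time distribution is assumed: the seasonal tracer cycle is damped by a factor that fixes the mean transit time, and the young water fraction follows from the same distribution. The sketch below evaluates those standard relations with a hypothetical amplitude ratio; it is not the physically based model used in the study.

    ```python
    import numpy as np

    def mean_transit_time(amp_ratio, period_days=365.25):
        """Mean transit time tau (days) for an assumed exponential transit-time
        distribution, from seasonal-cycle damping:
        A_stream / A_precip = 1 / sqrt(1 + (2*pi*tau/T)**2)."""
        omega = 2.0 * np.pi / period_days
        return np.sqrt(1.0 / amp_ratio ** 2 - 1.0) / omega

    def young_water_fraction(tau, threshold_days=69.0):
        """Fraction of streamflow younger than the threshold age for the same
        exponential distribution: F_yw = 1 - exp(-t_threshold / tau)."""
        return 1.0 - np.exp(-threshold_days / tau)

    tau = mean_transit_time(amp_ratio=0.3)   # hypothetical amplitude damping
    print(round(tau, 1), round(young_water_fraction(tau), 2))
    ```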

  19. Effects of Divided Attention at Retrieval on Conceptual Implicit Memory

    PubMed Central

    Prull, Matthew W.; Lawless, Courtney; Marshall, Helen M.; Sherman, Annabella T. K.

    2016-01-01

    This study investigated whether conceptual implicit memory is sensitive to process-specific interference at the time of retrieval. Participants performed the implicit memory test of category exemplar generation (CEG; Experiments 1 and 3), or the matched explicit memory test of category-cued recall (Experiment 2), both of which are conceptually driven memory tasks, under one of two divided attention (DA) conditions in which participants simultaneously performed a distracting task. The distracting task was either syllable judgments (dissimilar processes), or semantic judgments (similar processes) on unrelated words. Compared to full attention (FA) in which no distracting task was performed, DA had no effect on CEG priming overall, but reduced category-cued recall similarly regardless of distractor task. Analyses of distractor task performance also revealed differences between implicit and explicit memory retrieval. The evidence suggests that, whereas explicit memory retrieval requires attentional resources and is disrupted by semantic and phonological distracting tasks, conceptual implicit memory is automatic and unaffected even when distractor and memory tasks involve similar processes. PMID:26834678

  20. Effects of Divided Attention at Retrieval on Conceptual Implicit Memory.

    PubMed

    Prull, Matthew W; Lawless, Courtney; Marshall, Helen M; Sherman, Annabella T K

    2016-01-01

    This study investigated whether conceptual implicit memory is sensitive to process-specific interference at the time of retrieval. Participants performed the implicit memory test of category exemplar generation (CEG; Experiments 1 and 3), or the matched explicit memory test of category-cued recall (Experiment 2), both of which are conceptually driven memory tasks, under one of two divided attention (DA) conditions in which participants simultaneously performed a distracting task. The distracting task was either syllable judgments (dissimilar processes), or semantic judgments (similar processes) on unrelated words. Compared to full attention (FA) in which no distracting task was performed, DA had no effect on CEG priming overall, but reduced category-cued recall similarly regardless of distractor task. Analyses of distractor task performance also revealed differences between implicit and explicit memory retrieval. The evidence suggests that, whereas explicit memory retrieval requires attentional resources and is disrupted by semantic and phonological distracting tasks, conceptual implicit memory is automatic and unaffected even when distractor and memory tasks involve similar processes.

  1. Linear response coupled cluster theory with the polarizable continuum model within the singles approximation for the solvent response.

    PubMed

    Caricato, Marco

    2018-04-07

    We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.

  2. Linear response coupled cluster theory with the polarizable continuum model within the singles approximation for the solvent response

    NASA Astrophysics Data System (ADS)

    Caricato, Marco

    2018-04-01

    We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.

  3. Renormalizing a viscous fluid model for large scale structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Führer, Florian; Rigopoulos, Gerasimos, E-mail: fuhrer@thphys.uni-heidelberg.de, E-mail: gerasimos.rigopoulos@ncl.ac.uk

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry with the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added and we explicitly show at one-loop that these terms act as counter terms for vertex diagrams. The Ward Identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring less free parameters for its renormalization.

  4. The Environment Makes a Difference: The Impact of Explicit and Implicit Attitudes as Precursors in Different Food Choice Tasks

    PubMed Central

    König, Laura M.; Giese, Helge; Schupp, Harald T.; Renner, Britta

    2016-01-01

    Studies show that implicit and explicit attitudes influence food choice. However, precursors of food choice often are investigated using tasks offering a very limited number of options despite the comparably complex environment surrounding real life food choice. In the present study, we investigated how the assortment impacts the relationship between implicit and explicit attitudes and food choice (confectionery and fruit), assuming that a more complex choice architecture is more taxing on cognitive resources. Specifically, a binary and a multiple option choice task based on the same stimulus set (fake food items) were presented to ninety-seven participants. Path modeling revealed that both explicit and implicit attitudes were associated with relative food choice (confectionery vs. fruit) in both tasks. In the binary option choice task, both explicit and implicit attitudes were significant precursors of food choice, with explicit attitudes having a greater impact. Conversely, in the multiple option choice task, the additive impact of explicit and implicit attitudes was qualified by an interaction indicating that, even if explicit and implicit attitudes toward confectionery were inconsistent, more confectionery was chosen than fruit if either was positive. This compensatory ‘one is sufficient’-effect indicates that the structure of the choice environment modulates the relationship between attitudes and choice. The study highlights that environmental constraints, such as the number of choice options, are an important boundary condition that need to be included when investigating the relationship between psychological precursors and behavior. PMID:27621719

  5. The Environment Makes a Difference: The Impact of Explicit and Implicit Attitudes as Precursors in Different Food Choice Tasks.

    PubMed

    König, Laura M; Giese, Helge; Schupp, Harald T; Renner, Britta

    2016-01-01

    Studies show that implicit and explicit attitudes influence food choice. However, precursors of food choice often are investigated using tasks offering a very limited number of options despite the comparably complex environment surrounding real life food choice. In the present study, we investigated how the assortment impacts the relationship between implicit and explicit attitudes and food choice (confectionery and fruit), assuming that a more complex choice architecture is more taxing on cognitive resources. Specifically, a binary and a multiple option choice task based on the same stimulus set (fake food items) were presented to ninety-seven participants. Path modeling revealed that both explicit and implicit attitudes were associated with relative food choice (confectionery vs. fruit) in both tasks. In the binary option choice task, both explicit and implicit attitudes were significant precursors of food choice, with explicit attitudes having a greater impact. Conversely, in the multiple option choice task, the additive impact of explicit and implicit attitudes was qualified by an interaction indicating that, even if explicit and implicit attitudes toward confectionery were inconsistent, more confectionery was chosen than fruit if either was positive. This compensatory 'one is sufficient'-effect indicates that the structure of the choice environment modulates the relationship between attitudes and choice. The study highlights that environmental constraints, such as the number of choice options, are an important boundary condition that need to be included when investigating the relationship between psychological precursors and behavior.

  6. An economic theory of cigarette addiction.

    PubMed

    Suranovic, S M; Goldfarb, R S; Leonard, T C

    1999-01-01

    In this paper we present a model in which individuals act in their own best interest, to explain many behaviors associated with cigarette addiction. There are two key features of the model. First, there is an explicit representation of the withdrawal effects experienced when smokers attempt to quit smoking. Second, there is explicit recognition that the negative effects of smoking generally appear late in an individual's life. Among the things we use the model to explain are: (1) how individuals can become trapped in their decision to smoke; (2) the conditions under which cold-turkey quitting and gradual quitting may occur; and (3) a reason for the existence of quit-smoking treatments.

  7. Decision support systems in health economics.

    PubMed

    Quaglini, S; Dazzi, L; Stefanelli, M; Barosi, G; Marchetti, M

    1999-08-01

    This article describes a system addressed to different health care professionals for building, using, and sharing decision support systems for resource allocation. The system deals with selected areas, namely the choice of diagnostic tests, the therapy planning, and the instrumentation purchase. Decision support is based on decision-analytic models, incorporating an explicit knowledge representation of both the medical domain knowledge and the economic evaluation theory. Application models are built on top of meta-models, that are used as guidelines for making explicit both the cost and effectiveness components. This approach improves the transparency and soundness of the collaborative decision-making process and facilitates the result interpretation.

  8. How can the English-language scientific literature be made more accessible to non-native speakers? Journals should allow greater use of referenced direct quotations in 'component-oriented' scientific writing.

    PubMed

    Charlton, Bruce G

    2007-01-01

    In scientific writing, although clarity and precision of language are vital to effective communication, it seems undeniable that content is more important than form. Potentially valuable knowledge should not be excluded from the scientific literature merely because the researchers lack advanced language skills. Given that global scientific literature is overwhelmingly in the English-language, this presents a problem for non-native speakers. My proposal is that scientists should be permitted to construct papers using a substantial number of direct quotations from the already-published scientific literature. Quotations would need to be explicitly referenced so that the original author and publication should be given full credit for creating such a useful and valid description. At the extreme, this might result in a paper consisting mainly of a 'mosaic' of quotations from the already existing scientific literature, which are linked and extended by relatively few sentences comprising new data or ideas. This model bears some conceptual relationship to the recent trend in computing science for component-based or component-oriented software engineering - in which new programs are constructed by reusing programme components, which may be available in libraries. A new functionality is constructed by linking-together many pre-existing chunks of software. I suggest that journal editors should, in their instructions to authors, explicitly allow this 'component-oriented' method of constructing scientific articles; and carefully describe how it can be accomplished in such a way that proper referencing is enforced, and full credit is allocated to the authors of the reused linguistic components.

  9. Bessel smoothing filter for spectral-element mesh

    NASA Astrophysics Data System (ADS)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the efficiency and flexibility of the approach proposed.
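
    The key idea above, applying a smoothing filter by solving the elliptic equation of its inverse rather than by explicit convolution, can be illustrated in one dimension. The sketch below solves (I - L^2 d^2/dx^2) m_s = m with a conjugate-gradient solver on a regular grid; it is a schematic finite-difference analogue, not the spectral-element implementation described in the paper, and the coherent length L is a placeholder.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import cg

    def elliptic_smooth(m, dx, L):
        """Smooth a 1-D model by solving (I - L^2 d^2/dx^2) m_s = m with CG,
        i.e. by applying the inverse-filter equation instead of an explicit
        convolution (finite-difference illustration)."""
        n = m.size
        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx ** 2
        A = identity(n) - L ** 2 * lap
        m_s, info = cg(A, m)
        assert info == 0, "CG did not converge"
        return m_s

    x = np.linspace(0.0, 1.0, 201)
    rng = np.random.default_rng(3)
    noisy = np.sin(2.0 * np.pi * x) + 0.3 * rng.normal(size=x.size)
    smooth = elliptic_smooth(noisy, dx=x[1] - x[0], L=0.05)
    print(noisy.std(), smooth.std())   # the smoothed model has reduced variance
    ```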

  10. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  11. Exploring global carbon turnover and radiocarbon cycling in terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Graven, H. D.; Warren, H.

    2017-12-01

    The uptake of carbon into terrestrial ecosystems through net primary productivity (NPP) and the turnover of that carbon through various pathways are the fundamental drivers of changing carbon stocks on land, in addition to human-induced and natural disturbances. Terrestrial biosphere models use different formulations for carbon uptake and release, resulting in a range of values in NPP of 40-70 PgC/yr and biomass turnover times of about 25-40 years for the preindustrial period in current-generation models from CMIP5. Biases in carbon uptake and turnover impact simulated carbon uptake and storage in the historical period and later in the century under changing climate and CO2 concentration, however evaluating global-scale NPP and carbon turnover is challenging. Scaling up of plot-scale measurements involves uncertainty due to the large heterogeneity across ecosystems and biomass types, some of which are not well-observed. We are developing the modelling of radiocarbon in terrestrial biosphere models, with a particular focus on decadal 14C dynamics after the nuclear weapons testing in the 1950s-60s, including the impact of carbon flux trends and variability on 14C cycling. We use an estimate of the total inventory of excess 14C in the biosphere constructed by Naegler and Levin (2009) using a 14C budget approach incorporating estimates of total 14C produced by the weapons tests and atmospheric and oceanic 14C observations. By simulating radiocarbon in simple biosphere box models using carbon fluxes from the CMIP5 models, we find that carbon turnover is too rapid in many of the simple models - the models appear to take up too much 14C and release it too quickly. Therefore many CMIP5 models may also simulate carbon turnover that is too rapid. A caveat is that the simple box models we use may not adequately represent carbon dynamics in the full-scale models. Explicit simulation of radiocarbon in terrestrial biosphere models would allow more robust evaluation of biosphere models and the investigation of climate-carbon cycle feedbacks on various timescales. Explicit simulation of radiocarbon and carbon-13 in terrestrial biosphere models of Earth System Models, as well as in ocean models, is recommended by CMIP6 and supported by CMIP6 protocols and forcing datasets.
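
    The "simple biosphere box models" used for the comparison above boil down to a single first-order exchange with the atmosphere. The sketch below integrates one such box with turnover time tau against a schematic bomb-spike forcing; the forcing numbers are illustrative only, not the Naegler and Levin inventory or CMIP5 fluxes. A shorter tau takes up and releases the excess 14C faster.

    ```python
    import numpy as np

    def box_delta14c(atm, dt, tau):
        """One-box biosphere: d(D_bio)/dt = (D_atm - D_bio) / tau, forward Euler.
        Radioactive decay is neglected on these decadal timescales."""
        d = np.empty_like(atm)
        d[0] = atm[0]
        for i in range(1, len(atm)):
            d[i] = d[i - 1] + dt * (atm[i - 1] - d[i - 1]) / tau
        return d

    years = np.arange(1940, 2021)
    # schematic bomb-spike Delta14C forcing (per mil): flat, ramp to a 1964
    # peak of ~700, then exponential decline -- illustrative shape only
    atm = np.zeros(years.size)
    ramp = (years >= 1955) & (years <= 1964)
    atm[ramp] = 700.0 * (years[ramp] - 1955) / 9.0
    atm[years > 1964] = 700.0 * np.exp(-(years[years > 1964] - 1964) / 16.0)

    for tau in (15.0, 40.0):               # fast vs slow biospheric turnover
        print(tau, round(box_delta14c(atm, 1.0, tau)[-1], 1))
    ```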

  12. Gravity discharge vessel revisited: An explicit Lambert W function solution

    NASA Astrophysics Data System (ADS)

    Digilov, Rafael M.

    2017-07-01

    Based on the generalized Poiseuille equation modified by a kinetic energy correction, an explicit solution for the time evolution of a liquid column draining under gravity through an exit capillary tube is derived in terms of the Lambert W function. In contrast to the conventional exponential behavior, as implied by the Poiseuille law, the new analytical solution gives a full account of the volumetric flow rate of a fluid through a capillary of any length and improves the precision of viscosity determination. The theoretical consideration may be of interest to students as an example of how implicit equations in the field of physics can be solved analytically using the Lambert W function.
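
    The paper's specific drainage formula is not reproduced here, but the algebraic step it relies on, inverting an implicit relation with the Lambert W function, is easy to demonstrate. The sketch below solves the generic equation h + a*ln(h) = c, whose explicit solution is h = a*W(exp(c/a)/a), using scipy.special.lambertw; the constants a and c are arbitrary.

    ```python
    import numpy as np
    from scipy.special import lambertw

    def solve_implicit(a, c):
        """Solve h + a*ln(h) = c explicitly via the Lambert W function:
        rewriting gives (h/a)*exp(h/a) = exp(c/a)/a, so h = a*W(exp(c/a)/a).
        Illustrates the inversion technique, not the paper's drainage formula."""
        return float(a * lambertw(np.exp(c / a) / a).real)

    h = solve_implicit(a=2.0, c=5.0)
    print(h, h + 2.0 * np.log(h))   # the second value recovers c = 5.0
    ```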

  13. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model was assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with about one and a half days for the original DHSVM model) without noticeable sacrifice of the accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing either to significantly improve their computational efficiency or to make kinematic wave routing computationally feasible for high-resolution modeling.
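
    For readers unfamiliar with the scheme, the sketch below shows a bare-bones full MacCormack predictor-corrector for the 1-D kinematic wave equation dh/dt + d(alpha*h**m)/dx = r. The boundary handling and parameter values are simplistic placeholders; this is not DHSVM-MacCormack code.

    ```python
    import numpy as np

    def maccormack_kinematic(h, dx, dt, nsteps, alpha=1.0, m=5.0 / 3.0, rain=0.0):
        """Full MacCormack predictor-corrector for dh/dt + d(alpha*h**m)/dx = rain.
        Illustrative 1-D sketch: crude boundaries, and dt must respect a
        Courant-type limit for stability."""
        h = h.copy()
        for _ in range(nsteps):
            q = alpha * h ** m
            # predictor: forward difference in space
            hp = np.maximum(h - dt / dx * (np.roll(q, -1) - q) + dt * rain, 0.0)
            hp[-1] = h[-1]                       # crude downstream boundary
            qp = alpha * hp ** m
            # corrector: backward difference on predicted fluxes, then average
            hc = h - dt / dx * (qp - np.roll(qp, 1)) + dt * rain
            h = np.maximum(0.5 * (hp + hc), 0.0)
            h[0] = 0.0                           # no inflow depth upstream
        return h

    h0 = np.zeros(200)
    print(maccormack_kinematic(h0, dx=1.0, dt=0.1, nsteps=1000, rain=1e-3).max())
    ```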

  14. Classical nucleation theory in the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas

    2018-04-01

    A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
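
    For reference, the classical-nucleation-theory quantities that such comparisons rely on are the standard textbook expressions for a spherical nucleus with interfacial free energy γ and bulk driving force per unit volume Δg_v (generic CNT forms, not PFC-specific results):

    ```latex
    % Standard CNT: critical radius, barrier height, and nucleation rate
    r^{*} = \frac{2\gamma}{\Delta g_{v}}, \qquad
    \Delta G^{*} = \frac{16\pi\,\gamma^{3}}{3\,\Delta g_{v}^{2}}, \qquad
    J = J_{0}\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right)
    ```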

  15. Classical nucleation theory in the phase-field crystal model.

    PubMed

    Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas

    2018-04-01

    A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.

  16. Relative complexation energies for Li(+) ion in solution: molecular level solvation versus polarizable continuum model study.

    PubMed

    Eilmes, Andrzej; Kubisiak, Piotr

    2010-01-21

    Relative complexation energies for the lithium cation in acetonitrile and diethyl ether have been studied. Quantum-chemical calculations explicitly describing the solvation of Li(+) have been performed based on structures obtained from molecular dynamics simulations. The effect of an increasing number of solvent molecules beyond the first solvation shell has been found to consist in reduction of the differences in complexation energies for different coordination numbers. Explicit-solvation data have served as a benchmark to the results of polarizable continuum model (PCM) calculations. It has been demonstrated that the PCM approach can yield relative complexation energies comparable to the predictions based on molecular-level solvation, but at significantly lower computational cost. The best agreement between the explicit-solvation and the PCM results has been obtained when the van der Waals surface was adopted to build the molecular cavity.

  17. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  18. On the Leaky Math Pipeline: Comparing Implicit Math-Gender Stereotypes and Math Withdrawal in Female and Male Children and Adolescents

    ERIC Educational Resources Information Center

    Steffens, Melanie C.; Jelenec, Petra; Noack, Peter

    2010-01-01

    Many models assume that habitual human behavior is guided by spontaneous, automatic, or implicit processes rather than by deliberate, rule-based, or explicit processes. Thus, math-ability self-concepts and math performance could be related to implicit math-gender stereotypes in addition to explicit stereotypes. Two studies assessed at what age…

  19. Adolescents' Use of Sexually Explicit Internet Material and Their Sexual Attitudes and Behavior: Parallel Development and Directional Effects

    ERIC Educational Resources Information Center

    Doornwaard, Suzan M.; Bickham, David S.; Rich, Michael; ter Bogt, Tom F. M.; van den Eijnden, Regina J. J. M.

    2015-01-01

    Although research has repeatedly demonstrated that adolescents' use of sexually explicit Internet material (SEIM) is related to their endorsement of permissive sexual attitudes and their experience with sexual behavior, it is not clear how linkages between these constructs unfold over time. This study combined 2 types of longitudinal modeling,…

  20. A Multi-Year Program Developing an Explicit Reflective Pedagogy for Teaching Pre-Service Teachers the Nature of Science by Ostention

    ERIC Educational Resources Information Center

    Smith, Mike U.; Scharmann, Lawrence

    2008-01-01

    This investigation delineates a multi-year action research agenda designed to develop an instructional model for teaching the nature of science (NOS) to preservice science teachers. Our past research strongly supports the use of explicit reflective instructional methods, which includes Thomas Kuhn's notion of learning by ostention and treating…

  1. Explicit ions/implicit water generalized Born model for nucleic acids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.

    Ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model, and utilizes a non-standard approach to defining the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes – disconnected dielectric boundary around the solute-ion or ion-ion pairs. Fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force (PMF) for Na+-Cl− ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with all-atom explicit water molecular dynamics simulations used as reference. Expressed in the units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of DNA duplex; these differences in the counterion binding patterns were shown earlier to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes. MC simulations of CoHex ions interacting with homopolymeric poly(dA·dT) DNA duplex with modified (de-methylated) and native Thymine bases are used to explore the physics behind CoHex-Thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low dielectric volume of the methyl group can contribute significantly to CoHex-Thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range, and may be important to consider in the context of methylation effects on DNA condensation.

  2. Explicit ions/implicit water generalized Born model for nucleic acids

    NASA Astrophysics Data System (ADS)

    Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.

    2018-05-01

    The ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure, and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model and utilizes a non-standard approach to define the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes—disconnected dielectric boundary around the solute-ion or ion-ion pairs. A fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force for Na+-Cl- ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with all-atom explicit water molecular dynamics simulations used as reference. Expressed in the units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of the RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of the DNA duplex; these differences in the counterion binding patterns were earlier shown to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes. MC simulations of CoHex ions interacting with the homopolymeric poly(dA.dT) DNA duplex with modified (de-methylated) and native thymine bases are used to explore the physics behind CoHex-thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low dielectric volume of the methyl group can contribute significantly to CoHex-thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range and may be important to consider in the context of methylation effects on DNA condensation.
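
    For orientation, the sketch below shows only the textbook generalized Born pair term (Still-type effective distance) that models of this family start from; the paper's modified interaction terms and disconnected dielectric boundaries are not reproduced, and the ion charges and effective Born radii shown are assumed illustrative values.

        import numpy as np

        KE = 332.0636  # Coulomb constant, kcal*Angstrom/(mol*e^2)

        def f_gb(r, Ri, Rj):
            """Still-type effective interaction distance."""
            return np.sqrt(r**2 + Ri * Rj * np.exp(-r**2 / (4.0 * Ri * Rj)))

        def gb_pair(qi, qj, r, Ri, Rj, eps_in=1.0, eps_out=78.5):
            """Screening (polarization) energy of a charge pair, kcal/mol."""
            return -KE * (1.0 / eps_in - 1.0 / eps_out) * qi * qj / f_gb(r, Ri, Rj)

        # Hypothetical Na+/Cl- pair with assumed effective Born radii of 1.7 and 2.3 A
        for r in (3.0, 5.0, 8.0):
            screened = KE * (+1) * (-1) / r + gb_pair(+1, -1, r, 1.7, 2.3)
            print(r, round(screened, 2))   # vacuum Coulomb plus GB screening term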

  3. Impact of negation salience and cognitive resources on negation during attitude formation.

    PubMed

    Boucher, Kathryn L; Rydell, Robert J

    2012-10-01

    Because of the increased cognitive resources required to process negations, past research has shown that explicit attitude measures are more sensitive to negations than implicit attitude measures. The current work demonstrated that the differential impact of negations on implicit and explicit attitude measures was moderated by (a) the extent to which the negation was made salient and (b) the amount of cognitive resources available during attitude formation. When negations were less visually salient, explicit but not implicit attitude measures reflected the intended valence of the negations. When negations were more visually salient, both explicit and implicit attitude measures reflected the intended valence of the negations, but only when perceivers had ample cognitive resources during encoding. Competing models of negation processing, schema-plus-tag and fusion, were examined to determine how negation salience impacts the processing of negations.

  4. Argumentation and the Unconscious.

    ERIC Educational Resources Information Center

    Hample, Dale

    Noting that--although explicit attention to the unconscious has been rare in argument theories--the notion is unavoidable in any full theory, this paper argues that the unconscious plays a central role in argumentation. After briefly discussing the characteristics of the unconscious, the first section of the paper presents an analysis of…

  5. 12 CFR 567.6 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... strips receivable, other than credit-enhancing interest-only strips; (N)-(O) [Reserved] (P) That portion...'s risk management system that explicitly incorporates the full range of risks arising from the... management or loan review personnel to assign or review the credit risk ratings; (7) Include an internal...

  6. 12 CFR 567.6 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... strips receivable, other than credit-enhancing interest-only strips; (N)-(O) [Reserved] (P) That portion...'s risk management system that explicitly incorporates the full range of risks arising from the... management or loan review personnel to assign or review the credit risk ratings; (7) Include an internal...

  7. Linguistic Features of Middle School Environmental Education Texts.

    ERIC Educational Resources Information Center

    Chenhansa, Suporn; Schleppegrell, Mary

    1998-01-01

    The language used in environmental education texts has linguistic features that affect students' comprehension of concepts and their ability to envision solutions to environmental problems. Findings indicate that features of texts such as abstract nouns and lack of explicit agents impede students' full comprehension of complex issues and obscure…

  8. Co-occurrence of social anxiety and depression symptoms in adolescence: differential links with implicit and explicit self-esteem?

    PubMed

    de Jong, P J; Sportel, B E; de Hullu, E; Nauta, M H

    2012-03-01

    Social anxiety and depression often co-occur. As low self-esteem has been identified as a risk factor for both types of symptoms, it may help to explain their co-morbidity. Current dual process models of psychopathology differentiate between explicit and implicit self-esteem. Explicit self-esteem would reflect deliberate self-evaluative processes whereas implicit self-esteem would reflect simple associations in memory. Previous research suggests that low explicit self-esteem is involved in both social anxiety and depression whereas low implicit self-esteem is only involved in social anxiety. We tested whether the association between symptoms of social phobia and depression can indeed be explained by low explicit self-esteem, whereas low implicit self-esteem is only involved in social anxiety. Adolescents during the first stage of secondary education (n=1806) completed the Revised Child Anxiety and Depression Scale (RCADS) to measure symptoms of social anxiety and depression, the Rosenberg Self-Esteem Scale (RSES) to index explicit self-esteem and the Implicit Association Test (IAT) to measure implicit self-esteem. There was a strong association between symptoms of depression and social anxiety that could be largely explained by participants' explicit self-esteem. Only for girls did implicit self-esteem and the interaction between implicit and explicit self-esteem show small cumulative predictive validity for social anxiety, indicating that the association between low implicit self-esteem and social anxiety was most evident for girls with relatively low explicit self-esteem. Implicit self-esteem showed no significant predictive validity for depressive symptoms. The findings support the view that both shared and differential self-evaluative processes are involved in depression and social anxiety.

  9. Comparing implicit and explicit semantic access of direct and indirect word pairs in schizophrenia to evaluate models of semantic memory.

    PubMed

    Neill, Erica; Rossell, Susan Lee

    2013-02-28

    Semantic memory deficits in schizophrenia (SZ) are profound, yet there is no research comparing implicit and explicit semantic processing in the same participant sample. In the current study, both implicit and explicit priming are investigated using direct (LION-TIGER) and indirect (LION-STRIPES; where tiger is not displayed) stimuli comparing SZ to healthy controls. Based on a substantive review (Rossell and Stefanovic, 2007) and meta-analysis (Pomarol-Clotet et al., 2008), it was predicted that SZ would be associated with increased indirect priming implicitly. Further, it was predicted that SZ would be associated with abnormal indirect priming explicitly, replicating earlier work (Assaf et al., 2006). No specific hypotheses were made for implicit direct priming due to the heterogeneity of the literature. It was hypothesised that explicit direct priming would be intact based on the structured nature of this task. The pattern of results suggests (1) intact reaction time (RT) and error performance implicitly in the face of abnormal direct priming and (2) impaired RT and error performance explicitly. This pattern confirms general findings regarding implicit/explicit memory impairments in SZ whilst highlighting the unique pattern of performance specific to semantic priming. Finally, priming performance is discussed in relation to thought disorder and length of illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Spatially explicit modeling of 1992-2100 land cover and forest stand age for the conterminous United States

    USGS Publications Warehouse

    Sohl, Terry L.; Sayler, Kristi L.; Bouchard, Michelle; Reker, Ryan R.; Friesz, Aaron M.; Bennett, Stacie L.; Sleeter, Benjamin M.; Sleeter, Rachel R.; Wilson, Tamara; Soulard, Christopher E.; Knuppe, Michelle; Van Hofwegen, Travis

    2014-01-01

    Information on future land-use and land-cover (LULC) change is needed to analyze the impact of LULC change on ecological processes. The U.S. Geological Survey has produced spatially explicit, thematically detailed LULC projections for the conterminous United States. Four qualitative and quantitative scenarios of LULC change were developed, with characteristics consistent with the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emission Scenarios (SRES). The four quantified scenarios (A1B, A2, B1, and B2) served as input to the Forecasting Scenarios of Land-use Change (FORE-SCE) model. Four spatially explicit datasets consistent with scenario storylines were produced for the conterminous United States, with annual LULC maps from 1992 through 2100. The future projections are characterized by a loss of natural land covers in most scenarios, with corresponding expansion of anthropogenic land uses. Along with the loss of natural land covers, remaining natural land covers experience increased fragmentation under most scenarios, with only the B2 scenario remaining relatively stable in both proportion of remaining natural land covers and basic fragmentation measures. Forest stand age was also modeled. By 2100, scenarios and ecoregions with heavy forest cutting have relatively lower mean stand ages compared to those with less forest cutting. Stand ages differ substantially between unprotected and protected forest lands, as well as between different forest classes. The modeled data were compared to the National Land Cover Database (NLCD) and other data sources to assess model characteristics. The consistent, spatially explicit, and thematically detailed LULC projections and the associated forest stand age data layers have been used to analyze LULC impacts on carbon and greenhouse gas fluxes, biodiversity, climate and weather variability, hydrologic change, and other ecological processes.

  11. Discrete Choice Model of Food Store Trips Using National Household Food Acquisition and Purchase Survey (FoodAPS).

    PubMed

    Hillier, Amy; Smith, Tony E; Whiteman, Eliza D; Chrisinger, Benjamin W

    2017-09-27

    Where households across income levels shop for food is of central concern within a growing body of research focused on where people live relative to where they shop, what they purchase and eat, and how those choices influence the risk of obesity and chronic disease. We analyzed data from the National Household Food Acquisition and Purchase Survey (FoodAPS) using a conditional logit model to determine where participants shop for food to be prepared and eaten at home and how individual and household characteristics of food shoppers interact with store characteristics and distance from home in determining store choice. Store size, whether or not it was a full-service supermarket, and the driving distance from home to the store constituted the three significant main effects on store choice. Overall, participants were more likely to choose larger stores, conventional supermarkets rather than super-centers and other types of stores, and stores closer to home. Interaction effects show that participants receiving Supplemental Nutrition Assistance Program (SNAP) were even more likely to choose larger stores. Hispanic participants were more likely than non-Hispanics to choose full-service supermarkets while White participants were more likely to travel further than non-Whites. This study demonstrates the value of explicitly spatial discrete choice models and provides evidence of national trends consistent with previous smaller, local studies.

  12. Discrete Choice Model of Food Store Trips Using National Household Food Acquisition and Purchase Survey (FoodAPS)

    PubMed Central

    Hillier, Amy; Smith, Tony E.; Whiteman, Eliza D.

    2017-01-01

    Where households across income levels shop for food is of central concern within a growing body of research focused on where people live relative to where they shop, what they purchase and eat, and how those choices influence the risk of obesity and chronic disease. We analyzed data from the National Household Food Acquisition and Purchase Survey (FoodAPS) using a conditional logit model to determine where participants shop for food to be prepared and eaten at home and how individual and household characteristics of food shoppers interact with store characteristics and distance from home in determining store choice. Store size, whether or not it was a full-service supermarket, and the driving distance from home to the store constituted the three significant main effects on store choice. Overall, participants were more likely to choose larger stores, conventional supermarkets rather than super-centers and other types of stores, and stores closer to home. Interaction effects show that participants receiving Supplemental Nutrition Assistance Program (SNAP) were even more likely to choose larger stores. Hispanic participants were more likely than non-Hispanics to choose full-service supermarkets while White participants were more likely to travel further than non-Whites. This study demonstrates the value of explicitly spatial discrete choice models and provides evidence of national trends consistent with previous smaller, local studies. PMID:28953221
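
    The core of the model in the two records above is a conditional logit: for each shopper, the probability of choosing a store is a softmax over store utilities built from attributes such as size, store type, and distance. The toy sketch below illustrates that functional form only; the attribute values and coefficients are invented for illustration, not estimates from FoodAPS.

        import numpy as np

        def choice_probs(X, beta):
            """P(store i) = exp(x_i' beta) / sum_j exp(x_j' beta) for one choice set."""
            u = X @ beta
            u -= u.max()              # numerical stabilization
            e = np.exp(u)
            return e / e.sum()

        # columns: size (1000 sq ft), full-service supermarket (0/1), distance (miles)
        X = np.array([[45.0, 1, 1.2],
                      [30.0, 0, 0.4],
                      [60.0, 1, 5.0]])
        beta = np.array([0.02, 0.8, -0.3])   # larger, full-service, closer -> more likely
        print(choice_probs(X, beta))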

  13. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete and the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  14. Advancing the Explicit Representation of Lake Processes in WRF-Hydro

    NASA Astrophysics Data System (ADS)

    Yates, D. N.; Read, L.; Barlage, M. J.; Gochis, D.

    2017-12-01

    Realistic simulation of physical processes in lakes is essential for closing the water and energy budgets in a coupled land-surface and hydrologic model, such as the Weather Research and Forecasting (WRF) model's WRF-Hydro framework. A current version of WRF-Hydro, the National Water Model (NWM), includes 1,506 waterbodies derived from the National Hydrography Database, each of which is modeled using a level-pool routing scheme. This presentation discusses the integration of WRF's one-dimensional lake model into WRF-Hydro, which is used to estimate waterbody fluxes and thus explicitly represent latent and sensible heat and the mass balance occurring over the lakes. Results of these developments are presented through a case study from Lake Winnebago, Wisconsin. Scalability and computational benchmarks to expand to the continental-scale NWM are discussed.
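
    The level-pool routing referred to above is, in essence, a lumped mass balance dS/dt = inflow - outflow with outflow tied to stage. The sketch below shows that idea with a single broad-crested-weir outlet; the geometry and coefficients are hypothetical and do not reproduce the National Water Model's waterbody parameterization.

        import numpy as np

        def level_pool(inflow, dt=3600.0, area=1.0e7, h0=2.0,
                       weir_crest=2.5, weir_coef=1.6, weir_length=10.0):
            """Explicit mass balance for a lumped waterbody with a weir outlet."""
            h, stages, outflows = h0, [], []
            for q_in in inflow:
                head = max(h - weir_crest, 0.0)
                q_out = weir_coef * weir_length * head**1.5   # weir law, m^3/s
                h += dt * (q_in - q_out) / area               # stage update, m
                stages.append(h)
                outflows.append(q_out)
            return np.array(stages), np.array(outflows)

        hydrograph = np.concatenate([np.linspace(5, 50, 24), np.linspace(50, 5, 48)])
        stage, outflow = level_pool(hydrograph)
        print(stage.max(), outflow.max())   # outflow peak is attenuated and delayed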

  15. Priming within and across modalities: exploring the nature of rCBF increases and decreases.

    PubMed

    Badgaiyan, R D; Schacter, D L; Alpert, N M

    2001-02-01

    Neuroimaging studies suggest that within-modality priming is associated with reduced regional cerebral blood flow (rCBF) in the extrastriate area, whereas cross-modality priming is associated with increased rCBF in prefrontal cortex. To characterize the nature of rCBF changes in within- and cross-modality priming, we conducted two neuroimaging experiments using positron emission tomography (PET). In experiment 1, rCBF changes in within-modality auditory priming on a word stem completion task were observed under same- and different-voice conditions. Both conditions were associated with decreased rCBF in extrastriate cortex. In the different-voice condition there were additional rCBF changes in the middle temporal gyrus and prefrontal cortex. Results suggest that the extrastriate involvement in within-modality priming is sensitive to a change in sensory modality of target stimuli between study and test, but not to a change in the feature of a stimulus within the same modality. In experiment 2, we studied cross-modality priming on a visual stem completion test after encoding under full- and divided-attention conditions. Increased rCBF in the anterior prefrontal cortex was observed in the full- but not in the divided-attention condition. Because explicit retrieval is compromised after encoding under the divided-attention condition, prefrontal involvement in cross-modality priming indicates recruitment of an aspect of explicit retrieval mechanism. The aspect of explicit retrieval that is most likely to be involved in cross-modality priming is the familiarity effect. Copyright 2001 Academic Press.

  16. An improved risk-explicit interval linear programming model for pollution load allocation for watershed management.

    PubMed

    Xia, Bisheng; Qian, Xin; Yao, Hong

    2017-11-01

    Although the risk-explicit interval linear programming (REILP) model has solved the problem of having interval solutions, it has an equity problem, which can lead to unbalanced allocation between different decision variables. Therefore, an improved REILP model is proposed. This model adds an equity objective function and three constraint conditions to overcome this equity problem. In this case, pollution reduction is in proportion to pollutant load, which supports balanced development between different regional economies. The model is used to solve the problem of pollution load allocation in a small transboundary watershed. Compared with the original REILP model result, our model achieves equity between the upstream and downstream pollutant loads; it also overcomes the problem of the greatest pollution reduction being assigned to sources nearest to the control section. The model provides a better solution to the problem of pollution load allocation than previous versions.

  17. A tale of twin Higgs: natural twin two Higgs doublet models

    DOE PAGES

    Yu, Jiang-Hao

    2016-12-28

    In the original twin Higgs model, vacuum misalignment between the electroweak and new physics scales is realized by adding an explicit Z2 breaking term. Introducing an additional twin Higgs could accommodate spontaneous Z2 breaking, which explains the origin of this misalignment. We introduce a class of twin two Higgs doublet models with the most general scalar potential, and discuss the general conditions which trigger electroweak and Z2 symmetry breaking. Various scenarios for realising the vacuum misalignment are systematically discussed in a natural composite two Higgs doublet model framework: explicit Z2 breaking, radiative Z2 breaking, tadpole-induced Z2 breaking, and quartic-induced Z2 breaking. Finally, we investigate the Higgs mass spectra and Higgs phenomenology in these scenarios.

  18. Rain scavenging of solid rocket exhaust clouds

    NASA Technical Reports Server (NTRS)

    Dingle, A. N.

    1978-01-01

    An explicit model for cloud microphysics was developed for application to the problem of co-condensation/vaporization of HCl and H2O in the presence of Al2O3 particulate nuclei. Validity of the explicit model relative to the implicit model, which has been customarily applied to atmospheric cloud studies, was demonstrated by parallel computations of H2O condensation upon (NH4)2SO4 nuclei. A mesoscale predictive model designed to account for the impact of wet processes on atmospheric dynamics is also under development. Input data specifying the equilibrium state of HCl and H2O vapors in contact with aqueous HCl solutions were found to be limited, particularly in respect to temperature range.

  19. Habitat fragmentation resulting in overgrazing by herbivores.

    PubMed

    Kondoh, Michio

    2003-12-21

    Habitat fragmentation sometimes results in outbreaks of herbivorous insects and causes an enormous loss of primary production. It is hypothesized that the driving force behind such herbivore outbreaks is disruption of natural enemy attack, which releases herbivores from top-down control. To test this hypothesis I studied how trophic community structure changes along a gradient of habitat fragmentation using spatially implicit and explicit models of a tri-trophic (plant, herbivore and natural enemy) food chain. While in the spatially implicit model the number of trophic levels gradually decreases with increasing fragmentation, in the spatially explicit model a relatively low level of habitat fragmentation leads to overgrazing by herbivores, resulting in extinction of the plant population followed by a total system collapse. This provides theoretical support for the hypothesis that habitat fragmentation can lead to overgrazing by herbivores and suggests a central role of spatial structure in the influence of habitat fragmentation on trophic communities. Further, the spatially explicit model shows (i) that the total system collapse caused by overgrazing can occur only if the herbivore colonization rate is high; and (ii) that with increasing natural enemy colonization rate, the fragmentation level that leads to the system collapse becomes higher, and the frequency of the collapse is lowered.
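
    To make the contrast concrete, a spatially implicit model of the kind mentioned above can be written as a Levins-style patch-occupancy chain in which each trophic level colonizes patches occupied by the level below. The sketch below is a generic illustration of that structure, with D the destroyed-habitat fraction; the paper's actual equations, parameter values, and lattice-based spatially explicit counterpart are not reproduced.

        def chain_occupancy(D, T=2000.0, dt=0.05,
                            c=(0.6, 0.8, 1.0), e=(0.1, 0.1, 0.1)):
            """Plant/herbivore/enemy patch occupancies under habitat destruction D."""
            h = 1.0 - D                    # remaining habitat fraction
            p, q, r = 0.5, 0.3, 0.1        # initial occupancies
            for _ in range(int(T / dt)):
                dp = c[0] * p * (h - p) - e[0] * p
                dq = c[1] * q * (p - q) - e[1] * q
                dr = c[2] * r * (q - r) - e[2] * r
                p = max(p + dt * dp, 0.0)
                q = max(q + dt * dq, 0.0)
                r = max(r + dt * dr, 0.0)
            return p, q, r

        for D in (0.0, 0.3, 0.6):          # top levels are lost first as D grows
            print(D, [round(x, 3) for x in chain_occupancy(D)])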

  20. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
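
    The trade-off being benchmarked above can be seen on a toy 1-D heat equation: an explicit update is limited to dt <= dx^2/(2*kappa), while a backward Euler step solved with conjugate gradients can take much larger steps at the price of a linear solve per step. The sketch below illustrates only that baseline comparison; the RKL2 super time-stepping scheme and the MAS preconditioners are not implemented here.

        import numpy as np
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import cg

        N, L, kappa = 200, 1.0, 1.0
        dx = L / (N - 1)
        dt_explicit_max = dx**2 / (2.0 * kappa)   # classic explicit stability limit
        dt = 50.0 * dt_explicit_max               # step only an implicit scheme can take

        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
        A = identity(N) - dt * kappa * lap        # backward Euler system matrix

        u = np.exp(-((np.linspace(0, L, N) - 0.5) ** 2) / 0.01)   # initial bump
        for _ in range(20):
            u, info = cg(A, u, atol=1e-10)        # unpreconditioned CG solve
            assert info == 0

        print("explicit dt limit:", dt_explicit_max, "implicit dt used:", dt)
        print("peak after 20 implicit steps:", u.max())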

  1. The Be-WetSpa-Pest modeling approach to simulate human and environmental exposure from pesticide application

    NASA Astrophysics Data System (ADS)

    Binder, Claudia; Garcia-Santos, Glenda; Andreoli, Romano; Diaz, Jaime; Feola, Giuseppe; Wittensoeldner, Moritz; Yang, Jing

    2016-04-01

    This study presents an integrative and spatially explicit modeling approach for analyzing human and environmental exposure from pesticide application by smallholders in the potato-producing Andean region in Colombia. The modeling approach fulfills the following criteria: (i) it includes environmental and human compartments; (ii) it contains a behavioral decision-making model for estimating the effect of policies on pesticide flows to humans and the environment; (iii) it is spatially explicit; and (iv) it is modular and easily expandable to include additional modules, crops or technologies. The model was calibrated and validated for the Vereda La Hoya and was used to explore the effect of different policy measures in the region. The model has moderate data requirements and can be adapted relatively easily to other regions in developing countries with similar conditions.

  2. Learning to Model in Engineering

    ERIC Educational Resources Information Center

    Gainsburg, Julie

    2013-01-01

    Policymakers and education scholars recommend incorporating mathematical modeling into mathematics education. Limited implementation of modeling instruction in schools, however, has constrained research on how students learn to model, leaving unresolved debates about whether modeling should be reified and explicitly taught as a competence, whether…

  3. From single Debye-Hückel chains to polyelectrolyte solutions: Simulation results

    NASA Astrophysics Data System (ADS)

    Kremer, Kurt

    1996-03-01

    This lecture will present results from simulations of single weakly charged flexible chains, where the electrostatic part of the interaction is modeled by a Debye-Hückel potential (with U. Micka, IFF, Forschungszentrum Jülich, 52425 Jülich, Germany), as well as simulations of polyelectrolyte solutions, where the counterions are explicitly taken into account (with M. J. Stevens, Sandia Nat. Lab., Albuquerque, NM 87185-1111) (M. J. Stevens, K. Kremer, JCP 103, 1669 (1995)). The first set of simulations is meant to clear up a recent controversy on the dependency of the persistence length Lp on the screening length Γ. While the analytic theories give Lp ~ Γ^x with either x=1 or x=2, the simulations find for all experimentally accessible chain lengths a varying exponent that is significantly smaller than 1. This casts serious doubt on the applicability of this model for weakly charged polyelectrolytes in general. The second part deals with strongly charged flexible polyelectrolytes in salt-free solution. These simulations are performed for multichain systems. The full Coulomb interactions of the monomers and counterions are treated explicitly. Experimental measurements of the osmotic pressure and the structure factor are reproduced and extended. The simulations reveal a new picture of the chain structure based on calculations of the structure factor, persistence length, end-to-end distance, etc. Even at very low density, the chains show significant bending. Furthermore, the chains contract significantly before they start to overlap. We also show that counterion condensation dramatically alters the chain structure, even for a good solvent backbone.
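
    The single-chain simulations described above replace explicit counterions with a screened Coulomb (Debye-Hückel) pair potential between charged monomers. A minimal sketch of that potential, in arbitrary reduced units chosen here purely for illustration, is:

        import numpy as np

        def debye_huckel(r, q1=1.0, q2=1.0, bjerrum=1.0, screening=3.0):
            """U(r)/kT = lB * q1 * q2 * exp(-r/lambda_D) / r."""
            return bjerrum * q1 * q2 * np.exp(-r / screening) / r

        r = np.linspace(0.5, 10.0, 5)
        print(np.round(debye_huckel(r), 4))   # interaction decays faster than 1/r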

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  5. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  6. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
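
    The idea shared by all three records above is the splitting y' = f_explicit(y) + f_implicit(y), with the stiff (acoustic) part advanced implicitly and the slow part explicitly. The sketch below shows only the first-order IMEX Euler version of that idea on a scalar stiff ODE; the ARS/ARK tableaux evaluated in the paper are not reproduced.

        import numpy as np

        lam = 1.0e4                    # stiff relaxation rate, treated implicitly
        def f_explicit(t):             # slow forcing, treated explicitly
            return np.cos(t)

        dt, t, y = 0.01, 0.0, 1.0      # dt >> 1/lam: fully explicit Euler would blow up
        for _ in range(1000):
            # y_{n+1} = y_n + dt*f_explicit(t_n) - dt*lam*y_{n+1}
            y = (y + dt * f_explicit(t)) / (1.0 + dt * lam)
            t += dt

        print(y, np.cos(t) / lam)      # solution tracks the slow manifold ~ cos(t)/lam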

  7. How the Brain Decides When to Work and When to Rest: Dissociation of Implicit-Reactive from Explicit-Predictive Computational Processes

    PubMed Central

    Meyniel, Florent; Safra, Lou; Pessiglione, Mathias

    2014-01-01

    A pervasive case of the cost-benefit problem is how to allocate effort over time, i.e. deciding when to work and when to rest. An economic decision perspective would suggest that duration of effort is determined beforehand, depending on expected costs and benefits. However, the literature on exercise performance emphasizes that decisions are made on the fly, depending on physiological variables. Here, we propose and validate a general model of effort allocation that integrates these two views. In this model, a single variable, termed cost evidence, accumulates during effort and dissipates during rest, triggering effort cessation and resumption when reaching bounds. We assumed that such a basic mechanism could explain implicit adaptation, whereas the latent parameters (slopes and bounds) could be amenable to explicit anticipation. A series of behavioral experiments manipulating effort duration and difficulty was conducted in a total of 121 healthy humans to dissociate implicit-reactive from explicit-predictive computations. Results show 1) that effort and rest durations are adapted on the fly to variations in cost-evidence level, 2) that the cost-evidence fluctuations driving the behavior do not match explicit ratings of exhaustion, and 3) that actual difficulty impacts effort duration whereas expected difficulty impacts rest duration. Taken together, our findings suggest that cost evidence is implicitly monitored online, with an accumulation rate proportional to actual task difficulty. In contrast, cost-evidence bounds and dissipation rate might be adjusted in anticipation, depending on explicit task difficulty. PMID:24743711
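
    The cost-evidence mechanism described above amounts to a single accumulate-to-bound variable. The toy sketch below implements that mechanism with arbitrary illustrative rates and bounds (not fitted parameters) to show the qualitative prediction that higher actual difficulty shortens effort bouts:

        def effort_rest_bouts(difficulty=1.0, dissipation=0.5,
                              lower=0.0, upper=10.0, dt=0.01, T=200.0):
            """Alternate effort/rest as cost evidence hits the upper/lower bound."""
            x, working, bouts, bout_start = lower, True, [], 0.0
            for step in range(int(T / dt)):
                t = step * dt
                x += dt * (difficulty if working else -dissipation)
                if working and x >= upper:
                    bouts.append(("effort", round(t - bout_start, 2)))
                    working, bout_start = False, t
                elif not working and x <= lower:
                    bouts.append(("rest", round(t - bout_start, 2)))
                    working, bout_start = True, t
            return bouts

        print(effort_rest_bouts(difficulty=1.0)[:4])   # longer effort bouts
        print(effort_rest_bouts(difficulty=2.0)[:4])   # harder task, shorter bouts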

  8. Assessment of the Simulated Molecular Composition with the GECKO-A Modeling Tool Using Chamber Observations for α-Pinene.

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Camredon, M.; Isaacman-VanWertz, G. A.; Karam, C.; Valorso, R.; Madronich, S.; Kroll, J. H.

    2016-12-01

    Gas phase oxidation of VOC is a gradual process leading to the formation of multifunctional organic compounds, i.e., typically species with higher oxidation state, high water solubility and low volatility. These species contribute to the formation of secondary organic aerosols (SOA) via multiphase processes involving a myriad of organic species that evolve through thousands of reactions and gas/particle mass exchanges. Explicit chemical mechanisms reflect the understanding of these multigenerational oxidation steps. These mechanisms rely directly on elementary reactions to describe the chemical evolution and track the identity of organic carbon through various phases down to ultimate oxidation products. The development, assessment and improvement of such explicit schemes is a key issue, as major uncertainties remain on the chemical pathways involved during atmospheric oxidation of organic matter. An array of mass spectrometric techniques (CIMS, PTRMS, AMS) was recently used to track the composition of organic species during α-pinene oxidation in the MIT environmental chamber, providing an experimental database to evaluate and improve explicit mechanisms. In this study, the GECKO-A tool (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to generate fully explicit oxidation schemes for α-pinene multiphase oxidation simulating the MIT experiment. The ability of the GECKO-A chemical scheme to explain the organic molecular composition in the gas and the condensed phases is explored. First results of this model/observation comparison at the molecular level will be presented.

  9. Understanding the influence of external perturbation on aziridinium ion formation

    NASA Astrophysics Data System (ADS)

    Sinha, Sourab; Bhattacharyya, Pradip Kr

    2018-01-01

    A density functional theory study is performed to understand the effect of discrete water molecules during Az+ ion formation in nitrogen mustards. A comparative study of three drug molecules (mustine, chlorambucil and melphalan) in the gas phase and in implicit and explicit solvation models is reported. Noteworthy changes in the structure and C-N stretching frequencies of the transition states have been observed in the presence of explicit water molecules. The presence of explicit water molecules reduces the positive charge around the tricyclic Az+ ring, thereby stabilising it. Both the activation energies and rate constants are significantly affected in the presence of discrete water molecules.

  10. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
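
    A stripped-down version of the Poisson aperture-photometry problem can be written down directly: counts in the source aperture follow Poisson(s + a*b), counts in the background aperture follow Poisson(b), and the posterior for the source intensity s is obtained on a grid after marginalizing b. The sketch below uses flat priors and ignores the paper's harder cases (source flux in the background aperture, overlapping source apertures); all counts shown are made up.

        import numpy as np
        from scipy.stats import poisson

        N_src, N_bkg = 12, 40        # observed counts (hypothetical)
        a_ratio = 0.1                # source-aperture area / background-aperture area

        s_grid = np.linspace(0.0, 40.0, 401)
        b_grid = np.linspace(0.1, 120.0, 600)
        S, B = np.meshgrid(s_grid, b_grid, indexing="ij")

        like = poisson.pmf(N_src, S + a_ratio * B) * poisson.pmf(N_bkg, B)
        ds = s_grid[1] - s_grid[0]
        post_s = like.sum(axis=1)            # marginalize the background rate b
        post_s /= post_s.sum() * ds          # normalize on the grid

        mean_s = (s_grid * post_s).sum() * ds
        cdf = np.cumsum(post_s) * ds
        lo, hi = s_grid[np.searchsorted(cdf, 0.16)], s_grid[np.searchsorted(cdf, 0.84)]
        print(f"posterior mean s = {mean_s:.2f}, 68% interval = [{lo:.2f}, {hi:.2f}]")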

  11. Locally adaptive, spatially explicit projection of US population for 2030 and 2050.

    PubMed

    McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L

    2015-02-03

    Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating multiple various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.

  12. Analytical transport network theory to guide the design of 3-D microstructural networks in energy materials: Part 1. Flow without reactions

    NASA Astrophysics Data System (ADS)

    Cocco, Alex P.; Nakajo, Arata; Chiu, Wilson K. S.

    2017-12-01

    We present a fully analytical, heuristic model - the "Analytical Transport Network Model" - for steady-state, diffusive, potential flow through a 3-D network. Employing a combination of graph theory, linear algebra, and geometry, the model explicitly relates a microstructural network's topology and the morphology of its channels to an effective material transport coefficient (a general term meant to encompass, e.g., conductivity or diffusion coefficient). The model's transport coefficient predictions agree well with those from electrochemical fin (ECF) theory and finite element analysis (FEA), but are computed 0.5-1.5 and 5-6 orders of magnitude faster, respectively. In addition, the theory explicitly relates a number of morphological and topological parameters directly to the transport coefficient, whereby the distributions that characterize the structure are readily available for further analysis. Furthermore, ATN's explicit development provides insight into the nature of the tortuosity factor and offers the potential to apply theory from network science and to consider the optimization of a network's effective resistance in a mathematically rigorous manner. The ATN model's speed and relative ease-of-use offer the potential to aid in accelerating the design (with respect to transport), and thus reducing the cost, of energy materials.
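
    The network reduction described above can be illustrated with a generic resistor-network calculation: each channel contributes a conductance set by its geometry, Kirchhoff balance is enforced at interior nodes through the graph Laplacian, and an effective network conductance follows from a unit potential drop. The toy numbers below are invented and the ATN-specific formulas of the paper are not reproduced.

        import numpy as np

        edges = [  # (node_i, node_j, channel length, cross-sectional area)
            (0, 1, 1.0, 0.5), (1, 2, 2.0, 0.2), (0, 3, 1.5, 0.4),
            (3, 2, 1.0, 0.3), (1, 3, 1.0, 0.1),
        ]
        sigma, n = 1.0, 4                     # intrinsic conductivity, node count
        L = np.zeros((n, n))
        for i, j, length, area in edges:
            g = sigma * area / length         # channel conductance
            L[i, i] += g; L[j, j] += g
            L[i, j] -= g; L[j, i] -= g

        inlet, outlet, V = 0, 2, 1.0          # unit potential drop across the network
        interior = [k for k in range(n) if k not in (inlet, outlet)]
        phi = np.zeros(n); phi[inlet] = V
        A = L[np.ix_(interior, interior)]     # Kirchhoff balance at interior nodes
        b = -L[np.ix_(interior, [inlet])].ravel() * V
        phi[interior] = np.linalg.solve(A, b)

        print("effective network conductance:", -(L[outlet, :] @ phi) / V)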

  13. Developing and testing a global-scale regression model to quantify mean annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10^6 km^2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
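
    The regression described above is, structurally, a linear model for (log) mean annual flow in terms of catchment area, precipitation, temperature, slope, and elevation. The sketch below fits a model of that shape to synthetic data purely to show the workflow; the coefficients, transformations, and reported skill of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        area = 10 ** rng.uniform(0.5, 5.5, n)       # km^2
        precip = rng.uniform(300, 2500, n)          # mm/yr
        temp = rng.uniform(-5, 28, n)               # deg C
        slope = rng.uniform(0.1, 30, n)             # percent
        elev = rng.uniform(10, 3000, n)             # m

        # Synthetic "truth" with a rough water-balance flavour, used only for the demo
        maf = 1e-6 * area * np.clip(precip - 20 * np.maximum(temp, 0), 50, None)
        maf *= np.exp(rng.normal(0, 0.3, n))        # lognormal scatter

        X = np.column_stack([np.ones(n), np.log(area), np.log(precip), temp, slope, elev])
        y = np.log(maf)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ coef).var() / y.var()
        print("R^2 on synthetic data:", round(r2, 3))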

  14. Spatially explicit decision support for selecting translocation areas for Mojave desert tortoises

    USGS Publications Warehouse

    Heaton, Jill S.; Nussear, Kenneth E.; Esque, Todd C.; Inman, Richard D.; Davenport, Frank; Leuteritz, Thomas E.; Medica, Philip A.; Strout, Nathan W.; Burgess, Paul A.; Benvenuti, Lisa

    2008-01-01

    Spatially explicit decision support systems are assuming an increasing role in natural resource and conservation management. In order for these systems to be successful, however, they must address real-world management problems with input from both the scientific and management communities. The National Training Center at Fort Irwin, California, has expanded its training area, encroaching U.S. Fish and Wildlife Service critical habitat set aside for the Mojave desert tortoise (Gopherus agassizii), a federally threatened species. Of all the mitigation measures proposed to offset expansion, the most challenging to implement was the selection of areas most feasible for tortoise translocation. We developed an objective, open, scientifically defensible spatially explicit decision support system to evaluate translocation potential within the Western Mojave Recovery Unit for tortoise populations under imminent threat from military expansion. Using up to a total of 10 biological, anthropogenic, and/or logistical criteria, seven alternative translocation scenarios were developed. The final translocation model was a consensus model between the seven scenarios. Within the final model, six potential translocation areas were identified.

  15. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handle such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  16. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handle such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
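
    What makes the controller above "explicit" is that the optimal input is precomputed offline as a piecewise-affine function of the state, so the online work reduces to evaluating that function. The smallest instance of this structure is a scalar system with a one-step horizon and an input bound, where the explicit law is just a saturated linear gain; the sketch below shows that toy case only and is not the lane-keeping controller of the paper.

        import numpy as np

        a, b, Q, R, u_max = 1.2, 0.5, 1.0, 0.1, 1.0   # toy plant x+ = a*x + b*u
        K = -Q * a * b / (Q * b**2 + R)               # unconstrained one-step optimum

        def explicit_mpc(x):
            """Piecewise-affine explicit law: three regions of the state space."""
            return float(np.clip(K * x, -u_max, u_max))

        x = 2.0
        for k in range(15):
            u = explicit_mpc(x)       # online step is just a function evaluation
            x = a * x + b * u
            if k % 5 == 0:
                print(f"step {k:2d}: u = {u:+.3f}, x = {x:.3f}")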

  17. Are adverse effects incorporated in economic models? An initial review of current practice.

    PubMed

    Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N

    2009-12-01

    To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (< 4%) used the results of a meta-analysis from the systematic review of clinical effectiveness and none used only data from the review without further manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total, 30% of the models that included adverse effects used withdrawals related to drug toxicity and therefore might be interpreted as using withdrawals to capture adverse effects, but this was explicitly stated in only three reports. Of the 37 models that did not include adverse effects, 18 provided justification for this omission, most commonly lack of data; 19 appeared to make no explicit consideration of adverse effects in the model. There is an implicit assumption within modelling guidance that adverse effects are very important but there is a lack of clarity regarding how they should be dealt with and considered in modelling. In many cases a lack of clear reporting in the HTAs made it extremely difficult to ascertain what had actually been carried out in consideration of adverse effects. The main recommendation is for much clearer and explicit reporting of adverse effects, or their exclusion, in decision models and for explicit recognition in future guidelines that 'all relevant outcomes' should include some consideration of adverse events.

  18. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  19. Simple liquid models with corrected dielectric constants

    PubMed Central

    Fennell, Christopher J.; Li, Libo; Dill, Ken A.

    2012-01-01

    Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
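
    For reference, the static dielectric constant targeted by these parameterizations is usually estimated from fluctuations of the total dipole moment M of the simulation box; one standard (conducting-boundary) expression, given here only as background and not quoted from the paper, is

    ```latex
    \varepsilon_r \;=\; 1 \;+\; \frac{\langle \mathbf{M}^{2}\rangle - \langle \mathbf{M}\rangle^{2}}{3\,\varepsilon_0\, V\, k_B T},
    ```

    where V is the box volume and T the temperature.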

  20. Multi-model predictive control based on LMI: from the adaptation of the state-space model to the analytic description of the control law

    NASA Astrophysics Data System (ADS)

    Falugi, P.; Olaru, S.; Dumur, D.

    2010-08-01

    This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.
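
    The LMIs used in the article are tied to its specific predictive-control formulation; purely to illustrate the mechanics of an LMI feasibility problem for a polytopic model, the sketch below (vertex matrices and tolerances are made up, and it assumes the cvxpy package with an SDP-capable solver installed) searches for a common quadratic Lyapunov function:

    ```python
    import cvxpy as cp
    import numpy as np

    # Hypothetical vertices of a polytopic model dx/dt = A(t) x, with A(t) in conv{A1, A2}.
    A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
    A2 = np.array([[0.0, 1.0], [-3.0, -0.5]])

    n = 2
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n)]
    # A_i' P + P A_i < 0 at every vertex => V(x) = x' P x decreases over the whole polytope.
    for A in (A1, A2):
        constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status, P.value)
    ```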

  1. Efficient Translation of LTL Formulae into Buchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit state model checker under development at the NASA Ames Research Center.
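
    The tableau construction itself is too long to sketch here; as a minimal illustration of the target formalism only (this is not the JPF translator, and the automaton below is hand-coded), the snippet checks whether a deterministic Büchi automaton for "G F p" (p holds infinitely often) accepts an ultimately periodic word given as a finite prefix plus a repeated cycle:

    ```python
    # Deterministic Büchi automaton for "G F p": state 1 ("p just seen") is accepting.
    def delta(state, letter):          # letter = set of atomic propositions true at this step
        return 1 if 'p' in letter else 0

    ACCEPTING = {1}

    def accepts_lasso(prefix, cycle):
        """Acceptance of prefix . cycle^omega by the deterministic automaton above."""
        q = 0
        for letter in prefix:
            q = delta(q, letter)
        seen, history = {}, []         # entry state and "accepting state visited?" per cycle repetition
        while q not in seen:
            seen[q] = len(history)
            hit = False
            for letter in cycle:
                q = delta(q, letter)
                hit = hit or (q in ACCEPTING)
            history.append(hit)
        # repetitions from seen[q] onward recur forever; accept iff one of them is accepting
        return any(history[seen[q]:])

    print(accepts_lasso([set()], [{'p'}, set()]))   # p every other step -> True
    print(accepts_lasso([{'p'}], [set()]))          # p only finitely often -> False
    ```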

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jiang-Hao

    In the original twin Higgs model, vacuum misalignment between the electroweak and new physics scales is realized by adding an explicit Z_2 breaking term. Introducing an additional twin Higgs could accommodate spontaneous Z_2 breaking, which explains the origin of this misalignment. We introduce a class of twin two-Higgs-doublet models with the most general scalar potential, and discuss general conditions which trigger electroweak and Z_2 symmetry breaking. Various scenarios for realising the vacuum misalignment are systematically discussed in a natural composite two-Higgs-doublet model framework: explicit Z_2 breaking, radiative Z_2 breaking, tadpole-induced Z_2 breaking, and quartic-induced Z_2 breaking. Finally, we investigate the Higgs mass spectra and Higgs phenomenology in these scenarios.

  3. Fire in the Brazilian Amazon: A Spatially Explicit Model for Policy Impact Analysis

    NASA Technical Reports Server (NTRS)

    Arima, Eugenio Y.; Simmons, Cynthia S.; Walker, Robert T.; Cochrane, Mark A.

    2007-01-01

    This article implements a spatially explicit model to estimate the probability of forest and agricultural fires in the Brazilian Amazon. We innovate by using variables that reflect farmgate prices of beef and soy, and also provide a conceptual model of managed and unmanaged fires in order to simulate the impact of road paving, cattle exports, and conservation area designation on the occurrence of fire. Our analysis shows that fire is positively correlated with the price of beef and soy, and that the creation of new conservation units may offset the negative environmental impacts caused by the increasing number of fire events associated with early stages of frontier development.

  4. Federal Workforce Quality: Measurement and Improvement

    DTIC Science & Technology

    1992-08-01

    explicit standards of production and service quality. Assessment Tools 4 OPM should institutionalize its data collection program of longitudinal research...include data about requirements, should set explicit standards of various aspects of the model. That is, the production and service quality. effort...are the immediate consumers service quality are possible. of the products and services delivered, and still others in the larger society who have no

  5. Teachers' Implicit Attitudes, Explicit Beliefs, and the Mediating Role of Respect and Cultural Responsibility on Mastery and Performance-Focused Instructional Practices

    ERIC Educational Resources Information Center

    Kumar, Revathy; Karabenick, Stuart A.; Burgoon, Jacob N.

    2015-01-01

    The theory of planned behavior and the dual process attitude-to-behavior MODE model framed an examination of how White teachers' (N = 241) implicit and explicit attitudes toward White versus non-White students were related to their classroom instructional practices in 2 school districts with a high percentage of Arab American and Chaldean American…

  6. The Impact of a Systematic and Explicit Vocabulary Intervention in Spanish with Spanish-Speaking English Learners in First Grade

    ERIC Educational Resources Information Center

    Cena, Johanna; Baker, Doris Luft; Kame'enui, Edward J.; Baker, Scott K.; Park, Yonghan; Smolkowski, Keith

    2013-01-01

    This study examined the impact of a 15-min daily explicit vocabulary intervention in Spanish on expressive and receptive vocabulary knowledge and oral reading fluency in Spanish, and on language proficiency in English. Fifty Spanish-speaking English learners who received 90 min of Spanish reading instruction in an early transition model were…

  7. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described, and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming, as compared to compiler optimization.
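
    The report is about vectorization and concurrency rather than the integrator itself, but for orientation a minimal central-difference (leapfrog) update for a linear system with a lumped mass matrix looks roughly like the sketch below; all matrices, sizes, and the step size are made-up examples:

    ```python
    import numpy as np

    def central_difference(M_diag, K, u0, v0, f_ext, dt, n_steps):
        """Explicit central-difference integration of M a = f_ext - K u with diagonal M.
        Conditionally stable: dt must stay below roughly 2/omega_max of the discrete system."""
        u = u0.copy()
        a = (f_ext - K @ u) / M_diag
        v_half = v0 + 0.5 * dt * a               # start the staggered (half-step) velocity
        for _ in range(n_steps):
            u = u + dt * v_half                  # displacement update
            a = (f_ext - K @ u) / M_diag         # internal force is simply K u in this linear sketch
            v_half = v_half + dt * a             # velocity update
        return u, v_half

    # Tiny 2-DOF example with hypothetical stiffness and load
    M_diag = np.array([1.0, 1.0])
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])
    u, v = central_difference(M_diag, K, np.zeros(2), np.zeros(2),
                              np.array([0.0, 1.0]), dt=0.01, n_steps=1000)
    ```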

  8. Multiscale modeling of a rectifying bipolar nanopore: explicit-water versus implicit-water simulations.

    PubMed

    Ható, Zoltán; Valiskó, Mónika; Kristóf, Tamás; Gillespie, Dirk; Boda, Dezsö

    2017-07-21

    In a multiscale modeling approach, we present computer simulation results for a rectifying bipolar nanopore at two modeling levels. In an all-atom model, we use explicit water to simulate ion transport directly with the molecular dynamics technique. In a reduced model, we use implicit water and apply the Local Equilibrium Monte Carlo method together with the Nernst-Planck transport equation. This hybrid method makes the fast calculation of ion transport possible at the price of lost details. We show that the implicit-water model is an appropriate representation of the explicit-water model when we look at the system at the device (i.e., input vs. output) level. The two models produce qualitatively similar behavior of the electrical current for different voltages and model parameters. Looking at the details of concentration and potential profiles, we find profound differences between the two models. These differences, however, do not influence the basic behavior of the model as a device because they do not influence the z-dependence of the concentration profiles which are the main determinants of current. These results then address an old paradox: how do reduced models, whose assumptions should break down in a nanoscale device, predict experimental data? Our simulations show that reduced models can still capture the overall device physics correctly, even though they get some important aspects of the molecular-scale physics quite wrong; reduced models work because they include the physics that is necessary from the point of view of device function. Therefore, reduced models can suffice for general device understanding and device design, but more detailed models might be needed for molecular level understanding.

  9. Memory Systems Do Not Divide on Consciousness: Reinterpreting Memory in Terms of Activation and Binding

    PubMed Central

    Reder, Lynne M.; Park, Heekyeong; Kieffaber, Paul D.

    2009-01-01

    There is a popular hypothesis that performance on implicit and explicit memory tasks reflects 2 distinct memory systems. Explicit memory is said to store those experiences that can be consciously recollected, and implicit memory is said to store experiences and affect subsequent behavior but to be unavailable to conscious awareness. Although this division based on awareness is a useful taxonomy for memory tasks, the authors review the evidence that the unconscious character of implicit memory does not necessitate that it be treated as a separate system of human memory. They also argue that some implicit and explicit memory tasks share the same memory representations and that the important distinction is whether the task (implicit or explicit) requires the formation of a new association. The authors review and critique dissociations from the behavioral, amnesia, and neuroimaging literatures that have been advanced in support of separate explicit and implicit memory systems by highlighting contradictory evidence and by illustrating how the data can be accounted for using a simple computational memory model that assumes the same memory representation for those disparate tasks. PMID:19210052

  10. A New Canopy Integration Factor

    NASA Astrophysics Data System (ADS)

    Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.

    2017-12-01

    Ecosystem modelers have long debated how best to represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves - sun and shade - to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, as well as present ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.

  11. Models of social evolution: can we do better to predict 'who helps whom to achieve what'?

    PubMed

    Rodrigues, António M M; Kokko, Hanna

    2016-02-05

    Models of social evolution and the evolution of helping have been classified in numerous ways. Two categorical differences have, however, escaped attention in the field. Models tend not to justify why they use a particular assumption structure about who helps whom: a large number of authors model peer-to-peer cooperation of essentially identical individuals, probably for reasons of mathematical convenience; others are inspired by particular cooperatively breeding species, and tend to assume unidirectional help where subordinates help a dominant breed more efficiently. Choices regarding what the help achieves (i.e. which life-history trait of the helped individual is improved) are similarly made without much comment: fecundity benefits are much more commonly modelled than survival enhancements, despite evidence that these may interact when the helped individual can perform life-history reallocations (load-lightening and related phenomena). We review our current theoretical understanding of effects revealed when explicitly asking 'who helps whom to achieve what', from models of mutual aid in partnerships to the very few models that explicitly contrast the strength of selection to help enhance another individual's fecundity or survival. As a result of idiosyncratic modelling choices in contemporary literature, including the varying degree to which demographic consequences are made explicit, there is surprisingly little agreement on what types of help are predicted to evolve most easily. We outline promising future directions to fill this gap. © 2016 The Author(s).

  12. Models of social evolution: can we do better to predict ‘who helps whom to achieve what’?

    PubMed Central

    Rodrigues, António M. M.; Kokko, Hanna

    2016-01-01

    Models of social evolution and the evolution of helping have been classified in numerous ways. Two categorical differences have, however, escaped attention in the field. Models tend not to justify why they use a particular assumption structure about who helps whom: a large number of authors model peer-to-peer cooperation of essentially identical individuals, probably for reasons of mathematical convenience; others are inspired by particular cooperatively breeding species, and tend to assume unidirectional help where subordinates help a dominant breed more efficiently. Choices regarding what the help achieves (i.e. which life-history trait of the helped individual is improved) are similarly made without much comment: fecundity benefits are much more commonly modelled than survival enhancements, despite evidence that these may interact when the helped individual can perform life-history reallocations (load-lightening and related phenomena). We review our current theoretical understanding of effects revealed when explicitly asking ‘who helps whom to achieve what’, from models of mutual aid in partnerships to the very few models that explicitly contrast the strength of selection to help enhance another individual's fecundity or survival. As a result of idiosyncratic modelling choices in contemporary literature, including the varying degree to which demographic consequences are made explicit, there is surprisingly little agreement on what types of help are predicted to evolve most easily. We outline promising future directions to fill this gap. PMID:26729928

  13. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    NASA Astrophysics Data System (ADS)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
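
    The paper's results are exact and analytical (via the PPA-mapping); purely as a numerical point of reference, the constant-rate two-stage model it generalizes can be simulated with a standard Gillespie algorithm as sketched below, with all rate values chosen arbitrarily:

    ```python
    import numpy as np

    def gillespie_two_stage(k_m, g_m, k_p, g_p, t_end, rng):
        """Two-stage gene expression: 0 -> mRNA (k_m), mRNA -> 0 (g_m),
        mRNA -> mRNA + protein (k_p), protein -> 0 (g_p)."""
        t, m, p = 0.0, 0, 0
        while True:
            rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            if t >= t_end:
                return m, p
            r = rng.choice(4, p=rates / total)
            if   r == 0: m += 1
            elif r == 1: m -= 1
            elif r == 2: p += 1
            else:        p -= 1

    samples = [gillespie_two_stage(1.0, 0.2, 5.0, 0.05, t_end=200.0,
                                   rng=np.random.default_rng(i)) for i in range(200)]
    mean_protein = np.mean([p for _, p in samples])   # steady-state mean is (k_m/g_m)*(k_p/g_p) = 500
    ```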

  14. Parameterization of subgrid-scale stress by the velocity gradient tensor

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.; Novikov, E. A.

    1993-01-01

    The objective of this work is to construct and evaluate subgrid-scale models that depend on both the strain rate and the vorticity. This will be accomplished by first assuming that the subgrid-scale stress is a function of the strain and rotation rate tensors. Extensions of the Cayley-Hamilton theorem can then be used to write the assumed functional dependence explicitly in the form of a tensor polynomial involving products of the strain and rotation rates. Finally, use of this explicit expression as a subgrid-scale model will be evaluated using direct numerical simulation data for homogeneous, isotropic turbulence.
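
    A representative truncation of such a tensor polynomial (written generically here, not as the specific model calibrated in the report) combines the strain-rate tensor S and rotation-rate tensor Ω as

    ```latex
    \tau_{ij} \;\approx\; c_1\,\Delta^{2}\,|S|\,S_{ij}
    \;+\; c_2\,\Delta^{2}\!\left(S_{ik}\Omega_{kj}-\Omega_{ik}S_{kj}\right)
    \;+\; c_3\,\Delta^{2}\!\left(S_{ik}S_{kj}-\tfrac{1}{3}S_{mn}S_{mn}\,\delta_{ij}\right),
    ```

    with Δ the filter width, |S| = (2 S_mn S_mn)^{1/2}, and coefficients c_k to be fixed from the DNS data.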

  15. Default contagion risks in Russian interbank market

    NASA Astrophysics Data System (ADS)

    Leonidov, A. V.; Rumyantsev, E. L.

    2016-06-01

    Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on considering the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion explicitly taking into account the empirical bow-tie structure reflecting functionality of the corresponding nodes (borrowers, lenders, borrowers and lenders simultaneously), degree distributions and disassortativity of the interbank network under consideration based on empirical data is developed. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.

  16. Explicit polarization: a quantum mechanical framework for developing next generation force fields.

    PubMed

    Gao, Jiali; Truhlar, Donald G; Wang, Yingjie; Mazack, Michael J M; Löffler, Patrick; Provorse, Makenzie R; Rehak, Pavel

    2014-09-16

    Conspectus Molecular mechanical force fields have been successfully used to model condensed-phase and biological systems for a half century. By means of careful parametrization, such classical force fields can be used to provide useful interpretations of experimental findings and predictions of certain properties. Yet, there is a need to further improve computational accuracy for the quantitative prediction of biomolecular interactions and to model properties that depend on the wave functions and not just the energy terms. A new strategy called explicit polarization (X-Pol) has been developed to construct the potential energy surface and wave functions for macromolecular and liquid-phase simulations on the basis of quantum mechanics rather than only using quantum mechanical results to fit analytic force fields. In this spirit, this approach is called a quantum mechanical force field (QMFF). X-Pol is a general fragment method for electronic structure calculations based on the partition of a condensed-phase or macromolecular system into subsystems ("fragments") to achieve computational efficiency. Here, intrafragment energy and the mutual electronic polarization of interfragment interactions are treated explicitly using quantum mechanics. X-Pol can be used as a general, multilevel electronic structure model for macromolecular systems, and it can also serve as a new-generation force field. As a quantum chemical model, a variational many-body (VMB) expansion approach is used to systematically improve interfragment interactions, including exchange repulsion, charge delocalization, dispersion, and other correlation energies. As a quantum mechanical force field, these energy terms are approximated by empirical functions in the spirit of conventional molecular mechanics. This Account first reviews the formulation of X-Pol, in the full variationally correct version, in the faster embedded version, and with systematic many-body improvements. We discuss illustrative examples involving water clusters (which show the power of two-body corrections), ethylmethylimidazolium acetate ionic liquids (which reveal that the amount of charge transfer between anion and cation is much smaller than what has been assumed in some classical simulations), and a solvated protein in aqueous solution (which shows that the average charge distribution of carbonyl groups along the polypeptide chain depends strongly on their position in the sequence, whereas they are fixed in most classical force fields). The development of QMFFs also offers an opportunity to extend the accuracy of biochemical simulations to areas where classical force fields are often insufficient, especially in the areas of spectroscopy, reactivity, and enzyme catalysis.

  17. Do You See What I See? Exploring the Consequences of Luminosity Limits in Black Hole–Galaxy Evolution Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Mackenzie L.; Hickox, Ryan C.; DiPompeo, Michael A.

    In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yields samples with different AGN host galaxy properties.

  18. Pauci ex tanto numero: reduce redundancy in multi-model ensembles

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-08-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependant biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
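
    The member-selection procedures examined in the paper are not reproduced here; as a toy illustration of the redundancy diagnostic itself (synthetic data, an arbitrary member count, and an arbitrary 95% variance target), one can ask how many effective members the error-correlation spectrum of an ensemble really contains:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic "ensemble": 10 members, 500 forecast errors each, built so that
    # seven members share a large common error component (i.e. are redundant).
    common = rng.normal(size=500)
    members = np.array([0.8 * common + 0.2 * rng.normal(size=500) for _ in range(7)]
                       + [rng.normal(size=500) for _ in range(3)])

    R = np.corrcoef(members)                        # member-by-member error correlation
    eigvals = np.linalg.eigvalsh(R)[::-1]           # spectrum, largest first
    explained = np.cumsum(eigvals) / eigvals.sum()  # variance explained by leading modes
    n_effective = int(np.searchsorted(explained, 0.95) + 1)
    print(f"~{n_effective} effective members explain 95% of the ensemble variance")
    ```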

  19. Pauci ex tanto numero: reducing redundancy in multi-model ensembles

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-02-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models will produce a biased ensemble; it is therefore essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.

  20. AN INDIVIDUAL-BASED MODEL OF COTTUS POPULATION DYNAMICS

    EPA Science Inventory

    We explored population dynamics of a southern Appalachian population of Cottus bairdi using a spatially-explicit, individual-based model. The model follows daily growth, mortality, and spawning of individuals as a function of flow and temperature. We modeled movement of juveniles...

  1. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use Albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  2. Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.

    PubMed

    Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf

    2013-05-28

    Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations with an explicit solvent of the protein-ligand complex to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different both in absolute and relative terms, showing that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol(-1) after 2 ps simulations. Minimisations show even slower convergence and there are strong indications that the energies obtained from minimised structures are different from those obtained by MD.
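
    For context, the single-trajectory estimate discussed here has the generic form below (written schematically, with the entropy term often omitted; this is the textbook decomposition, not the exact protocol of the study):

    ```latex
    \Delta G_{\mathrm{bind}} \;\approx\;
    \big\langle\, G_{\mathrm{complex}} - G_{\mathrm{protein}} - G_{\mathrm{ligand}} \,\big\rangle_{\mathrm{MD}},
    \qquad
    G \;=\; E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} \;\left(-\,T S_{\mathrm{MM}}\right),
    ```

    which makes explicit why swapping the solvent model between the sampling step (explicit-solvent MD) and the scoring step (implicit-solvent G_GB and G_SA) is, strictly speaking, inconsistent without reweighting.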

  3. Case Study: The Transfer of Tacit Knowledge from Community College Full-Time to Adjunct Faculty

    ERIC Educational Resources Information Center

    Guzzo, Linda R.

    2013-01-01

    Knowledge is a valuable resource that fosters innovation and growth in organizations. There are two forms of knowledge: explicit knowledge or documented information and tacit knowledge or undocumented information which resides in individuals' minds. There is heightened interest in knowledge management and specifically the transfer of tacit…

  4. Career Education Programming in Three Diverse High Schools: A Critical Psychology--Case Study Research Approach

    ERIC Educational Resources Information Center

    Ali, Saba Rasheed; Yang, Ling-Yan; Button, Christopher J.; McCoy, Thomasin T. H.

    2012-01-01

    From a critical psychology perspective, Prilleltensky and Nelson advocate for research that has explicit focus on social change and can allow for full participation and empowerment of those under study. The current article describes the collaborative development, implementation, and evaluation of a career education program within three ethnically…

  5. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  6. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  7. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  8. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  9. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  10. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
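
    The note's own method rests on polynomial root finding; the sketch below illustrates only the event-driven idea for a single exponential current, using the closed-form membrane potential and a bracketing root finder instead (parameters are arbitrary, and it assumes SciPy is available):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Leaky integrate-and-fire neuron with one exponential synaptic current:
    #   tau_m dV/dt = -(V - E_L) + I0 * exp(-t / tau_s)
    tau_m, tau_s, E_L, V_th = 20.0, 5.0, -70.0, -54.0      # ms, mV (illustrative values)

    def V(t, V0, I0):
        """Closed-form membrane potential between spikes (valid for tau_s != tau_m)."""
        A = I0 * tau_s / (tau_s - tau_m)
        return E_L + A * np.exp(-t / tau_s) + (V0 - E_L - A) * np.exp(-t / tau_m)

    def next_spike_time(V0, I0, t_max=200.0, dt_scan=0.1):
        """Event-driven step: time of the next threshold crossing, or None if there is none."""
        t = 0.0
        while t < t_max:
            if V(t, V0, I0) < V_th <= V(t + dt_scan, V0, I0):      # upward crossing bracketed
                return brentq(lambda s: V(s, V0, I0) - V_th, t, t + dt_scan)
            t += dt_scan
        return None

    print(next_spike_time(V0=-70.0, I0=120.0))   # roughly 4.7 ms for these numbers
    ```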

  11. Low-Storage, Explicit Runge-Kutta Schemes for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Chistopher A.; Carpenter, Mark H.; Lewis, R. Michael

    1999-01-01

    The derivation of low-storage explicit Runge-Kutta (ERK) schemes has been performed in the context of integrating the compressible Navier-Stokes equations via direct numerical simulation. Optimization of ERK methods is done across a broad range of properties, such as stability and accuracy efficiency, linear and nonlinear stability, error control reliability, step change stability, and dissipation/dispersion accuracy, subject to varying degrees of memory economization. Following van der Houwen and Wray, 16 ERK pairs are presented using from two to five registers of memory per equation, per grid point and having accuracies from third- to fifth-order. Methods have been assessed using the differential equation testing code DETEST, and with the 1D wave equation. Two of the methods have been applied to the DNS of a compressible jet as well as methane-air and hydrogen-air flames. Derived 3(2) and 4(3) pairs are competitive with existing full-storage methods. Although a substantial efficiency penalty accompanies use of two- and three-register, fifth-order methods, the best contemporary full-storage methods can be nearly matched while still saving two to three registers of memory.
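
    The 3(2) and 4(3) pairs derived in the report are not reproduced here; as a minimal illustration of the 2N-storage idea itself (one solution register plus one accumulator register), the sketch below implements a classical third-order low-storage scheme with the coefficients usually attributed to Williamson, applied to a made-up test problem:

    ```python
    import numpy as np

    # 2N-storage RK3: only the solution `u` and the accumulator `du` are kept per grid point.
    A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
    B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
    C = [0.0, 1.0 / 3.0, 3.0 / 4.0]              # stage time fractions

    def lsrk3_step(f, u, t, dt):
        du = np.zeros_like(u)
        for a, b, c in zip(A, B, C):
            du = a * du + dt * f(t + c * dt, u)  # overwrite the single accumulator register
            u = u + b * du
        return u

    # Example: u' = -u, exact solution exp(-t); step size chosen arbitrarily.
    u, t, dt = np.array([1.0]), 0.0, 0.1
    for _ in range(10):
        u = lsrk3_step(lambda s, y: -y, u, t, dt)
        t += dt
    print(u, np.exp(-1.0))                       # agreement to about 1e-5
    ```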

  12. A mechanistic soil biogeochemistry model with explicit representation of microbial and macrofaunal activities and nutrient cycles

    NASA Astrophysics Data System (ADS)

    Fatichi, Simone; Manzoni, Stefano; Or, Dani; Paschalis, Athanasios

    2016-04-01

    The potential of a given ecosystem to store and release carbon is inherently linked to soil biogeochemical processes. These processes are deeply connected to the water, energy, and vegetation dynamics above and belowground. Recently, it has been advocated that a mechanistic representation of soil biogeochemistry requires: (i) partitioning of soil organic carbon (SOC) pools according to their functional role; (ii) an explicit representation of microbial dynamics; (iii) coupling of carbon and nutrient cycles. While some of these components have been introduced in specialized models, they have rarely been implemented in terrestrial biosphere models and tested in real cases. In this study, we combine a new soil biogeochemistry model with an existing model of land-surface hydrology and vegetation dynamics (T&C). Specifically, the soil biogeochemistry component explicitly separates different litter pools and distinguishes SOC in particulate, dissolved and mineral-associated fractions. Extracellular enzymes and microbial pools are explicitly represented, differentiating the functional roles of bacteria, saprotrophic and mycorrhizal fungi. Microbial activity depends on temperature, soil moisture and litter or SOC stoichiometry. The activity of macrofauna is also modeled. Nutrient dynamics include the cycles of nitrogen, phosphorus and potassium. The model accounts for feedbacks between nutrient limitations and plant growth as well as for plant stoichiometric flexibility. In turn, litter input is a function of the simulated vegetation dynamics. Root exudation and export to mycorrhizae are computed based on a nutrient uptake cost function. The combined model is tested to reproduce respiration dynamics and the nitrogen cycle at a few sites where data were available to test the plausibility of results across a range of different metrics. For instance, in a Swiss grassland ecosystem, fine root, bacterial, fungal and macrofaunal respiration account for 40%, 23%, 33% and 4% of total belowground respiration, respectively. Root exudation and carbon export to mycorrhizae represent about 7% of plant Net Primary Production. The model allows exploring the temporal dynamics of respiration fluxes from the different ecosystem components and designing virtual experiments on the controls exerted by environmental variables and/or soil microbes and mycorrhizal associations on soil carbon storage, plant growth, and nutrient leaching.

  13. Empirical evaluation of spatial and non-spatial European-scale multimedia fate models: results and implications for chemical risk assessment.

    PubMed

    Armitage, James M; Cousins, Ian T; Hauck, Mara; Harbers, Jasper V; Huijbregts, Mark A J

    2007-06-01

    Multimedia environmental fate models are commonly-applied tools for assessing the fate and distribution of contaminants in the environment. Owing to the large number of chemicals in use and the paucity of monitoring data, such models are often adopted as part of decision-support systems for chemical risk assessment. The purpose of this study was to evaluate the performance of three multimedia environmental fate models (spatially- and non-spatially-explicit) at a European scale. The assessment was conducted for four polycyclic aromatic hydrocarbons (PAHs) and hexachlorobenzene (HCB) and compared predicted and median observed concentrations using monitoring data collected for air, water, sediments and soils. Model performance in the air compartment was reasonable for all models included in the evaluation exercise as predicted concentrations were typically within a factor of 3 of the median observed concentrations. Furthermore, there was good correspondence between predictions and observations in regions that had elevated median observed concentrations for both spatially-explicit models. On the other hand, all three models consistently underestimated median observed concentrations in sediment and soil by 1-3 orders of magnitude. Although regions with elevated median observed concentrations in these environmental media were broadly identified by the spatially-explicit models, the magnitude of the discrepancy between predicted and median observed concentrations is of concern in the context of chemical risk assessment. These results were discussed in terms of factors influencing model performance such as the steady-state assumption, inaccuracies in emission estimates and the representativeness of monitoring data.

  14. Derivation of an Explicit Form of the Percolation-Based Effective-Medium Approximation for Thermal Conductivity of Partially Saturated Soils

    NASA Astrophysics Data System (ADS)

    Sadeghi, Morteza; Ghanbarian, Behzad; Horton, Robert

    2018-02-01

    Thermal conductivity is an essential component in multiphysics models and coupled simulation of heat transfer, fluid flow, and solute transport in porous media. In the literature, various empirical, semiempirical, and physical models were developed for thermal conductivity and its estimation in partially saturated soils. Recently, Ghanbarian and Daigle (GD) proposed a theoretical model, using the percolation-based effective-medium approximation, whose parameters are physically meaningful. The original GD model implicitly formulates thermal conductivity λ as a function of volumetric water content θ. For the sake of computational efficiency in numerical calculations, in this study, we derive an explicit λ(θ) form of the GD model. We also demonstrate that some well-known empirical models, e.g., Chung-Horton, widely applied in the HYDRUS model, as well as mixing models are special cases of the GD model under specific circumstances. Comparison with experiments indicates that the GD model can accurately estimate soil thermal conductivity.
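
    The explicit λ(θ) expression of the GD model is derived in the paper itself and is not repeated here; for comparison, the Chung-Horton empirical form mentioned above (the one used in HYDRUS) is simply a three-parameter function of water content, sketched below with placeholder coefficients rather than fitted values:

    ```python
    def thermal_conductivity_chung_horton(theta, b1, b2, b3):
        """Chung-Horton soil thermal conductivity: lambda(theta) = b1 + b2*theta + b3*sqrt(theta)."""
        return b1 + b2 * theta + b3 * theta ** 0.5

    # Placeholder coefficients (W m^-1 K^-1), not values from the paper:
    lam = thermal_conductivity_chung_horton(theta=0.25, b1=0.2, b2=1.6, b3=0.9)
    ```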

  15. Configuration of the thermal landscape determines thermoregulatory performance of ectotherms

    PubMed Central

    Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.

    2016-01-01

    Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639

  16. Modeling wildlife populations with HexSim

    EPA Science Inventory

    HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications including population viability analysis for on...

  17. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  18. Generalized Born Models of Macromolecular Solvation Effects

    NASA Astrophysics Data System (ADS)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
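
    As a concrete reminder of the pair-wise analytical form referred to here, a Still-type generalized Born polarization energy can be coded in a few lines; the sketch below works in reduced units (Coulomb constant omitted) and simply takes the effective Born radii as inputs, whereas in practice they are computed from the molecular geometry:

    ```python
    import numpy as np

    def gb_polarization_energy(q, pos, R_eff, eps_in=1.0, eps_solvent=78.5):
        """Still-type generalized Born energy (charges in e, consistent length units):

            dG_pol = -0.5 * (1/eps_in - 1/eps_solvent) * sum_{i,j} q_i q_j / f_GB(r_ij)
            f_GB   = sqrt(r_ij^2 + R_i R_j * exp(-r_ij^2 / (4 R_i R_j)))

        The double sum includes i == j, which reproduces the Born self-energies."""
        pref = -0.5 * (1.0 / eps_in - 1.0 / eps_solvent)
        energy = 0.0
        for i in range(len(q)):
            for j in range(len(q)):
                r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                f_gb = np.sqrt(r2 + R_eff[i] * R_eff[j] * np.exp(-r2 / (4.0 * R_eff[i] * R_eff[j])))
                energy += pref * q[i] * q[j] / f_gb
        return energy

    # Two opposite unit charges three length units apart, hypothetical Born radii:
    print(gb_polarization_energy(q=[1.0, -1.0],
                                 pos=np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]]),
                                 R_eff=[1.5, 1.5]))
    ```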

  19. The origin of consistent protein structure refinement from structural averaging.

    PubMed

    Park, Hahnbeom; DiMaio, Frank; Baker, David

    2015-06-02

    Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A solution to the surface intersection problem. [Boolean functions in geometric modeling

    NASA Technical Reports Server (NTRS)

    Timer, H. G.

    1977-01-01

    An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.

  1. Prediction of High-Lift Flows using Turbulent Closure Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Ying, Susan X.; Bertelrud, Arild

    1997-01-01

    The flow over two different multi-element airfoil configurations is computed using linear eddy viscosity turbulence models and a nonlinear explicit algebraic stress model. A subset of recently-measured transition locations using hot film on a McDonnell Douglas configuration is presented, and the effect of transition location on the computed solutions is explored. Deficiencies in wake profile computations are found to be attributable in large part to poor boundary layer prediction on the generating element, and not necessarily inadequate turbulence modeling in the wake. Using measured transition locations for the main element improves the prediction of its boundary layer thickness, skin friction, and wake profile shape. However, using measured transition locations on the slat still yields poor slat wake predictions. The computation of the slat flow field represents a key roadblock to successful predictions of multi-element flows. In general, the nonlinear explicit algebraic stress turbulence model gives very similar results to the linear eddy viscosity models.

  2. Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz

    2017-12-01

    The paper presents a neutronic analysis of the battery-type 20 MWth high-temperature gas cooled reactor. The developed reactor model is based on publicly available data for an 'early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas cooled reactor with a graphite reflector. Two alternative core designs were investigated. The first has a central reflector and 30×4 prismatic fuel blocks and the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics computer code, with ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single-assembly models and full-core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. An acceptable agreement between calculations and the reference design was obtained. All calculations were performed for the fresh core state.

  3. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code. Adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.

  4. First Gridded Spatial Field Reconstructions of Snow from Tree Rings

    NASA Astrophysics Data System (ADS)

    Coulthard, B. L.; Anchukaitis, K. J.; Pederson, G. T.; Alder, J. R.; Hostetler, S. W.; Gray, S. T.

    2017-12-01

    Western North America's mountain snowpacks provide critical water resources for human populations and ecosystems. Warmer temperatures and changing precipitation patterns will increasingly alter the quantity, extent, and persistence of snow in coming decades. A comprehensive understanding of the causes and range of long-term variability in this system is required for forecasting future anomalies, but snowpack observations are limited and sparse. While individual tree ring-based annual snowpack reconstructions have been developed for specific regions and mountain ranges, we present here the first collection of spatially-explicit gridded field reconstructions of seasonal snowpack within the American Rocky Mountains. Capitalizing on a new western North American snow-sensitive network of over 700 tree-ring chronologies, as well as recent advances in PRISM-based snow modeling, our gridded reconstructions offer a full space-time characterization of snow and associated water resource fluctuations over several centuries. The quality of reconstructions is evaluated against existing observations, proxy-records, and an independently-developed first-order monthly snow model.

  5. Retracted: An impulsive predator-prey model with disease in the prey for integrated pest management

    NASA Astrophysics Data System (ADS)

    Shi, Ruiqing

    2017-06-01

    This article has been withdrawn at the request of the author(s) and/or editor. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at http://www.elsevier.com/locate/withdrawalpolicy. The article is not original and for the most part already appeared in Applied Mathematical Modelling (volume 33, pages 2248-2256). One of the conditions of submission of a paper for publication is that authors declare explicitly that their work is original and has not appeared in a publication elsewhere. Re-use of any data should be appropriately cited. As such this article represents an abuse of the scientific publishing system. The scientific community takes a very strong view on this matter and apologies are offered to readers of the journal that this was not detected during the submission process. DOI of original article: http://dx.doi.org/10.1016/j.apm.2008.06.001

  6. Scalable File Systems for High Performance Computing Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.

  7. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  8. Benefits of explicit urban parameterization in regional climate modeling to study climate and city interactions

    NASA Astrophysics Data System (ADS)

    Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.

    2018-06-01

    Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment for which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first performed; it showed an overestimation of the incoming solar radiation but satisfactory results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the boundaries of the urban areas. Finally, the comparison to temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.

  9. Chained Aggregation and Control System Design:; A Geometric Approach.

    DTIC Science & Technology

    1982-10-01

    Furthermore, it explicitly identifies a reduced order model used to meet the design goals. This results in an interactive design procedure which allows...same framework. This leads directly to dynamic compensator design. The results are applied to decentralized control problems, non-interactive ...goals.

  10. A Bidirectional Subsurface Remote Sensing Reflectance Model Explicitly Accounting for Particle Backscattering Shapes

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; Zhang, Xiaodong; Xiong, Yuanheng; Gray, Deric

    2017-11-01

    The subsurface remote sensing reflectance (rrs, sr⁻¹), particularly its bidirectional reflectance distribution function (BRDF), depends fundamentally on the angular shape of the volume scattering functions (VSFs, m⁻¹ sr⁻¹). Recent technological advancement has greatly expanded the collection, and the knowledge of natural variability, of the VSFs of oceanic particles. This allows us to test Zaneveld's theoretical rrs model that explicitly accounts for particle VSF shapes. We parameterized the rrs model based on HydroLight simulations using 114 VSFs measured in three coastal waters around the United States and in oceanic waters of the North Atlantic Ocean. With the absorption coefficient (a), backscattering coefficient (bb), and VSF shape as inputs, the parameterized model is able to predict rrs with a root mean square relative error of ~4% for solar zenith angles from 0 to 75°, viewing zenith angles from 0 to 60°, and viewing azimuth angles from 0 to 180°. A test with field data indicates that the performance of our model, when using only a and bb as inputs and selecting the VSF shape using bb, is comparable to or slightly better than the currently used models by Morel et al. and Lee et al. Explicitly expressing VSF shapes in rrs modeling has great potential to further constrain the uncertainty in ocean color studies as our knowledge of the VSFs of natural particles continues to improve. Our study represents a first effort in this direction.

  11. Implicit and explicit host effects on excitons in pentacene derivatives.

    PubMed

    Charlton, R J; Fogarty, R M; Bogatko, S; Zuehlsdorff, T J; Hine, N D M; Heeney, M; Horsfield, A P; Haynes, P D

    2018-03-14

    An ab initio study of the effects of implicit and explicit hosts on the excited state properties of pentacene and its nitrogen-based derivatives has been performed using ground state density functional theory (DFT), time-dependent DFT, and ΔSCF. We observe a significant solvatochromic redshift in the excitation energy of the lowest singlet state (S1) of pentacene from inclusion in a p-terphenyl host compared to vacuum; for an explicit host consisting of six nearest neighbour p-terphenyls, we obtain a redshift of 65 meV while a conductor-like polarisable continuum model (CPCM) yields a 78 meV redshift. Comparison is made between the excitonic properties of pentacene and four of its nitrogen-based analogs, 1,8-, 2,9-, 5,12-, and 6,13-diazapentacene with the latter found to be the most distinct due to local distortions in the ground state electronic structure. We observe that a CPCM is insufficient to fully understand the impact of the host due to the presence of a mild charge-transfer (CT) coupling between the chromophore and neighbouring p-terphenyls, a phenomenon which can only be captured using an explicit model. The strength of this CT interaction increases as the nitrogens are brought closer to the central acene ring of pentacene.

  12. Implicit and explicit host effects on excitons in pentacene derivatives

    NASA Astrophysics Data System (ADS)

    Charlton, R. J.; Fogarty, R. M.; Bogatko, S.; Zuehlsdorff, T. J.; Hine, N. D. M.; Heeney, M.; Horsfield, A. P.; Haynes, P. D.

    2018-03-01

    An ab initio study of the effects of implicit and explicit hosts on the excited state properties of pentacene and its nitrogen-based derivatives has been performed using ground state density functional theory (DFT), time-dependent DFT, and ΔSCF. We observe a significant solvatochromic redshift in the excitation energy of the lowest singlet state (S1) of pentacene from inclusion in a p-terphenyl host compared to vacuum; for an explicit host consisting of six nearest neighbour p-terphenyls, we obtain a redshift of 65 meV while a conductor-like polarisable continuum model (CPCM) yields a 78 meV redshift. Comparison is made between the excitonic properties of pentacene and four of its nitrogen-based analogs, 1,8-, 2,9-, 5,12-, and 6,13-diazapentacene with the latter found to be the most distinct due to local distortions in the ground state electronic structure. We observe that a CPCM is insufficient to fully understand the impact of the host due to the presence of a mild charge-transfer (CT) coupling between the chromophore and neighbouring p-terphenyls, a phenomenon which can only be captured using an explicit model. The strength of this CT interaction increases as the nitrogens are brought closer to the central acene ring of pentacene.

  13. Seek and you shall remember: Scene semantics interact with visual search to build better memories

    PubMed Central

    Draschkow, Dejan; Wolfe, Jeremy M.; Võ, Melissa L.-H.

    2014-01-01

    Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385

  14. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferential directions, as in Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.
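
    For reference, the sketch below computes the conventional, conservative Courant bound for fully explicit FDTD on a nonuniform tensor-product grid; the relaxed limit derived in the paper is not reproduced here, and the function and grid values are illustrative assumptions.

        import numpy as np

        def courant_dt(dx, dy, dz, c=299792458.0):
            """Classical conservative FDTD time-step bound from the smallest spacings."""
            inv2 = 1.0 / np.min(dx)**2 + 1.0 / np.min(dy)**2 + 1.0 / np.min(dz)**2
            return 1.0 / (c * np.sqrt(inv2))

        # Example: a grid refined locally from 1 mm down to 10 um along x.
        dx = np.geomspace(1e-3, 1e-5, 50)
        dy = np.full(40, 1e-3)
        dz = np.full(40, 1e-3)
        print(courant_dt(dx, dy, dz))   # the 10 um cells dictate the global explicit step

    This is exactly the multiscale penalty that local implicitization in the refined directions is meant to remove.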

  15. Constrained Unfolding of a Helical Peptide: Implicit versus Explicit Solvents.

    PubMed

    Bureau, Hailey R; Merz, Dale R; Hershkovits, Eli; Quirk, Stephen; Hernandez, Rigoberto

    2015-01-01

    Steered Molecular Dynamics (SMD) has been seen to provide the potential of mean force (PMF) along a peptide unfolding pathway effectively but at significant computational cost, particularly in all-atom solvents. Adaptive steered molecular dynamics (ASMD) has been seen to provide a significant computational advantage by limiting the spread of the trajectories in a staged approach. The contraction of the trajectories at the end of each stage can be performed by taking a structure whose nonequilibrium work is closest to the Jarzynski average (in naive ASMD) or by relaxing the trajectories under a no-work condition (in full-relaxation ASMD--namely, FR-ASMD). Both approaches have been used to determine the energetics and hydrogen-bonding structure along the pathway for unfolding of a benchmark peptide initially constrained as an α-helix in a water environment. The energetics are quite different to those in vacuum, but are found to be similar between implicit and explicit solvents. Surprisingly, the hydrogen-bonding pathways are also similar in the implicit and explicit solvents despite the fact that the solvent contact plays an important role in opening the helix.
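
    As a schematic of the trajectory-contraction step in naive ASMD described above (the work values below are synthetic and the variable names are our own, not the authors' implementation):

        import numpy as np

        # Schematic "naive ASMD" contraction: estimate the stage free-energy change
        # from the Jarzynski equality, then keep the trajectory whose nonequilibrium
        # work is closest to that average.
        kB_T = 0.5961                              # kcal/mol near 300 K
        rng = np.random.default_rng(0)
        works = rng.normal(10.0, 2.0, size=100)    # per-trajectory work for one stage

        # exp(-dF/kT) = < exp(-W/kT) >, evaluated in a numerically stable way
        w = -works / kB_T
        dF = -kB_T * (np.log(np.mean(np.exp(w - w.max()))) + w.max())

        keep = np.argmin(np.abs(works - dF))       # trajectory cloned into the next stage
        print(dF, keep)

    In FR-ASMD the contraction is instead performed by relaxing the trajectories under a no-work condition, as stated above.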

  16. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
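
    For context, the Cummins time-domain impulse-response formulation mentioned above is conventionally written, for a single degree of freedom and in our own notation, as

        (m + A_\infty)\,\ddot{x}(t) + \int_0^{t} K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau + C\,x(t) = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t),

    where m is the body mass, A_\infty the infinite-frequency added mass, K(t) the radiation impulse-response kernel, C the hydrostatic restoring coefficient, and F_exc and F_PTO the wave-excitation and power-take-off forces; WEC-Sim solves the 6-degree-of-freedom analogue of this equation.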

  17. High-Resolution Modeling to Assess Tropical Cyclone Activity in Future Climate Regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lackmann, Gary

    2013-06-10

    Applied research is proposed with the following objectives: (i) to determine the most likely level of tropical cyclone intensity and frequency in future climate regimes, (ii) to provide a quantitative measure of uncertainty in these predictions, and (iii) to improve understanding of the linkage between tropical cyclones and the planetary-scale circulation. Current mesoscale weather forecasting models, such as the Weather Research and Forecasting (WRF) model, are capable of simulating the full intensity of tropical cyclones (TC) with realistic structures. However, in order to accurately represent both the primary and secondary circulations in these systems, model simulations must be configured with sufficient resolution to explicitly represent convection (omitting the convective parameterization scheme). Most previous numerical studies of TC activity at seasonal and longer time scales have not utilized such explicit convection (EC) model runs. Here, we propose to employ the moving nest capability of WRF to optimally represent TC activity on a seasonal scale using a downscaling approach. The statistical results of a suite of these high-resolution TC simulations will yield a realistic representation of TC intensity on a seasonal basis, while at the same time allowing analysis of the feedback that TCs exert on the larger-scale climate system. Experiments will be driven with analyzed lateral boundary conditions for several recent Atlantic seasons, spanning a range of activity levels and TC track patterns. Results of the ensemble of WRF simulations will then be compared to analyzed TC data in order to determine the extent to which this modeling setup can reproduce recent levels of TC activity. Next, the boundary conditions (sea-surface temperature, tropopause height, and thermal/moisture profiles) from the recent seasons will be altered in a manner consistent with various future GCM/RCM scenarios, but that preserves the large-scale shear and incipient disturbance activity. This will allow (i) a direct comparison of future TC activity that could be expected for an active or inactive season in an altered climate regime, and (ii) a measure of the level of uncertainty and variability in TC activity resulting from different carbon emission scenarios.

  18. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.
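
    In its simplest Gaussian form, the phylogenetic mixed model referred to above is the familiar quantitative-genetic mixed ("animal") model, written here in our own notation as

        \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e},
        \qquad \mathbf{a} \sim N(\mathbf{0},\, \sigma_{a}^{2}\mathbf{A}),
        \qquad \mathbf{e} \sim N(\mathbf{0},\, \sigma_{e}^{2}\mathbf{I}),

    where A is the additive relationship matrix in quantitative genetics and is replaced by a phylogenetic (or taxonomic) correlation matrix in the comparative setting; the non-Gaussian and multinomial extensions discussed in the paper embed this linear predictor in a generalized linear mixed model.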

  19. New explicit equations for the accurate calculation of the growth and evaporation of hydrometeors by the diffusion of water vapor

    NASA Technical Reports Server (NTRS)

    Srivastava, R. C.; Coen, J. L.

    1992-01-01

    The traditional explicit growth equation has been widely used to calculate the growth and evaporation of hydrometeors by the diffusion of water vapor. This paper reexamines the assumptions underlying the traditional equation and shows that large errors (10-30 percent in some cases) result if it is used carelessly. More accurate explicit equations are derived by approximating the saturation vapor-density difference as a quadratic rather than a linear function of the temperature difference between the particle and ambient air. These new equations, which reduce the error to less than a few percent, merit inclusion in a broad range of atmospheric models.
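
    Schematically, the refinement described above replaces the linear Taylor expansion of the saturation vapor density about the ambient temperature with a quadratic one (our notation; the full working equations are given in the paper):

        \rho_{s}(T_r) \approx \rho_{s}(T_a) + \left.\frac{d\rho_{s}}{dT}\right|_{T_a}(T_r - T_a)
        \quad\longrightarrow\quad
        \rho_{s}(T_r) \approx \rho_{s}(T_a) + \left.\frac{d\rho_{s}}{dT}\right|_{T_a}(T_r - T_a)
        + \frac{1}{2}\left.\frac{d^{2}\rho_{s}}{dT^{2}}\right|_{T_a}(T_r - T_a)^{2},

    where T_a is the ambient temperature and T_r the particle temperature; retaining the quadratic term is what reduces the error of the traditional explicit equation from tens of percent to a few percent.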

  20. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model (SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  1. An efficient hydro-mechanical model for coupled multi-porosity and discrete fracture porous media

    NASA Astrophysics Data System (ADS)

    Yan, Xia; Huang, Zhaoqin; Yao, Jun; Li, Yang; Fan, Dongyan; Zhang, Kai

    2018-02-01

    In this paper, a numerical model is developed for coupled analysis of deforming fractured porous media with multiscale fractures. In this model, the macro-fractures are modeled explicitly by the embedded discrete fracture model, and the supporting effects of fluid and fillings in these fractures are represented explicitly in the geomechanics model. On the other hand, matrix and micro-fractures are modeled by a multi-porosity model, which aims to accurately describe the transient matrix-fracture fluid exchange process. A stabilized extended finite element method scheme is developed based on the polynomial pressure projection technique to address the displacement oscillation along macro-fracture boundaries. After that, the mixed space discretization and modified fixed stress sequential implicit methods based on non-matching grids are applied to solve the coupling model. Finally, we demonstrate the accuracy and application of the proposed method to capture the coupled hydro-mechanical impacts of multiscale fractures on fractured porous media.

  2. Modeling the Spatial Dynamics of Regional Land Use: The CLUE-S Model

    NASA Astrophysics Data System (ADS)

    Verburg, Peter H.; Soepboer, Welmoed; Veldkamp, A.; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S. A.

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

  3. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362

  4. Modeling the spatial dynamics of regional land use: the CLUE-S model.

    PubMed

    Verburg, Peter H; Soepboer, Welmoed; Veldkamp, A; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S A

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

  5. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
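
    A minimal sketch of single-coordinate Langevin propagation in this spirit is given below, assuming one relaxation time scale, an overdamped (non-inertial) limit, and a harmonic diabatic free-energy surface; the surface-hopping transitions between donor and acceptor states that the actual method includes are omitted, and all parameters are illustrative.

        import numpy as np

        # Overdamped Langevin dynamics of one collective solvent coordinate z on a
        # harmonic donor diabat V(z) = lam * z**2 (reduced units).  The full method
        # couples this coordinate to surface-hopping transitions, not shown here.
        kT = 1.0            # thermal energy
        gamma = 5.0         # friction; sets the single solvent relaxation time
        lam = 2.0           # solvent reorganization energy
        dt = 1e-3
        nsteps = 50_000

        rng = np.random.default_rng(1)
        z = 0.0
        traj = np.empty(nsteps)
        for i in range(nsteps):                       # Euler-Maruyama integration
            force = -2.0 * lam * z                    # -dV/dz
            z += force * dt / gamma + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
            traj[i] = z

        # Sanity check: sampled variance should approach the equilibrium value kT/(2*lam).
        print(traj.var(), kT / (2.0 * lam))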

  6. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the most simple cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is however usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability to observe flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.
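
    For orientation, in the simplest weak-noise setting, dX = b(X) dt + sqrt(epsilon) dW, the Freidlin-Wentzell action whose minimizers are the instantons takes the familiar form (the barotropic beta-plane model treated above is a field-theoretic generalization of this structure):

        I_T[x] = \frac{1}{2}\int_{0}^{T} \left\| \dot{x}(t) - b\bigl(x(t)\bigr) \right\|^{2}\,\mathrm{d}t,
        \qquad
        P \asymp \exp\!\left(-\,I_T[x^{*}]/\varepsilon\right),

    where x* is the action-minimizing (instanton) path compatible with the prescribed end states, so that the rare-event probability is controlled by the optimal action at leading order in epsilon.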

  7. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.
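
    Stated as a formula, the size-intensivity requirement being tested is that, for n identical non-interacting copies of a molecule M, the energy to remove one electron is independent of n:

        \mathrm{IP}(n\mathrm{M}) = E\bigl[(n\mathrm{M})^{+}\bigr] - E\bigl[n\mathrm{M}\bigr] \overset{!}{=} \mathrm{IP}(\mathrm{M}) \quad \text{for all } n,

    whereas the calculations above show that local and semi-local functionals yield an ionization potential that decreases with n, a signature of delocalization error that optimal tuning of a long-range corrected hybrid removes.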

  8. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  9. On Spatially Explicit Models of Epidemic and Endemic Cholera: The Haiti and Lake Kivu Case Studies.

    NASA Astrophysics Data System (ADS)

    Rinaldo, A.; Bertuzzo, E.; Mari, L.; Finger, F.; Casagrandi, R.; Gatto, M.; Rodriguez-Iturbe, I.

    2014-12-01

    The first part of the Lecture deals with the predictive ability of mechanistic models for the Haitian cholera epidemic. Predictive models of epidemic cholera need to resolve, at suitable aggregation levels, spatial data pertaining to local communities, epidemiological records, hydrologic drivers, waterways, patterns of human mobility and proxies of exposure rates. A formal model comparison framework provides a quantitative assessment of the explanatory and predictive abilities of various model settings with different spatial aggregation levels. Intensive computations and objective model comparisons show that parsimonious spatially explicit models accounting for spatial connections have greater explanatory power than spatially disconnected ones for short- to intermediate calibration windows. In general, spatially connected models show better predictive ability than disconnected ones. We discuss the limits and validity of the various approaches and the pathway towards the development of case-specific predictive tools in the context of emergency management. The second part deals with approaches suitable to describe patterns of endemic cholera. Cholera outbreaks have been reported in the Democratic Republic of the Congo since the 1970s. Here we employ a spatially explicit, inhomogeneous Markov chain model to describe cholera incidence in eight health zones on the shore of Lake Kivu. Remotely sensed datasets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers in addition to baseline seasonality. The effect of human mobility is also modelled mechanistically. We test several models on a multi-year dataset of reported cholera cases. Fourteen models, accounting for different environmental drivers, are selected in calibration. Among these, the one accounting for seasonality, El Niño Southern Oscillation, precipitation and human mobility outperforms the others in cross-validation.

  10. Modeling Wood Encroachment in Abandoned Grasslands in the Eifel National Park – Model Description and Testing

    PubMed Central

    Hudjetz, Silvana; Lennartz, Gottfried; Krämer, Klara; Roß-Nickoll, Martina; Gergs, André; Preuss, Thomas G.

    2014-01-01

    The degradation of natural and semi-natural landscapes has become a matter of global concern. In Germany, semi-natural grasslands belong to the most species-rich habitat types but have suffered heavily from changes in land use. After abandonment, the course of succession at a specific site is often difficult to predict because many processes interact. In order to support decision making when managing semi-natural grasslands in the Eifel National Park, we built the WoodS-Model (Woodland Succession Model). A multimodeling approach was used to integrate vegetation dynamics in both the herbaceous and shrub/tree layer. The cover of grasses and herbs was simulated in a compartment model, whereas bushes and trees were modelled in an individual-based manner. Both models worked and interacted in a spatially explicit, raster-based landscape. We present here the model description, parameterization and testing. We show highly detailed projections of the succession of a semi-natural grassland including the influence of initial vegetation composition, neighborhood interactions and ungulate browsing. We carefully weighted the single processes against each other and their relevance for landscape development under different scenarios, while explicitly considering specific site conditions. Model evaluation revealed that the model is able to emulate successional patterns as observed in the field as well as plausible results for different population densities of red deer. Important neighborhood interactions such as seed dispersal, the protection of seedlings from browsing ungulates by thorny bushes, and the inhibition of wood encroachment by the herbaceous layer, have been successfully reproduced. Therefore, not only a detailed model but also detailed initialization turned out to be important for spatially explicit projections of a given site. The advantage of the WoodS-Model is that it integrates these many mutually interacting processes of succession. PMID:25494057

  11. Explicit Pharmacokinetic Modeling: Tools for Documentation, Verification, and Portability

    EPA Science Inventory

    Quantitative estimates of tissue dosimetry of environmental chemicals due to multiple exposure pathways require the use of complex mathematical models, such as physiologically-based pharmacokinetic (PBPK) models. The process of translating the abstract mathematics of a PBPK mode...

  12. Probing phenylalanine/adenine pi-stacking interactions in protein complexes with explicitly correlated and CCSD(T) computations.

    PubMed

    Copeland, Kari L; Anderson, Julie A; Farley, Adam R; Cox, James R; Tschumper, Gregory S

    2008-11-13

    To examine the effects of pi-stacking interactions between aromatic amino acid side chains and adenine bearing ligands in crystalline protein structures, 26 toluene/(N9-methyl)adenine model configurations have been constructed from protein/ligand crystal structures. Full geometry optimizations with the MP2 method cause the 26 crystal structures to collapse to six unique structures. The complete basis set (CBS) limit of the CCSD(T) interaction energies has been determined for all 32 structures by combining explicitly correlated MP2-R12 computations with a correction for higher-order correlation effects from CCSD(T) calculations. The CCSD(T) CBS limit interaction energies of the 26 crystal structures range from -3.19 to -6.77 kcal mol⁻¹ and average -5.01 kcal mol⁻¹. The CCSD(T) CBS limit interaction energies of the optimized complexes increase by roughly 1.5 kcal mol⁻¹ on average to -6.54 kcal mol⁻¹ (ranging from -5.93 to -7.05 kcal mol⁻¹). Corrections for higher-order correlation effects are extremely important for both sets of structures and are responsible for the modest increase in the interaction energy after optimization. The MP2 method overbinds the crystal structures by 2.31 kcal mol⁻¹ on average compared to 4.50 kcal mol⁻¹ for the optimized structures.
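
    Schematically, the composite estimate described above follows the usual additive construction (our notation; the basis-set labels are generic placeholders):

        E_{\mathrm{int}}^{\mathrm{CCSD(T)/CBS}} \approx E_{\mathrm{int}}^{\mathrm{MP2\text{-}R12}}
        + \left( E_{\mathrm{int}}^{\mathrm{CCSD(T)/small}} - E_{\mathrm{int}}^{\mathrm{MP2/small}} \right),

    i.e., the explicitly correlated MP2-R12 calculation supplies the basis-set limit and the bracketed difference, evaluated in a smaller basis, supplies the higher-order correlation correction whose importance the abstract emphasizes.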

  13. Valence and arousal-based affective evaluations of foods.

    PubMed

    Woodward, Halley E; Treat, Teresa A; Cameron, C Daryl; Yegorova, Vitaliya

    2017-01-01

    We investigated the nutrient-specific and individual-specific validity of dual-process models of valenced and arousal-based affective evaluations of foods across the disordered eating spectrum. 283 undergraduate women provided implicit and explicit valence and arousal-based evaluations of 120 food photos with known nutritional information on structurally similar indirect and direct affect misattribution procedures (AMP; Payne et al., 2005, 2008), and completed questionnaires assessing body mass index (BMI), hunger, restriction, and binge eating. Nomothetically, added fat and added sugar enhance evaluations of foods. Idiographically, hunger and binge eating enhance activation, whereas BMI and restriction enhance pleasantness. Added fat is salient for women who are heavier, hungrier, or who restrict; added sugar is influential for less hungry women. Restriction relates only to valence, whereas binge eating relates only to arousal. Findings are similar across implicit and explicit affective evaluations, albeit stronger for explicit, providing modest support for dual-process models of affective evaluation of foods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Scalar products of Bethe vectors in models with gl(2|1) symmetry 1. Super-analog of Reshetikhin formula

    NASA Astrophysics Data System (ADS)

    Hutsalyuk, A.; Liashyk, A.; Pakuliak, S. Z.; Ragoucy, E.; Slavnov, N. A.

    2016-11-01

    We study the scalar products of Bethe vectors in integrable models solvable by the nested algebraic Bethe ansatz and possessing gl(2|1) symmetry. Using explicit formulas of the monodromy matrix entries’ multiple actions onto Bethe vectors we obtain a representation for the scalar product in the most general case. This explicit representation appears to be a sum over partitions of the Bethe parameters. It can be used for the analysis of scalar products involving on-shell Bethe vectors. As a by-product, we obtain a determinant representation for the scalar products of generic Bethe vectors in integrable models with gl(1|1) symmetry. Dedicated to the memory of Petr Petrovich Kulish.

  15. Estimating and interpreting migration of Amazonian forests using spatially implicit and semi-explicit neutral models.

    PubMed

    Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans

    2017-06-01

    With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data and discussing the results and interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots or from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, estimation was shown to be the additive effect of migration from adjacent plots and the metacommunity. It was accurate only when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two sources, however, proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Application of migration estimates to simulate field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration, and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, we have to admit that migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. The parameter m of neutral models then appears more as an emerging property revealed by neutral theory rather than an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.

  16. Polymer brushes in explicit poor solvents studied using a new variant of the bond fluctuation model

    NASA Astrophysics Data System (ADS)

    Jentzsch, Christoph; Sommer, Jens-Uwe

    2014-09-01

    Using a variant of the Bond Fluctuation Model which improves its parallel efficiency, in particular when running on graphics cards, we perform large-scale simulations of polymer brushes in explicit poor solvent. Grafting density, solvent quality, and chain length are varied. Different morphological structures, in particular octopus micelles, are observed for low grafting densities. We reconsider the theoretical model for octopus micelles proposed by Williams using scaling arguments with the relevant scaling variable being σ/σc, and with the characteristic grafting density given by σc ~ N^(-4/3). We find that octopus micelles only grow laterally, but not in height, and we propose an extension of the model by assuming a cylindrical shape instead of a spherical geometry for the micelle core. We show that the scaling variable σ/σc can be applied to master plots for the averaged height of the brush, the size of the micelles, and the number of chains per micelle. The exponents in the corresponding power law relations for the grafting density and chain length are in agreement with the model for flat cylindrical micelles. We also investigate the surface roughness and find that polymer brushes in explicit poor solvent at grafting densities higher than the stretching transition are flat, and surface rippling can only be observed close to the stretching transition.
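
    Written out, the scaling collapse used above reads

        \sigma_{c} \sim N^{-4/3}, \qquad \frac{\sigma}{\sigma_{c}} \sim \sigma N^{4/3},

    so that data for different chain lengths N fall onto master curves for the average brush height, the micelle size, and the number of chains per micelle when plotted against the reduced grafting density σ N^{4/3}.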

  17. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  18. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  19. CVD-MPFA full pressure support, coupled unstructured discrete fracture-matrix Darcy-flux approximations

    NASA Astrophysics Data System (ADS)

    Ahmed, Raheel; Edwards, Michael G.; Lamine, Sadok; Huisman, Bastiaan A. H.; Pal, Mayur

    2017-11-01

    Two novel control-volume methods are presented for flow in fractured media, and involve coupling the control-volume distributed multi-point flux approximation (CVD-MPFA) constructed with full pressure support (FPS), to two types of discrete fracture-matrix approximation for simulation on unstructured grids; (i) involving hybrid grids and (ii) a lower dimensional fracture model. Flow is governed by Darcy's law together with mass conservation both in the matrix and the fractures, where large discontinuities in permeability tensors can occur. Finite-volume FPS schemes are more robust than the earlier CVD-MPFA triangular pressure support (TPS) schemes for problems involving highly anisotropic homogeneous and heterogeneous full-tensor permeability fields. We use a cell-centred hybrid-grid method, where fractures are modelled by lower-dimensional interfaces between matrix cells in the physical mesh but expanded to equi-dimensional cells in the computational domain. We present a simple procedure to form a consistent hybrid-grid locally for a dual-cell. We also propose a novel hybrid-grid for intersecting fractures, for the FPS method, which reduces the condition number of the global linear system and leads to larger time steps for tracer transport. The transport equation for tracer flow is coupled with the pressure equation and provides flow parameter assessment of the fracture models. Transport results obtained via TPS and FPS hybrid-grid formulations are compared with the corresponding results of fine-scale explicit equi-dimensional formulations. The results show that the hybrid-grid FPS method applies to general full-tensor fields and provides improved robust approximations compared to the hybrid-grid TPS method for fractured domains, for both weakly anisotropic permeability fields and very strong anisotropic full-tensor permeability fields where the TPS scheme exhibits spurious oscillations. The hybrid-grid FPS formulation is extended to compressible flow and the results demonstrate the method is also robust for transient flow. Furthermore, we present FPS coupled with a lower-dimensional fracture model, where fractures are strictly lower-dimensional in the physical mesh as well as in the computational domain. We present a comparison of the hybrid-grid FPS method and the lower-dimensional fracture model for several cases of isotropic and anisotropic fractured media which illustrate the benefits of the respective methods.

  20. The importance of time cost in pricing outpatient care.

    PubMed

    Heshmat, S

    1988-01-01

    The purpose of this article is to discuss the components of the full price charged to patients using outpatient care. The full price of a visit to a physician is equal to the out-of-pocket payment (money price) plus time costs. In particular, the article discusses the concept of time price (the marginal value of time for a patient), and presents a specific example to illustrate the concept of time price elasticity. The concepts and information presented in this article can help marketing managers in setting pricing strategies that explicitly consider time price.
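
    As an illustration of these concepts (the numbers below are invented for exposition and are not taken from the article), the full price and the time-price elasticity can be written as

        P_{\mathrm{full}} = p_{m} + w\,t,
        \qquad
        \eta_{t} = \frac{\%\,\Delta Q}{\%\,\Delta (w\,t)},

    where p_m is the money price, w the patient's marginal value of time, and t the time spent obtaining care; for example, if a 10 percent increase in the time price w t is associated with a 2 percent fall in visits Q, the time-price elasticity is -0.2.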
