Sample records for model explicitly includes

  1. Explicit continuous charge-based compact model for long channel heavily doped surrounding-gate MOSFETs incorporating interface traps and quantum effects

    NASA Astrophysics Data System (ADS)

    Hamzah, Afiq; Hamid, Fatimah A.; Ismail, Razali

    2016-12-01

An explicit solution for long-channel surrounding-gate (SRG) MOSFETs is presented from intrinsic to heavily doped body, including the effects of interface traps and fixed oxide charges. The solution is based on the core SRG MOSFET model of the Unified Charge Control Model (UCCM) for heavily doped conditions. The UCCM model of highly doped SRG MOSFETs is derived to obtain the exact equivalent expression as in the undoped case. Taking advantage of the undoped explicit charge-based expression, the asymptotic limits below and above threshold have been redefined to include the effect of trap states in heavily doped cases. After solving the asymptotic limits, an explicit mobile charge expression is obtained which includes the trap state effects. The explicit mobile charge model shows very good agreement with numerical simulation over practical terminal voltages, doping concentrations, geometry effects, and trap state effects due to fixed oxide charges and interface traps. The drain current is then obtained using the Pao-Sah dual integral, expressed as a function of the inversion charge densities at the source/drain ends. The drain current agrees well with the implicit solution and numerical simulation in all regions of operation without employing any empirical parameters. A comparison with previous explicit models at a doping concentration of 1×10^19 cm^-3 verifies the competency of the proposed model, which offers advantages in simplicity and accuracy at high doping concentration.
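The drain-current structure that the Pao-Sah dual integral reduces to in charge-based compact models can be sketched in a few lines. This is a minimal sketch of the widely used two-term undoped-body form (drift plus diffusion contributions from the source/drain inversion charges); the record's doped, trap-aware model modifies the charge expressions and adds correction terms, and all default parameter values here are illustrative assumptions.

```python
import math

# Hedged sketch: two-term charge-based drain current common to undoped
# surrounding-gate compact models (drift + diffusion terms only). The
# paper's doped, trap-aware model alters the charge expressions and adds
# corrections. All default parameter values are illustrative assumptions.

def drain_current(q_s, q_d, mu=0.04, radius=10e-9, length=1e-6,
                  c_ox=1e-2, v_t=0.0259):
    """I_DS from inversion charge densities (C/m^2) at the source/drain ends."""
    drift = (q_s**2 - q_d**2) / (2.0 * c_ox)  # above-threshold (drift) term
    diffusion = 2.0 * v_t * (q_s - q_d)       # subthreshold (diffusion) term
    return mu * (2.0 * math.pi * radius / length) * (drift + diffusion)
```

In this form, saturation corresponds to the drain-end charge q_d collapsing toward zero, leaving the current controlled by the source-end charge alone.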

  2. DoD Product Line Practice Workshop Report

    DTIC Science & Technology

    1998-05-01

    capability. The essential enterprise management practices include ensuring sound business goals providing an appropriate funding model performing...business. This way requires vision and explicit support at the organizational level. There must be an explicit funding model to support the development...the same group seems to work best in smaller organizations. A funding model for core asset development also needs to be developed because the core

  3. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG(obs) within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  4. Explicit Instruction Elements in Core Reading Programs

    ERIC Educational Resources Information Center

    Child, Angela R.

    2012-01-01

    Classroom teachers are provided instructional recommendations for teaching reading from their adopted core reading programs (CRPs). Explicit instruction elements or what is also called instructional moves, including direct explanation, modeling, guided practice, independent practice, discussion, feedback, and monitoring, were examined within CRP…

  5. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle. PMID:26236174

  6. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    PubMed

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
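The bookkeeping behind the implicit/explicit thermodynamic cycle in this record can be sketched in a few lines. The helper name and the leg values below are illustrative assumptions, not numbers from the paper.

```python
# Hedged sketch of the free energy bookkeeping in the implicit/explicit
# thermodynamic cycle described above. The helper name and the leg values
# are illustrative assumptions, not numbers from the paper (kcal/mol).

def cycle_delta_g(dg_decouple_a, dg_implicit_ab, dg_recouple_b):
    """A(explicit) -> A(implicit) -> B(implicit) -> B(explicit)."""
    return dg_decouple_a + dg_implicit_ab + dg_recouple_b

# Switch off the explicit solvent around basin A, cross the conformational
# barrier cheaply in implicit solvent, then switch the explicit solvent
# back on around basin B.
dg_ab_explicit = cycle_delta_g(1.4, 2.3, -0.9)
```

The point of the cycle is that only the two localized decoupling legs require explicit-solvent simulation; the slow barrier crossing happens entirely on the cheap implicit-solvent leg.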

  7. Are adverse effects incorporated in economic models? An initial review of current practice.

    PubMed

    Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N

    2009-12-01

    To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (< 4%) used the results of a meta-analysis from the systematic review of clinical effectiveness and none used only data from the review without further manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. 
Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total, 30% of the models that included adverse effects used withdrawals related to drug toxicity and therefore might be interpreted as using withdrawals to capture adverse effects, but this was explicitly stated in only three reports. Of the 37 models that did not include adverse effects, 18 provided justification for this omission, most commonly lack of data; 19 appeared to make no explicit consideration of adverse effects in the model. There is an implicit assumption within modelling guidance that adverse effects are very important but there is a lack of clarity regarding how they should be dealt with and considered in modelling. In many cases a lack of clear reporting in the HTAs made it extremely difficult to ascertain what had actually been carried out in consideration of adverse effects. The main recommendation is for much clearer and explicit reporting of adverse effects, or their exclusion, in decision models and for explicit recognition in future guidelines that 'all relevant outcomes' should include some consideration of adverse events.

  8. Including resonances in the multiperipheral model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, S.S.; Snider, D.R.; Thomas, G.H.

    1973-10-01

A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and still is simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one-channel, two-channel, three-channel and N-channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively.

  9. Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; Ehle, Curt

    2003-01-01

An explicit material model that uses predicted pressure in the pores of a carbon-cloth phenolic (CCP) composite has been developed. This model is intended to be used within a finite-element model to predict phenomena specific to CCP components of solid-fuel-rocket nozzles subjected to high operating temperatures and to mechanical stresses that can be great enough to cause structural failures. Phenomena that can be predicted with the help of this model include failures of specimens in restrained-thermal-growth (RTG) tests, pocketing erosion, and ply lifting.

  10. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  11. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
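The stiffness trade-off described in this record can be illustrated with a 1-D conduction toy problem (a sketch only; this is not the GEARIB algorithm or the Shuttle wing models). When the nondimensional step r = αΔt/Δx² exceeds 1/2, forward (explicit) Euler amplifies high-frequency modes while backward (implicit) Euler remains bounded:

```python
import numpy as np

# Hedged illustration of the explicit-vs-implicit trade-off discussed above
# on a 1-D conduction toy problem. With r = alpha*dt/dx^2 > 1/2, forward
# Euler is unstable while backward Euler stays bounded.

n = 49                                   # interior nodes, Dirichlet 0 ends
A = (np.diag(-2.0 * np.ones(n)) +        # discrete Laplacian stencil
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))
u0 = np.zeros(n)
u0[n // 2] = 1.0                         # localized temperature spike
r = 2.0                                  # deliberately exceeds the 0.5 limit

u_exp = u0.copy()
u_imp = u0.copy()
M = np.eye(n) - r * A                    # backward-Euler system matrix
for _ in range(30):
    u_exp = u_exp + r * (A @ u_exp)      # forward Euler: amplifies high modes
    u_imp = np.linalg.solve(M, u_imp)    # backward Euler: unconditionally stable
```

After 30 steps the explicit solution has blown up by many orders of magnitude, while the implicit solution simply diffuses; this is the behavior that makes implicit methods preferable for stiff insulated-structure problems, at the cost of a linear solve per step.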

  12. Late positive potential to explicit sexual images associated with the number of sexual intercourse partners

    PubMed Central

    Steele, Vaughn R.; Staley, Cameron; Sabatinelli, Dean

    2015-01-01

Risky sexual behaviors typically occur when a person is sexually motivated by potent sexual reward cues. Yet, individual differences in sensitivity to sexual cues have not been examined with respect to sexual risk behaviors. A greater responsiveness to sexual cues might provide greater motivation for a person to act sexually; a lower responsiveness to sexual cues might lead a person to seek more intense, novel, possibly risky, sexual acts. In this study, event-related potentials were recorded in 64 men and women while they viewed a series of emotional photographs, including explicit sexual images. The motivational salience of the sexual cues was varied by including more and less explicit sexual images. Indeed, the more explicit sexual stimuli resulted in enhanced late positive potentials (LPP) relative to the less explicit sexual images. Participants with fewer sexual intercourse partners in the last year had reduced LPP amplitude to the less explicit sexual images than to the more explicit sexual images, whereas participants with more partners responded similarly to the more and less explicit sexual images. This pattern of results is consistent with a greater responsivity model. Those who engage in more sexual behaviors consistent with risk are also more responsive to less explicit sexual cues. PMID:24526189

  13. Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation

    NASA Astrophysics Data System (ADS)

    Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.

    2014-12-01

Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and to examine how these chemical processes shape the composition and properties of the gaseous and condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, an explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.

  14. Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2006-09-01

    tanks is presented. The semi-discrete combined solid and fluid equations of motions are integrated using a time- accurate parallel explicit solver...Incompressible fluid flow in a moving/deforming container including accurate modeling of the free-surface, turbulence, and viscous effects ...paper, a single computational code which uses a time- accurate explicit solution procedure is used to solve both the solid and fluid equations of

  15. Explicit filtering in large eddy simulation using a discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Brazell, Matthew J.

The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides a high order of accuracy in complex geometries and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of the DG framework, in which the solution is approximated using a polynomial basis whose higher modes correspond to higher-order polynomials. By removing high-order modes, the filtered solution contains low-frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove energy accumulated in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant coefficient Smagorinsky model is over-dissipative; this is generally not desirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected. The dynamic Heinz model, based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
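The modal cutoff idea in this abstract can be sketched directly, assuming a Legendre modal basis on the reference element; the helper name is illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Hedged sketch of the modal cutoff filter described above: within one DG
# element the solution is a Legendre modal expansion, and explicit filtering
# zeroes the highest modes. The helper name is illustrative.

def filter_modes(coeffs, n_keep):
    """Sharp modal cutoff: keep modes 0..n_keep-1, zero the rest."""
    out = np.asarray(coeffs, dtype=float).copy()
    out[n_keep:] = 0.0
    return out

# A degree-5 modal solution on the reference element [-1, 1]
coeffs = np.array([1.0, 0.8, 0.4, 0.2, 0.1, 0.05])
filtered = filter_modes(coeffs, 4)                    # drop the two highest modes
u_filt = leg.legval(np.linspace(-1, 1, 5), filtered)  # filtered field values
```

A smooth exponential damping of the high modes is a common alternative to the sharp cutoff and fits the same structure (scale the high coefficients instead of zeroing them).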

  16. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217

  17. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kBT/ec (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  18. Explicit and Implicit Stigma of Mental Illness as Predictors of the Recovery Attitudes of Assertive Community Treatment Practitioners.

    PubMed

    Stull, Laura G; McConnell, Haley; McGrew, John; Salyers, Michelle P

    2017-01-01

    While explicit negative stereotypes of mental illness are well established as barriers to recovery, implicit attitudes also may negatively impact outcomes. The current study is unique in its focus on both explicit and implicit stigma as predictors of recovery attitudes of mental health practitioners. Assertive Community Treatment practitioners (n = 154) from 55 teams completed online measures of stigma, recovery attitudes, and an Implicit Association Test (IAT). Three of four explicit stigma variables (perceptions of blameworthiness, helplessness, and dangerousness) and all three implicit stigma variables were associated with lower recovery attitudes. In a multivariate, hierarchical model, however, implicit stigma did not explain additional variance in recovery attitudes. In the overall model, perceptions of dangerousness and implicitly associating mental illness with "bad" were significant individual predictors of lower recovery attitudes. The current study demonstrates a need for interventions to lower explicit stigma, particularly perceptions of dangerousness, to increase mental health providers' expectations for recovery. The extent to which implicit and explicit stigma differentially predict outcomes, including recovery attitudes, needs further research.

  19. Biomass and fire dynamics in a temperate forest-grassland mosaic: Integrating multi-species herbivory, climate, and fire with the FireBGCv2/GrazeBGC system

    Treesearch

    Robert A. Riggs; Robert E. Keane; Norm Cimon; Rachel Cook; Lisa Holsinger; John Cook; Timothy DelCurto; L.Scott Baggett; Donald Justice; David Powell; Martin Vavra; Bridgett Naylor

    2015-01-01

    Landscape fire succession models (LFSMs) predict spatially-explicit interactions between vegetation succession and disturbance, but these models have yet to fully integrate ungulate herbivory as a driver of their processes. We modified a complex LFSM, FireBGCv2, to include a multi-species herbivory module, GrazeBGC. The system is novel in that it explicitly...

  20. Locally adaptive, spatially explicit projection of US population for 2030 and 2050.

    PubMed

    McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L

    2015-02-03

Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of the projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood of future population change. Population projections of county-level numbers were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
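The allocation step implied by the weighted-surface approach can be sketched as a simple proportional rule. The function name, the proportional rule, and the numbers below are assumptions for illustration; the actual model is locally adaptive and geographically varying.

```python
import numpy as np

# Hedged sketch of the allocation step implied above: distribute a county's
# projected change across grid cells in proportion to a weight (suitability)
# surface. Names, the proportional rule, and the numbers are illustrative.

def allocate_county_growth(cell_pop, weights, county_projection):
    """Place the projected county-level change onto cells by normalized weight."""
    cell_pop = np.asarray(cell_pop, dtype=float)
    weights = np.asarray(weights, dtype=float)
    delta = county_projection - cell_pop.sum()   # growth (or decline) to place
    return cell_pop + delta * weights / weights.sum()

cells_now = [120.0, 480.0, 60.0]   # current cell populations
w = [0.1, 0.6, 0.3]                # e.g. land cover / slope / proximity score
cells_2030 = allocate_county_growth(cells_now, w, 760.0)
```

Normalizing the weights guarantees that the gridded total always matches the county benchmark, which is how the cell-level surface stays consistent with the Census-style county projections.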

  21. Evaluating spatially explicit burn probabilities for strategic fire management planning

    Treesearch

    C. Miller; M.-A. Parisien; A. A. Ager; M. A. Finney

    2008-01-01

    Spatially explicit information on the probability of burning is necessary for virtually all strategic fire and fuels management planning activities, including conducting wildland fire risk assessments, optimizing fuel treatments, and prevention planning. Predictive models providing a reliable estimate of the annual likelihood of fire at each point on the landscape have...

  22. On Spatially Explicit Models of Cholera Epidemics: Hydrologic controls, environmental drivers, human-mediated transmissions (Invited)

    NASA Astrophysics Data System (ADS)

    Rinaldo, A.; Bertuzzo, E.; Mari, L.; Righetto, L.; Gatto, M.; Casagrandi, R.; Rodriguez-Iturbe, I.

    2010-12-01

A recently proposed model for cholera epidemics is examined. The model accounts for local communities of susceptibles and infectives in a spatially explicit arrangement of nodes linked by networks having different topologies. The vehicle of infection (Vibrio cholerae) is transported through the network links, which are thought of as hydrological connections among susceptible communities. The mathematical tools used are borrowed from general schemes of reactive transport on river networks acting as the environmental matrix for the circulation and mixing of water-borne pathogens. The results of a large-scale application to the KwaZulu-Natal epidemics of 2001-2002 will be discussed. Useful theoretical results derived in the spatially explicit context will also be reviewed (e.g., the exact derivation of the speed of propagation for traveling fronts of epidemics on regular lattices endowed with uniform population density). Network effects will be discussed. The analysis of the limit case of uniformly distributed population density proves instrumental in establishing the overall conditions for the relevance of spatially explicit models. To that extent, it is shown that the ratio between spreading and disease outbreak timescales proves to be the crucial parameter. The relevance of our results lies in the major differences potentially arising between the predictions of spatially explicit models and traditional compartmental models of the SIR-like type. Our results suggest that in many cases of real-life epidemiological interest, timescales of disease dynamics may trigger outbreaks that significantly depart from the predictions of compartmental models. Finally, a view of further developments includes: hydrologically improved aquatic reservoir models for pathogens; human mobility patterns affecting disease propagation; and double-peak emergence and seasonality in the spatially explicit epidemic context.
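A minimal sketch of this class of spatially explicit SIRB models, with pathogen advected along a three-node hydrological chain, might look as follows. The parameter values, the transport matrix, and the saturating dose-response form are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# Hedged sketch of a spatially explicit SIRB cholera model on a three-node
# hydrological chain (node 0 -> 1 -> 2). Parameters are illustrative.
beta, gamma, mu_b, shed, K = 1.0, 0.2, 0.3, 1.0, 1.0
T = np.array([[-0.5, 0.0, 0.0],    # pathogen advection along the network:
              [ 0.5,-0.5, 0.0],    # each column moves mass downstream
              [ 0.0, 0.5, 0.0]])   # (columns sum to zero, conserving it)
S = np.array([100.0, 100.0, 100.0])      # susceptibles per community
I = np.zeros(3); R = np.zeros(3)
B = np.array([5.0, 0.0, 0.0])            # pathogen seeded upstream
dt = 0.01
for _ in range(2000):                    # forward Euler over t = 0..20
    force = beta * B / (K + B)           # saturating dose-response
    dS, dI, dR = -force*S, force*S - gamma*I, gamma*I
    dB = T @ B - mu_b*B + shed*I         # transport, decay, shedding
    S += dt*dS; I += dt*dI; R += dt*dR; B += dt*dB
```

Seeding pathogen only at the upstream node still infects the downstream communities through the hydrological links; whether such a network description departs from a lumped SIR model then hinges, as the abstract notes, on the ratio of spreading to outbreak timescales.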

  3. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
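The core numerical issue above — first-order, explicit, fixed-step integration trading accuracy for speed — can be shown on a single linear reservoir dS/dt = P - k*S, for which the exact solution is known. Parameter values are illustrative only:

```python
# First-order explicit (forward Euler) fixed-step integration of a linear
# reservoir dS/dt = P - k*S, compared against the exact solution.
import math

def euler(P=2.0, k=0.8, S0=0.0, t_end=2.0, dt=1.0):
    S, n = S0, round(t_end / dt)
    for _ in range(n):
        S += dt * (P - k * S)      # first-order explicit update
    return S

def exact(P=2.0, k=0.8, S0=0.0, t=2.0):
    return P / k + (S0 - P / k) * math.exp(-k * t)

coarse, fine, truth = euler(dt=1.0), euler(dt=0.01), exact()
```

The coarse fixed step is cheap but carries a large truncation error; it is errors of this kind that, propagated through the likelihood, distort the posterior surfaces explored by MCMC.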

  4. The Be-WetSpa-Pest modeling approach to simulate human and environmental exposure from pesticide application

    NASA Astrophysics Data System (ADS)

    Binder, Claudia; Garcia-Santos, Glenda; Andreoli, Romano; Diaz, Jaime; Feola, Giuseppe; Wittensoeldner, Moritz; Yang, Jing

    2016-04-01

This study presents an integrative and spatially explicit modeling approach for analyzing human and environmental exposure from pesticide application of smallholders in the potato producing Andean region in Colombia. The modeling approach fulfills the following criteria: (i) it includes environmental and human compartments; (ii) it contains a behavioral decision-making model for estimating the effect of policies on pesticide flows to humans and the environment; (iii) it is spatially explicit; and (iv) it is modular and easily expandable to include additional modules, crops or technologies. The model was calibrated and validated for the Vereda La Hoya and was used to explore the effect of different policy measures in the region. The model has moderate data requirements and can be adapted relatively easily to other regions in developing countries with similar conditions.

  5. Group-based differences in anti-aging bias among medical students.

    PubMed

    Ruiz, Jorge G; Andrade, Allen D; Anam, Ramanakumar; Taldone, Sabrina; Karanam, Chandana; Hogue, Christie; Mintzer, Michael J

    2015-01-01

Medical students (MS) may develop ageist attitudes early in their training that may predict their future avoidance of caring for the elderly. This study sought to determine MS' patterns of explicit and implicit anti-aging bias, their intent to practice with older people, and, using the quad model, the role of gender, race, and motivation-based differences. One hundred and three MS completed an online survey that included explicit and implicit measures. Explicit measures revealed a moderately positive perception of older people. Female medical students and those high in internal motivation showed lower anti-aging bias, and both were more likely to intend to practice with older people. Although the implicit measure revealed more negativity toward the elderly than the explicit measures, there were no group differences. However, using the quad model the authors identified gender, race, and motivation-based differences in controlled and automatic processes involved in anti-aging bias.

  6. Theories, models and frameworks used in capacity building interventions relevant to public health: a systematic review.

    PubMed

    Bergeron, Kim; Abdi, Samiya; DeCorby, Kara; Mensah, Gloria; Rempel, Benjamin; Manson, Heather

    2017-11-28

There is limited research on capacity building interventions that include theoretical foundations. The purpose of this systematic review is to identify underlying theories, models and frameworks used to support capacity building interventions relevant to public health practice. The aim is to inform and improve capacity building practices and services offered by public health organizations. Four search strategies were used: 1) electronic database searching; 2) reference lists of included papers; 3) key informant consultation; and 4) grey literature searching. Inclusion and exclusion criteria are outlined, with included papers focusing on capacity building, learning plans, or professional development plans in combination with tools, resources, processes, procedures, steps, models, frameworks or guidelines; described in a public health or healthcare setting, or in non-government, government, or community organizations as they relate to healthcare; and explicitly or implicitly mentioning a theory, model and/or framework that grounds the type of capacity building approach developed. Quality assessments were performed on all included articles. Data analysis included a process for synthesizing, analyzing and presenting descriptive summaries, and categorizing theoretical foundations according to which theory, model and/or framework was used and whether it was implied or explicitly identified. Nineteen articles were included in this review. A total of 28 theories, models and frameworks were identified. Of this number, two theories (Diffusion of Innovations and Transformational Learning), two models (Ecological and Interactive Systems Framework for Dissemination and Implementation) and one framework (Bloom's Taxonomy of Learning) were identified as the most frequently cited. This review identifies specific theories, models and frameworks to support capacity building interventions relevant to public health organizations. It provides public health practitioners with a menu of potentially usable theories, models and frameworks to support capacity building efforts. The findings also support the need for the use of theories, models or frameworks to be intentional, explicitly identified and referenced, and for it to be clearly outlined how they were applied to the capacity building intervention.

  7. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involve time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.

  8. The importance of explicitly mapping instructional analogies in science education

    NASA Astrophysics Data System (ADS)

    Asay, Loretta Johnson

    Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.

  9. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow gravity waves to be treated explicitly. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  10. Modeling wildlife populations with HexSim

    EPA Science Inventory

    HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications including population viability analysis for on...

  11. A framework for developing objective and measurable recovery criteria for threatened and endangered species.

    PubMed

    Himes Boor, Gina K

    2014-02-01

For species listed under the U.S. Endangered Species Act (ESA), the U.S. Fish and Wildlife Service and National Marine Fisheries Service are tasked with writing recovery plans that include "objective, measurable criteria" that define when a species is no longer at risk of extinction, but neither the act itself nor agency guidelines provide an explicit definition of objective, measurable criteria. Past reviews of recovery plans, including one published in 2012, show that many criteria lack quantitative metrics with clear biological rationale and are not meeting the measurable and objective mandate. I reviewed how objective, measurable criteria have been defined implicitly and explicitly in peer-reviewed literature, the ESA, other U.S. statutes, and legal decisions. Based on a synthesis of these sources, I propose the following 6 standards be used as minimum requirements for objective, measurable criteria: contain a quantitative threshold with calculable units, stipulate a timeframe over which they must be met, explicitly define the spatial extent or population to which they apply, specify a sampling procedure that includes sample size, specify a statistical significance level, and include justification by providing scientific evidence that the criteria define a species whose extinction risk has been reduced to the desired level. To meet these 6 standards, I suggest that recovery plans be explicitly guided by and organized around a population viability modeling framework even if data or agency resources are too limited to complete a viability model. When data and resources are available, recovery criteria can be developed from the population viability model results, but when data and resources are insufficient for model implementation, extinction risk thresholds can be used as criteria. A recovery-planning approach centered on viability modeling will also yield appropriately focused data-acquisition and monitoring plans and will facilitate a seamless transition from recovery planning to delisting. © 2013 Society for Conservation Biology.

  12. A Multi-Year Program Developing an Explicit Reflective Pedagogy for Teaching Pre-Service Teachers the Nature of Science by Ostention

    ERIC Educational Resources Information Center

    Smith, Mike U.; Scharmann, Lawrence

    2008-01-01

    This investigation delineates a multi-year action research agenda designed to develop an instructional model for teaching the nature of science (NOS) to preservice science teachers. Our past research strongly supports the use of explicit reflective instructional methods, which includes Thomas Kuhn's notion of learning by ostention and treating…

  13. Initialization and assimilation of cloud and rainwater in a regional model

    NASA Technical Reports Server (NTRS)

    Raymond, William H.; Olson, William S.

    1990-01-01

The initialization and assimilation of cloud and rainwater quantities in a mesoscale regional model was examined. Forecasts of explicit cloud and rainwater are made using conservation equations. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. These physical processes, some of which are parameterized, represent source and sink terms in the conservation equations. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed.

  14. Alcohol-Approach Inclinations and Drinking Identity as Predictors of Behavioral Economic Demand for Alcohol

    PubMed Central

    Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.

    2016-01-01

    Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
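The demand indices in the abstract are latent factors summarizing observed quantities from a purchase task. A minimal sketch of the kind of observed indices such a task yields (the price list and reported drink counts below are hypothetical, not study data):

```python
# Observed demand indices from a hypothetical purchase task
# (price per drink -> number of drinks the respondent says they would buy).

def demand_indices(prices, consumption):
    """Return (intensity, Omax, breakpoint) from purchase-task responses."""
    intensity = consumption[0]                    # drinks at zero/lowest price
    expenditures = [p * q for p, q in zip(prices, consumption)]
    omax = max(expenditures)                      # peak expenditure
    breakpoint = None                             # first price suppressing use
    for p, q in zip(prices, consumption):
        if q == 0:
            breakpoint = p
            break
    return intensity, omax, breakpoint

prices = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
drinks = [10, 9, 8, 6, 3, 1, 0]
indices = demand_indices(prices, drinks)
```

In the study's terms, Amplitude loads on volume-type indices like intensity and Omax, while Persistence reflects price sensitivity, e.g. the breakpoint.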

  15. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail such as avoiding thin or short high-conducting elements can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
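The stiffness point above can be shown on the simplest possible stand-in for a fast-conducting element, dT/dt = -lam*T with a large rate constant: explicit (forward) Euler diverges once the step exceeds 2/lam, while implicit (backward) Euler stays stable at the same step. Values are illustrative, not from the paper's test problems:

```python
# Forward vs backward Euler on the stiff scalar problem dy/dt = -lam*y.

def explicit_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)      # forward difference: unstable if lam*dt > 2
    return y

def implicit_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)     # backward difference, solved exactly
    return y

lam, dt, steps = 100.0, 0.05, 50     # lam*dt = 5 > 2: explicit blows up
```

This is why the GEAR package's implicit methods win on stiff structural heat transfer problems, and why reducing stiffness (e.g. avoiding thin high-conducting elements) can restore the advantage of cheap explicit steps.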

  16. Testing the cognitive catalyst model of rumination with explicit and implicit cognitive content.

    PubMed

    Sova, Christopher C; Roberts, John E

    2018-06-01

    The cognitive catalyst model posits that rumination and negative cognitive content, such as negative schema, interact to predict depressive affect. Past research has found support for this model using explicit measures of negative cognitive content such as self-report measures of trait self-esteem and dysfunctional attitudes. The present study tested whether these findings would extend to implicit measures of negative cognitive content such as implicit self-esteem, and whether effects would depend on initial mood state and history of depression. Sixty-one undergraduate students selected on the basis of depression history (27 previously depressed; 34 never depressed) completed explicit and implicit measures of negative cognitive content prior to random assignment to a rumination induction followed by a distraction induction or vice versa. Dysphoric affect was measured both before and after these inductions. Analyses revealed that explicit measures, but not implicit measures, interacted with rumination to predict change in dysphoric affect, and these interactions were further moderated by baseline levels of dysphoria. Limitations include the small nonclinical sample and use of a self-report measure of depression history. These findings suggest that rumination amplifies the association between explicit negative cognitive content and depressive affect primarily among people who are already experiencing sad mood. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
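The partial pooling described above can be reduced to a textbook shrinkage estimator: each group mean is pulled toward the grand mean by a weight set by the between- and within-group variances. This is a minimal sketch with made-up data and known variance components, not the authors' full multilevel model:

```python
# Partial pooling: shrink each group mean toward the grand mean with weight
# w = tau2 / (tau2 + sigma2/n), where tau2 is the between-group variance and
# sigma2 the within-group variance (both assumed known here).

def partial_pool(groups, sigma2_within, tau2_between):
    grand = sum(sum(g) for g in groups) / sum(len(g) for g in groups)
    pooled = []
    for g in groups:
        n = len(g)
        ybar = sum(g) / n
        w = tau2_between / (tau2_between + sigma2_within / n)  # shrinkage weight
        pooled.append(w * ybar + (1 - w) * grand)
    return grand, pooled

groups = [[4.0, 5.0, 6.0], [9.0, 10.0, 11.0]]
grand, pooled = partial_pool(groups, sigma2_within=3.0, tau2_between=1.0)
```

Small or noisy groups (small n, large sigma2) are shrunk strongly toward the grand mean; large, well-measured groups keep estimates close to their own mean — the compromise between complete pooling and no pooling that the paper describes.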

  18. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model(SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  19. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  20. Federal Workforce Quality: Measurement and Improvement

    DTIC Science & Technology

    1992-08-01

…should set explicit standards of production and service quality. … OPM should institutionalize its data collection program of longitudinal research… include data about various aspects of the model… the immediate consumers of the products and services delivered, and still others in the larger society who have no…

  1. Quantum mechanical force field for hydrogen fluoride with explicit electronic polarization.

    PubMed

    Mazack, Michael J M; Gao, Jiali

    2014-05-28

    The explicit polarization (X-Pol) theory is a fragment-based quantum chemical method that explicitly models the internal electronic polarization and intermolecular interactions of a chemical system. X-Pol theory provides a framework to construct a quantum mechanical force field, which we have extended to liquid hydrogen fluoride (HF) in this work. The parameterization, called XPHF, is built upon the same formalism introduced for the XP3P model of liquid water, which is based on the polarized molecular orbital (PMO) semiempirical quantum chemistry method and the dipole-preserving polarization consistent point charge model. We introduce a fluorine parameter set for PMO, and find good agreement for various gas-phase results of small HF clusters compared to experiments and ab initio calculations at the M06-2X/MG3S level of theory. In addition, the XPHF model shows reasonable agreement with experiments for a variety of structural and thermodynamic properties in the liquid state, including radial distribution functions, interaction energies, diffusion coefficients, and densities at various state points.

  2. Integrating remote sensing and spatially explicit epidemiological modeling

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Knox, Allyn; Bertuzzo, Enrico; Mari, Lorenzo; Bompangue, Didier; Gatto, Marino; Rinaldo, Andrea

    2015-04-01

Spatially explicit epidemiological models are a crucial tool for the prediction of epidemiological patterns in time and space as well as for the allocation of health care resources. In addition they can provide valuable information about epidemiological processes and allow for the identification of environmental drivers of the disease spread. Most epidemiological models rely on environmental data as inputs. They can either be measured in the field by means of conventional instruments or using remote sensing techniques to measure suitable proxies of the variables of interest. The latter benefits from several advantages over conventional methods, including data availability, which can be an issue especially in developing countries, and the spatial as well as temporal resolution of the data, which is particularly crucial for spatially explicit models. Here we present the case study of a spatially explicit, semi-mechanistic model applied to recurring cholera outbreaks in the Lake Kivu area (Democratic Republic of the Congo). The model describes the cholera incidence in eight health zones on the shore of the lake. Remotely sensed datasets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers. Human mobility and its effect on the disease spread is also taken into account. Several model configurations are tested on a data set of reported cases. The best models, accounting for different environmental drivers and selected using the Akaike information criterion, are formally compared via cross validation. The best performing model accounts for seasonality, El Niño Southern Oscillation, precipitation and human mobility.
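The model-selection step mentioned above compares candidate driver sets by the Akaike information criterion, AIC = 2k - 2 ln L. A minimal sketch with made-up log-likelihoods (the candidate names and numbers are hypothetical, not the study's fits):

```python
# AIC comparison of hypothetical candidate models with different numbers of
# environmental drivers; smaller AIC is better.

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*lnL."""
    return 2 * n_params - 2 * log_likelihood

candidates = {
    "rainfall only": aic(-520.0, 3),
    "rainfall + chlorophyll": aic(-505.0, 4),
    "rainfall + chlorophyll + ENSO + mobility": aic(-498.0, 6),
}
best = min(candidates, key=candidates.get)
```

AIC penalizes each extra driver by 2 per parameter, so a richer model only wins if its fit improves enough; the short-listed models are then compared out-of-sample by cross validation, as in the abstract.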

  3. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed in the explicit software ANSYS/LS-DYNA. To improve the solving speed and efficiency, an explicit-explicit order solution method is put forward based on analysis of the features of implicit and explicit algorithms. The solution method is first applied to calculate the pre-loading of wheel/rail rolling contact with the explicit algorithm, and the results then become the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for the wheel/rail rolling contact model with large scale and high nonlinearity.
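The two-stage idea above — find the static pre-load state with a heavily damped explicit solve, then restart the explicit transient solve from it — can be sketched on a one-degree-of-freedom spring-mass stand-in for the FE model. The scheme below uses a semi-implicit (symplectic) Euler update as a simple explicit-style time marcher; all parameters are illustrative:

```python
# Explicit-explicit sequence on a 1-DOF system m*x'' + c*x' + k*x = force.

def explicit_stage(x, v, force, k=50.0, m=1.0, c=0.0, dt=0.005, steps=4000):
    """Explicit-style (semi-implicit Euler) time marching from state (x, v)."""
    for _ in range(steps):
        a = (force - k * x - c * v) / m
        v += dt * a
        x += dt * v
    return x, v

# stage 1: "pre-loading" found by heavily damped explicit marching
# (dynamic relaxation drives the system to static equilibrium x = force/k)
x0, v0 = explicit_stage(0.0, 0.0, force=10.0, c=20.0)

# stage 2: explicit transient solve initialized from the pre-loaded state
x1, v1 = explicit_stage(x0, v0, force=12.0, c=0.5, steps=2000)
```

The benefit mirrored here is that both stages use the same cheap explicit update, so no implicit equilibrium solve is needed before the transient run.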

  4. The mixed impact of medical school on medical students' implicit and explicit weight bias.

    PubMed

    Phelan, Sean M; Puhl, Rebecca M; Burke, Sara E; Hardeman, Rachel; Dovidio, John F; Nelson, David B; Przedworski, Julia; Burgess, Diana J; Perry, Sylvia; Yeazel, Mark W; van Ryn, Michelle

    2015-10-01

    Health care trainees demonstrate implicit (automatic, unconscious) and explicit (conscious) bias against people from stigmatised and marginalised social groups, which can negatively influence communication and decision making. Medical schools are well positioned to intervene and reduce bias in new physicians. This study was designed to assess medical school factors that influence change in implicit and explicit bias against individuals from one stigmatised group: people with obesity. This was a prospective cohort study of medical students enrolled at 49 US medical schools randomly selected from all US medical schools within the strata of public and private schools and region. Participants were 1795 medical students surveyed at the beginning of their first year and end of their fourth year. Web-based surveys included measures of weight bias, and medical school experiences and climate. Bias change was compared with changes in bias in the general public over the same period. Linear mixed models were used to assess the impact of curriculum, contact with people with obesity, and faculty role modelling on weight bias change. Increased implicit and explicit biases were associated with less positive contact with patients with obesity and more exposure to faculty role modelling of discriminatory behaviour or negative comments about patients with obesity. Increased implicit bias was associated with training in how to deal with difficult patients. On average, implicit weight bias decreased and explicit bias increased during medical school, over a period of time in which implicit weight bias in the general public increased and explicit bias remained stable. Medical schools may reduce students' weight biases by increasing positive contact between students and patients with obesity, eliminating unprofessional role modelling by faculty members and residents, and altering curricula focused on treating difficult patients. © 2015 John Wiley & Sons Ltd.

  5. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical information. One common approach is to include physical information variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data was acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle size on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and type of coffee. The explicit inclusion of the particle size into the calibration model is expected to improve the accuracy of coffee type determination. The results show that using the explicit method the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) = 0.99 and root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
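The "explicit" strategy above amounts to putting both responses in the Y block so one model predicts coffee type and particle size jointly. As a minimal sketch, plain multi-output least squares stands in for PLS2 here, and the two-band "spectra" are synthetic, not study data:

```python
# Multi-output least squares on a two-predictor X block (the "spectra") and a
# two-column Y block [class, particle size], illustrating the explicit method.

def fit_multioutput(X, Y):
    """Solve B = (X^T X)^{-1} X^T Y for a two-column X (2x2 inverse by hand)."""
    a = sum(x[0] * x[0] for x in X)
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X)
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]
    m = len(Y[0])
    xty = [[sum(X[i][j] * Y[i][k] for i in range(len(X))) for k in range(m)]
           for j in range(2)]
    return [[sum(inv[j][l] * xty[l][k] for l in range(2)) for k in range(m)]
            for j in range(2)]

def predict(B, x):
    return [x[0] * B[0][k] + x[1] * B[1][k] for k in range(len(B[0]))]

# synthetic "absorbance at two bands" -> [class (0/1), particle size (um)]
X = [[1.0, 0.2], [1.1, 0.25], [0.4, 0.9], [0.5, 1.0]]
Y = [[0, 212], [0, 212], [1, 500], [1, 500]]
B = fit_multioutput(X, Y)
```

A real PLS2 model would project X onto latent variables first, but the Y-block construction — class and particle size predicted together — is the same.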

  6. Urban watershed modeling in Seattle, Washington using VELMA – a spatially explicit ecohydrological watershed model

    EPA Science Inventory

    Urban watersheds are notoriously difficult to model due to their complex, small-scale combinations of landscape and land use characteristics including impervious surfaces that ultimately affect the hydrologic system. We utilized EPA’s Visualizing Ecosystem Land Management A...

  7. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    The theoretical design of bright bio-imaging molecules is a rapidly progressing field. However, because of the system sizes involved and the computational accuracy required, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD) coupled with time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  8. A neurocomputational theory of how explicit learning bootstraps early procedural learning.

    PubMed

    Paul, Erick J; Ashby, F Gregory

    2013-01-01

    It is widely accepted that human learning and memory are mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. To investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could arise from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly from the explicit system's control of motor responses through basal ganglia-mediated loops.

  9. Modelling explicit fracture of nuclear fuel pellets using peridynamics

    NASA Astrophysics Data System (ADS)

    Mella, R.; Wenman, M. R.

    2015-12-01

    Three-dimensional models of explicit cracking of nuclear fuel pellets under a variety of power ratings have been explored with peridynamics, a non-local, mesh-free fracture mechanics method. The models were implemented in the explicitly integrated molecular dynamics code LAMMPS, which was modified to include thermal strains in solid bodies. The models of fuel fracture during initial power transients are shown to correlate with the mean number of cracks observed on the inner and outer edges of the pellet, as measured by experimental post-irradiation examination of fuel, for power ratings of 10 and 15 W g-1 UO2. The pellet models predict expected features such as the mid-height pellet crack, the correct number of radial cracks, and the initiation and coalescence of radial cracks. This work presents a modelling alternative to the empirical fracture data found in many fuel performance codes and requires just one parameter, the fracture strain. Weibull distributions of crack numbers were fitted to both numerical and experimental data using maximum likelihood estimation so that a statistical comparison could be made. The findings show P-values of less than 0.5%, suggesting excellent agreement between the model and experimental distributions.
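
    The Weibull maximum likelihood fit mentioned above can be sketched as follows. The "crack count" data here are synthetic draws from a known Weibull distribution, not the paper's measurements, and the profile-equation solver is a generic textbook approach.

```python
import numpy as np

def weibull_mle(x, lo=0.05, hi=50.0):
    """Maximum likelihood fit of a two-parameter Weibull distribution.
    The shape k solves  sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0;
    this function of k is increasing, so bisection suffices. The scale
    then follows in closed form as lam = mean(x^k)^(1/k)."""
    lx = np.log(x)
    def g(k):
        xk = x ** k
        return (xk * lx).sum() / xk.sum() - 1.0 / k - lx.mean()
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    k = 0.5 * (lo + hi)
    return k, np.mean(x ** k) ** (1.0 / k)

# Synthetic sample (NOT the paper's data): draw from a known Weibull and
# check that the fit recovers the generating parameters.
rng = np.random.default_rng(42)
counts = 10.0 * rng.weibull(8.0, size=400)
k_hat, lam_hat = weibull_mle(counts)
```

    With both the model and experimental crack counts fitted this way, the two fitted distributions can then be compared statistically, as the abstract describes.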

  10. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
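
    A minimal sketch of the kind of informal, likelihood-free calibration the abstract describes, with a toy logistic "epidemic" standing in for the actual spatially explicit cholera model. All names, noise levels, and the normalized misfit below are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(30.0)
K = np.array([1000.0, 300.0, 50.0])      # region "sizes" (illustrative)

def model(r):
    """Toy stand-in for the spatially explicit model: logistic cumulative
    case curves in three regions sharing one growth-rate parameter r."""
    return K[:, None] / (1.0 + np.exp(-r * (t - 15.0)))

# Synthetic "observations": truth plus region-dependent reporting noise.
obs = model(0.45) + rng.normal(0.0, [[20.0], [6.0], [1.0]], (3, t.size))

def misfit(r):
    """Informal goodness of fit: RMSE per time series, normalized by each
    series' spread so small regions carry comparable weight, then averaged."""
    sim = model(r)
    return (np.sqrt(np.mean((sim - obs) ** 2, axis=1)) / obs.std(axis=1)).mean()

# Likelihood-free "keep the best draws" calibration over a uniform prior:
# the spread of the retained ensemble gives an informal uncertainty estimate.
r_prior = rng.uniform(0.1, 1.0, 5000)
scores = np.array([misfit(r) for r in r_prior])
ensemble = r_prior[scores <= np.quantile(scores, 0.05)]
r_est, r_spread = ensemble.mean(), ensemble.std()
```

    The per-series normalization is one simple answer to the weighting problem the abstract raises: without it, the largest region's residuals would dominate the aggregate misfit.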

  11. Implicit and explicit weight bias in a national sample of 4,732 medical students: the medical student CHANGES study.

    PubMed

    Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-04-01

    To examine the magnitude of explicit and implicit weight biases compared to biases against other groups, and to identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 first-year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the Implicit Association Test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among first-year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.

  12. Advancing the Explicit Representation of Lake Processes in WRF-Hydro

    NASA Astrophysics Data System (ADS)

    Yates, D. N.; Read, L.; Barlage, M. J.; Gochis, D.

    2017-12-01

    Realistic simulation of physical processes in lakes is essential for closing the water and energy budgets in a coupled land-surface and hydrologic model, such as the Weather Research and Forecasting (WRF) model's WRF-Hydro framework. A current version of WRF-Hydro, the National Water Model (NWM), includes 1,506 waterbodies derived from the National Hydrography Database, each of which is modeled using a level-pool routing scheme. This presentation discusses the integration of WRF's one-dimensional lake model into WRF-Hydro, which is used to estimate waterbody fluxes and thus explicitly represent latent and sensible heat and the mass balance occurring over the lakes. Results of these developments are presented through a case study from Lake Winnebago, Wisconsin. Scalability and computational benchmarks to expand to the continental-scale NWM are discussed.

  13. A spatially explicit hydro-ecological modeling framework (BEPS-TerrainLab V2.0): Model description and test in a boreal ecosystem in Eastern North America

    NASA Astrophysics Data System (ADS)

    Govind, Ajit; Chen, Jing Ming; Margolis, Hank; Ju, Weimin; Sonnentag, Oliver; Giasson, Marc-André

    2009-04-01

    A spatially explicit, process-based hydro-ecological model, BEPS-TerrainLab V2.0, was developed to improve the representation of ecophysiological, hydro-ecological and biogeochemical processes of boreal ecosystems in a tightly coupled manner. Several processes unique to boreal ecosystems were implemented, including sub-surface lateral water fluxes, stratification of vegetation into distinct layers for explicit ecophysiological representation, novel spatial upscaling strategies, and biogeochemical processes. To account for preferential water fluxes common in humid boreal ecosystems, a novel scheme was introduced based on laboratory analyses. Leaf-scale ecophysiological processes were upscaled to canopy scale by explicitly considering leaf physiological conditions as affected by light and water stress. The modified model was tested with 2 years of continuous measurements taken at the Eastern Old Black Spruce Site of the Fluxnet-Canada Research Network located in a humid boreal watershed in eastern Canada. Comparison of the simulated and measured ET, water-table depth (WTD), volumetric soil water content (VSWC) and gross primary productivity (GPP) revealed that BEPS-TerrainLab V2.0 simulates hydro-ecological processes with reasonable accuracy. The model was able to explain 83% of the variability in ET, 92% of the variability in GPP, and 72% of the WTD dynamics. The model suggests that in humid ecosystems such as eastern North American boreal watersheds, topographically driven sub-surface baseflow is the main mechanism of soil-water partitioning, which significantly affects local-scale hydrological conditions.

  14. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than observed.

  15. Ground-Based Telescope Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
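
    The single-variable aperture-diameter model has the usual parametric form cost = a * D^b, which is linear in log-log space, so one least-squares fit recovers the exponent. The data below are hypothetical stand-ins, not the paper's telescope sample or its fitted coefficients.

```python
import numpy as np

# Hypothetical (diameter, cost) pairs generated from a known power law
# with multiplicative noise, then refit in log-log space.
rng = np.random.default_rng(2)
D = np.array([2.5, 3.5, 4.0, 6.5, 8.0, 10.0, 30.0])   # aperture diameter, m
cost = 1.2 * D ** 2.1 * np.exp(rng.normal(0.0, 0.05, D.size))

# log(cost) = log(a) + b*log(D): a straight line, so polyfit degree 1.
b_hat, log_a = np.polyfit(np.log(D), np.log(cost), 1)
a_hat = np.exp(log_a)
```

    The recovered exponent b_hat is the "cost scales as diameter to the power b" figure this kind of study reports; the multi-variable model in the abstract adds further regressors (radius of curvature, diffraction-limited wavelength, segmentation) to the same log-linear framework.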

  16. Download Trim.Fate

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.

  17. Explicit ions/implicit water generalized Born model for nucleic acids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.

    The ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model, and utilizes a non-standard approach to defining the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes, i.e., a disconnected dielectric boundary around the solute-ion or ion-ion pairs. A fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force (PMF) for the Na+-Cl− ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with the all-atom explicit water molecular dynamics simulations used as reference. Expressed in units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of the RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of the DNA duplex; these differences in counterion binding patterns were shown earlier to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes.
    MC simulations of CoHex ions interacting with a homopolymeric poly(dA·dT) DNA duplex with modified (de-methylated) and native thymine bases are used to explore the physics behind CoHex-thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low-dielectric volume of the methyl group can contribute significantly to CoHex-thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range, and may be important to consider in the context of methylation effects on DNA condensation.

  18. Explicit ions/implicit water generalized Born model for nucleic acids

    NASA Astrophysics Data System (ADS)

    Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.

    2018-05-01

    The ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure, and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model and utilizes a non-standard approach to define the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes, i.e., a disconnected dielectric boundary around the solute-ion or ion-ion pairs. A fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force for the Na+-Cl- ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with all-atom explicit water molecular dynamics simulations used as reference. Expressed in the units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of the RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of the DNA duplex; these differences in the counterion binding patterns were earlier shown to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes.
    MC simulations of CoHex ions interacting with the homopolymeric poly(dA·dT) DNA duplex with modified (de-methylated) and native thymine bases are used to explore the physics behind CoHex-thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low dielectric volume of the methyl group can contribute significantly to CoHex-thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range and may be important to consider in the context of methylation effects on DNA condensation.
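
    For reference, the standard (unmodified) generalized Born form that the abstract's model builds on can be sketched as below. The paper's modifications for disconnected dielectric boundaries are not reproduced here, and the function names and fixed effective radii are illustrative.

```python
import math

def f_gb(r, ri, rj):
    """Still's effective distance; reduces to the Born radius at r = 0."""
    return math.sqrt(r * r + ri * rj * math.exp(-r * r / (4.0 * ri * rj)))

def gb_energy(q, radii, coords, eps_in=1.0, eps_out=78.5):
    """Electrostatic solvation energy in the standard GB form,
        dG = -0.5 * ke * (1/eps_in - 1/eps_out) * sum_ij qi*qj / f_gb(rij),
    where the i == j terms give the Born self-energies -0.5*(...)*q^2/R.
    Units: charges in e, distances in Angstrom, energy in kcal/mol."""
    ke = 332.0636                      # Coulomb constant in these units
    pref = -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out)
    e = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            r = math.dist(coords[i], coords[j])
            e += pref * q[i] * q[j] / f_gb(r, radii[i], radii[j])
    return e

# Born self-energy of a single monovalent ion with a 2 A effective radius:
# roughly -82 kcal/mol in water.
e_ion = gb_energy([1.0], [2.0], [(0.0, 0.0, 0.0)])
```

    In a full GB model the effective radii are themselves computed from the molecular geometry; holding them fixed, as here, keeps the sketch to the energy expression the abstract's "fully analytical description" refers to.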

  19. A Process Model of Principal Selection.

    ERIC Educational Resources Information Center

    Flanigan, J. L.; And Others

    A process model to assist school district superintendents in the selection of principals is presented in this paper. Components of the process are described, which include developing an action plan, formulating an explicit job description, advertising, assessing candidates' philosophy, conducting interview analyses, evaluating response to stress,…

  20. Explicitly represented polygon wall boundary model for the explicit MPS method

    NASA Astrophysics Data System (ADS)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, for treating arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are represented explicitly without using a distance function. The boundary conditions are derived so that, for viscous fluids and at lower computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, with results obtained by other models, and with experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.

  1. Classification of NLO operators for composite Higgs models

    NASA Astrophysics Data System (ADS)

    Alanne, Tommi; Bizot, Nicolas; Cacciapaglia, Giacomo; Sannino, Francesco

    2018-04-01

    We provide a general classification of template operators, up to next-to-leading order, that appear in chiral perturbation theories based on the two flavor patterns of spontaneous symmetry breaking SU(NF)/Sp(NF) and SU(NF)/SO(NF). All possible explicit-breaking sources parametrized by spurions transforming in the fundamental and in the two-index representations of the flavor symmetry are included. While our general framework can be applied to any model of strong dynamics, we specialize to composite-Higgs models, where the main explicit breaking sources are a current mass, the gauging of flavor symmetries, and the Yukawa couplings (for the top). For the top, we consider both bilinear couplings and linear ones à la partial compositeness. Our templates provide a basis for lattice calculations in specific models. As a special example, we consider the SU(4)/Sp(4) ≅ SO(6)/SO(5) pattern, which corresponds to the minimal fundamental composite-Higgs model. We further revisit issues related to the misalignment of the vacuum. In particular, we shed light on the physical properties of the singlet η, showing that it cannot develop a vacuum expectation value without explicit CP violation in the underlying theory.

  2. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics.

    PubMed

    Armen, Roger S; Chen, Jianhan; Brooks, Charles L

    2009-10-13

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and "noise" that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included in the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high affinity compound identification without significantly increasing the number of false positives from low affinity compounds.

  3. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics

    PubMed Central

    Armen, Roger S.; Chen, Jianhan; Brooks, Charles L.

    2009-01-01

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and “noise” that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included in the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high affinity compound identification without significantly increasing the number of false positives from low affinity compounds. PMID:20160879

  4. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. The approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
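
    The polynomial-root-finding idea can be sketched in the simplest non-trivial case, tau_m = 2*tau_s, where the threshold equation reduces to a quadratic. The function below is an illustrative reconstruction under that assumption, not the paper's algorithm, which handles general rational ratios of time constants via higher-degree polynomials.

```python
import numpy as np

def next_spike_time(v0, i0, tau_m, tau_s, theta):
    """Event-driven spike test for a LIF neuron with one exponential current:
        tau_m dV/dt = -V + i0 * exp(-t / tau_s),
    whose explicit solution is
        V(t) = (v0 - a) e^{-t/tau_m} + a e^{-t/tau_s},
        a = i0 * tau_s / (tau_s - tau_m).
    With tau_m = 2 * tau_s, substituting x = e^{-t/tau_m} turns the threshold
    condition V(t) = theta into the quadratic a x^2 + (v0 - a) x - theta = 0."""
    assert abs(tau_m - 2.0 * tau_s) < 1e-9, "sketch assumes tau_m = 2*tau_s"
    a = i0 * tau_s / (tau_s - tau_m)
    roots = np.roots([a, v0 - a, -theta])
    valid = [z.real for z in roots if abs(z.imag) < 1e-9 and 0.0 < z.real <= 1.0]
    if not valid:
        return None                          # threshold is never reached
    return -tau_m * np.log(max(valid))       # largest x <=> earliest time

# From rest with i0 = 2, tau_m = 20 ms, tau_s = 10 ms: the peak voltage is
# 0.5, so theta = 0.4 is crossed (at about t = 6.47 ms) and theta = 0.6 never is.
t_spike = next_spike_time(0.0, 2.0, 20.0, 10.0, 0.4)
```

    This is exactly the two-ingredient recipe the abstract states: an explicit between-spike solution, plus an explicit test (here, existence of a root in (0, 1]) that predicts whether and when the next spike occurs, with no time-stepping.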

  5. Toward an Optimal Pedagogy for Teamwork.

    PubMed

    Earnest, Mark A; Williams, Jason; Aagaard, Eva M

    2017-10-01

    Teamwork and collaboration are increasingly listed as core competencies for undergraduate health professions education. Despite the clear mandate for teamwork training, the optimal method for providing that training is much less certain. In this Perspective, the authors propose a three-level classification of pedagogical approaches to teamwork training based on the presence of two key learning factors: interdependent work and explicit training in teamwork. In this classification framework, level 1 (minimal team learning) is where learners work in small groups but neither of the key learning factors is present. Level 2 (implicit team learning) engages learners in interdependent learning activities but does not include an explicit focus on teamwork. Level 3 (explicit team learning) creates environments where teams work interdependently toward common goals and are given explicit instruction and practice in teamwork. The authors provide examples that demonstrate each level. They then propose that the third level, explicit team learning, represents a best-practice approach to teaching teamwork, highlighting their experience with an explicit team learning course at the University of Colorado Anschutz Medical Campus. Finally, they discuss several challenges to implementing explicit team-learning-based curricula: the lack of a common teamwork model on which to anchor such a curriculum; the question of whether the knowledge, skills, and attitudes acquired during training would be transferable to the authentic clinical environment; and effectively evaluating the impact of explicit team learning.

  6. Organism and population-level ecological models for ...

    EPA Pesticide Factsheets

    Ecological risk assessment typically focuses on animal populations as endpoints for regulatory ecotoxicology. Scientists at USEPA are developing models for animal populations exposed to a wide range of chemicals, from pesticides to emerging contaminants. Modeled taxa include aquatic and terrestrial invertebrates, fish, amphibians, and birds; the models employ a wide range of methods, from matrix-based projection models to mechanistic bioenergetics models and spatially explicit population models.

  7. TRIM.FaTE Public Reference Library Documentation

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.

  8. General Retarded Contact Self-energies in and beyond the Non-equilibrium Green's Functions Method

    NASA Astrophysics Data System (ADS)

    Kubis, Tillmann; He, Yu; Andrawis, Robert; Klimeck, Gerhard

    2016-03-01

    Retarded contact self-energies in the framework of nonequilibrium Green's functions make it possible to model the impact of lead structures on a device without explicitly including the leads in the actual device calculation. Most contact self-energy algorithms are limited to homogeneous or periodic, semi-infinite lead structures. In this work, the complex absorbing potential method is extended to solve retarded contact self-energies for arbitrary lead structures, including irregular and randomly disordered leads. The method is verified for regular leads against common approaches, and on physically equivalent but numerically different irregular leads. Transmission results for randomly alloyed In0.5Ga0.5As structures show the importance of disorder in the leads. The concept of retarded contact self-energies is also extended to model passivation of atomically resolved surfaces without explicitly enlarging the device Hamiltonian.

  9. Constant pH Molecular Dynamics of Proteins in Explicit Solvent with Proton Tautomerism

    PubMed Central

    Goh, Garrett B.; Hulbert, Benjamin S.; Zhou, Huiqing; Brooks, Charles L.

    2015-01-01

    pH is a ubiquitous regulator of biological activity, including protein folding, protein-protein interactions and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMDMSλD). In the CPHMDMSλD framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMDMSλD simulations model realistic pH-dependent properties of proteins such as hen egg-white lysozyme (HEWL), the binding domain of 2-oxoglutarate dehydrogenase (BBL) and the N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMDMSλD framework for nucleic acids, accurate modeling of the pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. PMID:24375620
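
    A hedged sketch of how pKa values are typically extracted from constant-pH simulation output via the Henderson-Hasselbalch relation, and scored as the RMSE the abstract quotes. The titration data below are fake, the single-site Hill slope of 1 is an assumption, and none of this is the CPHMD analysis pipeline itself.

```python
import numpy as np

def pka_from_titration(ph, f_deprot):
    """Estimate a pKa from unprotonated fractions at several pH values using
    Henderson-Hasselbalch, pKa = pH - log10(f / (1 - f)), averaged over the
    pH points where the fraction is well determined (Hill slope of 1 assumed)."""
    ph, f = np.asarray(ph), np.asarray(f_deprot)
    ok = (f > 0.05) & (f < 0.95)
    return float(np.mean(ph[ok] - np.log10(f[ok] / (1.0 - f[ok]))))

# Fake titration curves from known pKa values plus sampling noise, then the
# comparison metric the abstract reports: RMSE of predicted vs reference pKa.
rng = np.random.default_rng(3)
ph = np.linspace(2.0, 8.0, 7)
pka_true = np.array([3.5, 4.4, 6.1])
pka_pred = np.array([
    pka_from_titration(ph, 1.0 / (1.0 + 10.0 ** (p - ph))
                           + rng.normal(0.0, 0.01, ph.size))
    for p in pka_true
])
rmse = float(np.sqrt(np.mean((pka_pred - pka_true) ** 2)))
```

    Restricting the estimate to intermediate fractions avoids the points where log10(f/(1-f)) is dominated by sampling noise; fitting a full Hill curve is the more common production-grade alternative.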

  10. Buns, Scissors and Strawberry Laces--A Model of Science Education?

    ERIC Educational Resources Information Center

    Walsh, Ed; Edwards, Rebecca

    2009-01-01

    Models are included in the science National Curriculum because modelling is a key tool for scientists and an integral part of how science works. Modelling is explicitly referred to in the Programmes of Study for Science at Key Stages 3 and 4 (ages 11-16) and in Assessing Pupils' Progress (APP). Pupils need to learn how to use models because they are…

  11. Development of landscape-level habitat suitability models for ten wildlife species in the central hardwoods region

    Treesearch

    Chadwick D. Rittenhouse; William D. Dijak; Frank R. Thompson III; Joshua J. Millspaugh

    2007-01-01

    Reports landscape-level habitat suitability models for 10 species in the Central Hardwoods Region of the Midwestern United States: American woodcock, cerulean warbler, Henslow's sparrow, Indiana bat, northern bobwhite, ruffed grouse, timber rattlesnake, wood thrush, worm-eating warbler, and yellow-breasted chat. All models included spatially explicit variables and...

  12. CDPOP: A spatially explicit cost distance population genetics program

    Treesearch

    Erin L. Landguth; S. A. Cushman

    2010-01-01

    Spatially explicit simulation of gene flow in complex landscapes is essential to explain observed population responses and provide a foundation for landscape genetics. To address this need, we wrote a spatially explicit, individual-based population genetics model (CDPOP). The model implements individual-based population modelling with Mendelian inheritance and k-allele...

  13. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  14. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
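    The offline/online split that makes explicit MPC cheap at run time can be sketched as follows: mp-QP precomputes a polyhedral partition of the state space with one affine feedback law per region, so the online controller only performs a region lookup. The regions and gains below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Explicit MPC moves the optimization offline: mp-QP partitions the state
# space into polyhedral "critical regions", each paired with an affine
# control law u = F_i x + g_i.  Online control is then just a lookup.
# The two regions below are hypothetical, not derived from a real mp-QP.
regions = [
    # (A_i, b_i) describe {x : A_i x <= b_i}; (F_i, g_i) give u = F_i x + g_i
    (np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([0.0, 5.0]),
     np.array([[-0.5, -0.1]]), np.array([0.0])),
    (np.array([[-1.0, 0.0], [1.0, 0.0]]), np.array([0.0, 5.0]),
     np.array([[-0.8, -0.2]]), np.array([0.1])),
]

def explicit_mpc(x):
    """Online step: locate the region containing x and apply its affine law."""
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):
            return F @ x + g
    raise ValueError("state outside the explored partition")

u = explicit_mpc(np.array([1.0, 0.5]))   # falls in the second region
```

    In a real controller the partition may contain thousands of regions, so practical implementations organize the lookup as a binary search tree rather than a linear scan.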

  15. Total Risk Integrated Methodology (TRIM) - TRIM.FaTE

    EPA Pesticide Factsheets

    TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.

  16. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
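    For context, the classical single-image (Kirkwood/Friedman) approximation, which the multiple-image method described above refines, places one image charge outside the spherical cavity. A minimal sketch with illustrative numbers:

```python
# Kirkwood/Friedman single-image approximation: the reaction field of a
# point charge q at distance s from the center of a spherical cavity of
# radius a (vacuum inside, dielectric eps_out outside) is approximated by
# one image charge placed outside the cavity.  The paper's multiple-image
# method refines this single-image picture; the numbers are illustrative.
def friedman_image(q, s, a, eps_out, eps_in=1.0):
    """Return (image charge, image distance from the cavity center)."""
    gamma = (eps_out - eps_in) / (eps_out + eps_in)
    q_img = -gamma * (a / s) * q      # image magnitude
    r_img = a * a / s                 # classical image position, > a
    return q_img, r_img

q_img, r_img = friedman_image(q=1.0, s=5.0, a=10.0, eps_out=80.0)
# the image sits at a**2 / s = 20.0, outside the cavity of radius 10.0
```

    In the conductor limit (eps_out large) gamma approaches 1 and the formula reduces to the textbook image charge for a grounded sphere, which is a useful sanity check.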

  17. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  18. Exact Local Correlations and Full Counting Statistics for Arbitrary States of the One-Dimensional Interacting Bose Gas

    NASA Astrophysics Data System (ADS)

    Bastianello, Alvise; Piroli, Lorenzo; Calabrese, Pasquale

    2018-05-01

    We derive exact analytic expressions for the n-body local correlations in the one-dimensional Bose gas with contact repulsive interactions (Lieb-Liniger model) in the thermodynamic limit. Our results are valid for arbitrary states of the model, including ground and thermal states, stationary states after a quantum quench, and nonequilibrium steady states arising in transport settings. Calculations for these states are explicitly presented and physical consequences are critically discussed. We also show that the n-body local correlations are directly related to the full counting statistics for the particle-number fluctuations in a short interval, for which we provide an explicit analytic result.

  19. From puddles to planet: modeling approaches to vector-borne diseases at varying resolution and scale.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A; Smith, David L

    2015-08-01

    Since the original Ross-Macdonald formulations of vector-borne disease transmission, there has been a broad proliferation of mathematical models of vector-borne disease, but many of these models retain most or all of the simplifying assumptions of the original formulations. Recently, there has been a new expansion of mathematical frameworks that contain explicit representations of the vector life cycle including aquatic stages, multiple vector species, host heterogeneity in biting rate, realistic vector feeding behavior, and spatial heterogeneity. In particular, there are now multiple frameworks for spatially explicit dynamics with movements of vector, host, or both. These frameworks are flexible and powerful, but require additional data to take advantage of these features. For a given question posed, utilizing a range of models with varying complexity and assumptions can provide a deeper understanding of the answers derived from models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362

  1. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in the last decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, high non-linearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto-)optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse that, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimensions (Bellman's curse of dimensionality), so that SDP cannot be used with water systems whose state vector includes more than a few (2-3) units. (ii) An explicit model of each system component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear, distributed, process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
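    Batch-mode RL of the kind described is commonly implemented as fitted Q-iteration: the value function is regressed repeatedly over a fixed batch of one-step transitions collected from a simulator or historical records. The toy environment, actions, and tabular "regressor" below are illustrative stand-ins only.

```python
# Fitted Q-iteration, a common batch-mode RL scheme: learn Q(x, u) from a
# fixed batch of one-step transitions (x, u, r, x') without any explicit
# model of the system, which is how bRL sidesteps the curse of modelling.
# States, actions, rewards, and the tabular "regressor" are toy stand-ins;
# real applications use non-linear regressors over continuous states.

ACTIONS = [0, 1]      # hypothetical release decisions
GAMMA = 0.95          # discount factor

def fitted_q_iteration(batch, n_iter=50):
    q = {}            # tabular stand-in for a regression model
    for _ in range(n_iter):
        targets = {}
        for x, u, r, x2 in batch:
            best_next = max(q.get((x2, a), 0.0) for a in ACTIONS)
            targets[(x, u)] = r + GAMMA * best_next
        q = targets   # "refit" the regressor on the new targets
    return q

# Toy batch: from state 0, action 1 earns reward 1 and stays in state 0.
batch = [(0, 1, 1.0, 0), (0, 0, 0.0, 0)]
q = fitted_q_iteration(batch)   # q[(0, 1)] approaches 1 / (1 - GAMMA)
```

    Replacing the dictionary with a tree-ensemble or neural regressor lets the same loop handle continuous reservoir states, which is where the functional approximation of the SDP value function comes in.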

  2. Spectral wave dissipation by submerged aquatic vegetation in a back-barrier estuary

    USGS Publications Warehouse

    Nowacki, Daniel J.; Beudin, Alexis; Ganju, Neil K.

    2017-01-01

    Submerged aquatic vegetation is generally thought to attenuate waves, but this interaction remains poorly characterized in shallow-water field settings with locally generated wind waves. Better quantification of wave–vegetation interaction can provide insight to morphodynamic changes in a variety of environments and also is relevant to the planning of nature-based coastal protection measures. Toward that end, an instrumented transect was deployed across a Zostera marina (common eelgrass) meadow in Chincoteague Bay, Maryland/Virginia, U.S.A., to characterize wind-wave transformation within the vegetated region. Field observations revealed wave-height reduction, wave-period transformation, and wave-energy dissipation with distance into the meadow, and the data informed and calibrated a spectral wave model of the study area. The field observations and model results agreed well when local wind forcing and vegetation-induced drag were included in the model, either explicitly as rigid vegetation elements or implicitly as large bed-roughness values. Mean modeled parameters were similar for both the explicit and implicit approaches, but the spectral performance of the explicit approach was poor compared to the implicit approach. The explicit approach over-predicted low-frequency energy within the meadow because the vegetation scheme determines dissipation using mean wavenumber and frequency, in contrast to the bed-friction formulations, which dissipate energy in a variable fashion across frequency bands. Regardless of the vegetation scheme used, vegetation was the most important component of wave dissipation within much of the study area. These results help to quantify the influence of submerged aquatic vegetation on wave dynamics in future model parameterizations, field efforts, and coastal-protection measures.

  3. Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Harding, D. D.

    1977-01-01

    The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach, termed the explicit model, is presented in which all energy sources and sinks of a droplet may be considered. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to studies of pollution scavenging and acid rain.

  4. Some aspects of algorithm performance and modeling in transient analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1981-01-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures).

  5. Exact models for isotropic matter

    NASA Astrophysics Data System (ADS)

    Thirukkanesh, S.; Maharaj, S. D.

    2006-04-01

    We study the Einstein-Maxwell system of equations in spherically symmetric gravitational fields for static interior spacetimes. The condition for pressure isotropy is reduced to a recurrence equation with variable, rational coefficients. We demonstrate that this difference equation can be solved in general using mathematical induction. Consequently, we can find an explicit exact solution to the Einstein-Maxwell field equations. The metric functions, energy density, pressure and the electric field intensity can be found explicitly. Our result contains models found previously, including the neutron star model of Durgapal and Bannerji. By placing restrictions on parameters arising in the general series, we show that the series terminate and there exist two linearly independent solutions. Consequently, it is possible to find exact solutions in terms of elementary functions, namely polynomials and algebraic functions.

  6. Five challenges for spatial epidemic models

    PubMed Central

    Riley, Steven; Eames, Ken; Isham, Valerie; Mollison, Denis; Trapman, Pieter

    2015-01-01

    Infectious disease incidence data are increasingly available at the level of the individual and include high-resolution spatial components. Therefore, we are now better able to challenge models that explicitly represent space. Here, we consider five topics within spatial disease dynamics: the construction of network models; characterising threshold behaviour; modelling long-distance interactions; the appropriate scale for interventions; and the representation of population heterogeneity. PMID:25843387

  7. A Knowledge Navigation Method for the Domain of Customers' Services of Mobile Communication Corporations in China

    NASA Astrophysics Data System (ADS)

    Wu, Jiangning; Wang, Xiaohuan

    The rapidly increasing number of mobile phone users and types of services has led to a great accumulation of complaint information. How to use this information to enhance the quality of customer services is a major issue at present. To handle this kind of problem, the paper presents an approach to constructing a domain knowledge map for navigating explicit and tacit knowledge in two ways: building a Topic Map-based explicit knowledge navigation model, which includes domain TM construction, a semantic topic expansion algorithm, and VSM-based similarity calculation; and building a Social Network Analysis-based tacit knowledge navigation model, which includes a multi-relational expert navigation algorithm and criteria to evaluate the performance of expert networks. In doing so, both the customer managers and the operators in call centers can find the appropriate knowledge and experts quickly and accurately. The experimental results show that the above method is very powerful for knowledge navigation.
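    The VSM-based similarity step mentioned above is typically a cosine similarity between term-weight vectors. A minimal sketch, with made-up term weights:

```python
import math

# Vector Space Model (VSM) similarity: complaint texts and topics become
# term-weight vectors, and similarity is the cosine of the angle between
# them.  The term weights below are invented for illustration.
def cosine_similarity(v1, v2):
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

a = {"billing": 0.8, "roaming": 0.6}
b = {"billing": 0.8, "signal": 0.6}
sim = cosine_similarity(a, b)   # 0.8 * 0.8 / (1.0 * 1.0) = 0.64
```

    In practice the weights would come from a TF-IDF style scheme over the complaint corpus rather than being assigned by hand.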

  8. The Environment Makes a Difference: The Impact of Explicit and Implicit Attitudes as Precursors in Different Food Choice Tasks

    PubMed Central

    König, Laura M.; Giese, Helge; Schupp, Harald T.; Renner, Britta

    2016-01-01

    Studies show that implicit and explicit attitudes influence food choice. However, precursors of food choice often are investigated using tasks offering a very limited number of options despite the comparably complex environment surrounding real life food choice. In the present study, we investigated how the assortment impacts the relationship between implicit and explicit attitudes and food choice (confectionery and fruit), assuming that a more complex choice architecture is more taxing on cognitive resources. Specifically, a binary and a multiple option choice task based on the same stimulus set (fake food items) were presented to ninety-seven participants. Path modeling revealed that both explicit and implicit attitudes were associated with relative food choice (confectionery vs. fruit) in both tasks. In the binary option choice task, both explicit and implicit attitudes were significant precursors of food choice, with explicit attitudes having a greater impact. Conversely, in the multiple option choice task, the additive impact of explicit and implicit attitudes was qualified by an interaction indicating that, even if explicit and implicit attitudes toward confectionery were inconsistent, more confectionery was chosen than fruit if either was positive. This compensatory ‘one is sufficient’-effect indicates that the structure of the choice environment modulates the relationship between attitudes and choice. The study highlights that environmental constraints, such as the number of choice options, are an important boundary condition that need to be included when investigating the relationship between psychological precursors and behavior. PMID:27621719

  9. The Environment Makes a Difference: The Impact of Explicit and Implicit Attitudes as Precursors in Different Food Choice Tasks.

    PubMed

    König, Laura M; Giese, Helge; Schupp, Harald T; Renner, Britta

    2016-01-01

    Studies show that implicit and explicit attitudes influence food choice. However, precursors of food choice often are investigated using tasks offering a very limited number of options despite the comparably complex environment surrounding real life food choice. In the present study, we investigated how the assortment impacts the relationship between implicit and explicit attitudes and food choice (confectionery and fruit), assuming that a more complex choice architecture is more taxing on cognitive resources. Specifically, a binary and a multiple option choice task based on the same stimulus set (fake food items) were presented to ninety-seven participants. Path modeling revealed that both explicit and implicit attitudes were associated with relative food choice (confectionery vs. fruit) in both tasks. In the binary option choice task, both explicit and implicit attitudes were significant precursors of food choice, with explicit attitudes having a greater impact. Conversely, in the multiple option choice task, the additive impact of explicit and implicit attitudes was qualified by an interaction indicating that, even if explicit and implicit attitudes toward confectionery were inconsistent, more confectionery was chosen than fruit if either was positive. This compensatory 'one is sufficient'-effect indicates that the structure of the choice environment modulates the relationship between attitudes and choice. The study highlights that environmental constraints, such as the number of choice options, are an important boundary condition that need to be included when investigating the relationship between psychological precursors and behavior.

  10. Medical School Factors Associated with Changes in Implicit and Explicit Bias Against Gay and Lesbian People among 3492 Graduating Medical Students.

    PubMed

    Phelan, Sean M; Burke, Sara E; Hardeman, Rachel R; White, Richard O; Przedworski, Julia; Dovidio, John F; Perry, Sylvia P; Plankey, Michael; Cunningham, Brooke A; Finstad, Deborah; Yeazel, Mark W; van Ryn, Michelle

    2017-11-01

    Implicit and explicit bias among providers can influence the quality of healthcare. Efforts to address sexual orientation bias in new physicians are hampered by a lack of knowledge of school factors that influence bias among students. To determine whether medical school curriculum, role modeling, diversity climate, and contact with sexual minorities predict bias among graduating students against gay and lesbian people. Prospective cohort study. A sample of 4732 first-year medical students was recruited from a stratified random sample of 49 US medical schools in the fall of 2010 (81% response; 55% of eligible), of which 94.5% (4473) identified as heterosexual. Seventy-eight percent of baseline respondents (3492) completed a follow-up survey in their final semester (spring 2014). Medical school predictors included formal curriculum, role modeling, diversity climate, and contact with sexual minorities. Outcomes were year 4 implicit and explicit bias against gay men and lesbian women, adjusted for bias at year 1. In multivariate models, lower explicit bias against gay men and lesbian women was associated with more favorable contact with LGBT faculty, residents, students, and patients, and perceived skill and preparedness for providing care to LGBT patients. Greater explicit bias against lesbian women was associated with discrimination reported by sexual minority students (b = 1.43 [0.16, 2.71]; p = 0.03). Lower implicit sexual orientation bias was associated with more frequent contact with LGBT faculty, residents, students, and patients (b = -0.04 [-0.07, -0.01]; p = 0.008). Greater implicit bias was associated with more faculty role modeling of discriminatory behavior (b = 0.34 [0.11, 0.57]; p = 0.004). Medical schools may reduce bias against sexual minority patients by reducing negative role modeling, improving the diversity climate, and improving student preparedness to care for this population.

  11. The Role of Sexually Explicit Material (SEM) in the Sexual Development of Black Young Same-Sex-Attracted Men

    PubMed Central

    Morgan, Anthony; Ogunbajo, Adedotun; Trent, Maria; Harper, Gary W.; Fortenberry, J. Dennis

    2015-01-01

    Sexually explicit material (SEM) (including Internet, video, and print) may play a key role in the lives of Black same-sex sexually active youth by providing the only information available to learn about sexual development. There is limited school- and/or family-based sex education to serve as a model for sexual behaviors for Black youth. We describe the role SEM plays in the sexual development of a sample of Black same-sex-attracted (SSA) young adolescent men ages 15–19. Adolescents recruited from clinics, social networking sites, and through snowball sampling were invited to participate in a 90-min, semi-structured qualitative interview. Most participants described using SEM prior to their first same-sex sexual experience. Participants described using SEM primarily for sexual development, including learning about sexual organs and function, the mechanics of same-gender sex, and negotiating one's sexual identity. Secondary functions were to determine readiness for sex; to learn about sexual performance, including understanding sexual roles and responsibilities (e.g., "top" or "bottom"); to introduce sexual performance scripts; and to develop models for how sex should feel (e.g., pleasure and pain). Youth also described engaging in sexual behaviors (including condom non-use and/or swallowing ejaculate) that were modeled on SEM. Comprehensive sexuality education programs should be designed to address the unmet needs of young Black SSA men, with explicit focus on sexual roles and behaviors that may be inaccurately portrayed and/or involve sexual risk-taking (such as unprotected anal intercourse and swallowing ejaculate) in SEM. This work also calls for the development of Internet-based HIV/STI prevention strategies targeting young Black SSA men who may be accessing SEM. PMID:25677334

  12. REMOTE SENSING AND SPATIALLY EXPLICIT LANDSCAPE-BASED NITROGEN MODELING METHODS DEVELOPMENT IN THE NEUSE RIVER BASIN, NC

    EPA Science Inventory

    The objective of this research was to model and map the spatial patterns of excess nitrogen (N) sources across the landscape within the Neuse River Basin (NRB) of North Carolina. The process included an initial land cover characterization effort to map landscape "patches" at ...

  13. Toward a Predictive Model of Arctic Coastal Retreat in a Warming Climate, Beaufort Sea, Alaska

    DTIC Science & Technology

    2011-09-30

    level by waves and surge and tide. Melt rate is governed by an empirically based iceberg melting algorithm that includes explicitly the roles of wave... Thermal erosion of a permafrost coastline: Improving process-based models using time-lapse photography, Arctic Alpine Antarctic Research 43(3): 474

  14. A Model for Effective Implementation of Flexible Programme Delivery

    ERIC Educational Resources Information Center

    Normand, Carey; Littlejohn, Allison; Falconer, Isobel

    2008-01-01

    The model developed here is the outcome of a project funded by the Quality Assurance Agency Scotland to support implementation of flexible programme delivery (FPD) in post-compulsory education. We highlight key features of FPD, including explicit and implicit assumptions about why flexibility is needed and the perceived barriers and solutions to…

  15. Simulation modeling of forest landscape disturbances: Where do we go from here?

    Treesearch

    Ajith H. Perera; Brian R. Sturtevant; Lisa J. Buse

    2015-01-01

    It was nearly a quarter-century ago when Turner and Gardner (1991) drew attention to methods of quantifying landscape patterns and processes, including simulation modeling. The many authors who contributed to that seminal text collectively signaled the emergence of a new field—spatially explicit simulation modeling of broad-scale ecosystem dynamics. Of particular note...

  16. Constant pH molecular dynamics of proteins in explicit solvent with proton tautomerism.

    PubMed

    Goh, Garrett B; Hulbert, Benjamin S; Zhou, Huiqing; Brooks, Charles L

    2014-07-01

    pH is a ubiquitous regulator of biological activity, including protein folding, protein-protein interactions, and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions that involve internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMD(MSλD)). In the CPHMD(MSλD) framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMD(MSλD) simulations model realistic pH-dependent properties of proteins such as hen egg-white lysozyme (HEWL), the binding domain of 2-oxoglutarate dehydrogenase (BBL), and the N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMD(MSλD) framework for nucleic acids, accurate modeling of pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. © 2013 Wiley Periodicals, Inc.

  17. Effects of Explicit Instructions, Metacognition, and Motivation on Creative Performance

    ERIC Educational Resources Information Center

    Hong, Eunsook; O'Neil, Harold F.; Peng, Yun

    2016-01-01

    Effects of explicit instructions, metacognition, and intrinsic motivation on creative homework performance were examined in 303 Chinese 10th-grade students. Models that represent hypothesized relations among these constructs and trait covariates were tested using structural equation modelling. Explicit instructions geared to originality were…

  18. Fuselage Versus Subcomponent Panel Response Correlation Based on ABAQUS Explicit Progressive Damage Analysis Tools

    NASA Technical Reports Server (NTRS)

    Gould, Kevin E.; Satyanarayana, Arunkumar; Bogert, Philip B.

    2016-01-01

    Analysis performed in this study substantiates the need for high fidelity vehicle level progressive damage analyses (PDA) structural models for use in the verification and validation of proposed sub-scale structural models and to support required full-scale vehicle level testing. PDA results are presented that capture and correlate the responses of sub-scale 3-stringer and 7-stringer panel models and an idealized 8-ft diameter fuselage model, which provides a vehicle level environment for the 7-stringer sub-scale panel model. Two unique skin-stringer attachment assumptions are considered and correlated in the models analyzed: the TIE constraint interface versus the cohesive element (COH3D8) interface. Evaluating different interfaces allows for assessing a range of predicted damage modes, including delamination and crack propagation responses. Damage models considered in this study are the ABAQUS built-in Hashin procedure and the COmplete STress Reduction (COSTR) damage procedure implemented through a VUMAT user subroutine using the ABAQUS/Explicit code.

  19. Modeled and monitored variation in space and time of PCB-153 concentrations in air, sediment, soil and aquatic biota on a European scale.

    PubMed

    Hauck, Mara; Huijbregts, Mark A J; Hollander, Anne; Hendriks, A Jan; van de Meent, Dik

    2010-08-15

    We evaluated various modeling options for estimating concentrations of PCB-153 in the environment and in biota across Europe, using a nested multimedia fate model coupled with a bioaccumulation model. The most detailed model setup estimates concentrations in air, soil, fresh water sediment and fresh water biota with spatially explicit environmental characteristics and spatially explicit emissions to air and water in the period 1930-2005. Model performance was evaluated with the root mean square error (RMSE(log)), based on the difference between estimated and measured concentrations. The RMSE(log) was 5.4 for air, 5.6-6.3 for sediment and biota, and 5.5 for soil in the most detailed model scenario. Generally, model estimations tended to underestimate observed values for all compartments, except air. The decline in observed concentrations was also slightly underestimated by the model for the period where measurements were available (1989-2002). Applying a generic model setup with averaged emissions and averaged environmental characteristics, the RMSE(log) increased to 21 for air and 49 for sediment. For soil the RMSE(log) decreased to 3.5. We found that including spatial variation in emissions was most relevant for all compartments, except soil, while including spatial variation in environmental characteristics was less influential. For improving predictions of concentrations in sediment and aquatic biota, including emissions to water was found to be relevant as well. Copyright 2009 Elsevier B.V. All rights reserved.
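    The model-performance metric used above, an RMSE computed on log-transformed concentrations, can be sketched as follows. The function name and the choice of base-10 logarithms are our assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def rmse_log(predicted, observed):
    """Root mean square error of log10-transformed concentrations.

    Working in log space penalizes order-of-magnitude discrepancies
    symmetrically, which suits concentrations spanning several decades.
    """
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    diff = np.log10(predicted) - np.log10(observed)
    return float(np.sqrt(np.mean(diff ** 2)))

# A prediction low by a factor of 10 everywhere gives RMSE(log) = 1.
score = rmse_log([1e-3, 1e-5], [1e-2, 1e-4])
```

    On this convention, the reported RMSE(log) of 5.4 for air would correspond to typical misfits of more than five orders of magnitude.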

  20. Modeling of outgassing and matrix decomposition in carbon-phenolic composites

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1994-01-01

    Work done in the period Jan. - June 1994 is summarized. Two threads of research have been followed. First, the thermodynamics approach was used to model the chemical and mechanical responses of composites exposed to high temperatures. The thermodynamics approach lends itself easily to the usage of variational principles. This thermodynamic-variational approach has been applied to the transpiration cooling problem. The second thread is the development of a better algorithm to solve the governing equations resulting from the modeling. Explicit finite difference method is explored for solving the governing nonlinear, partial differential equations. The method allows detailed material models to be included and solution on massively parallel supercomputers. To demonstrate the feasibility of the explicit scheme in solving nonlinear partial differential equations, a transpiration cooling problem was solved. Some interesting transient behaviors were captured such as stress waves and small spatial oscillations of transient pressure distribution.

  1. Implicit and explicit subgrid-scale modeling in discontinuous Galerkin methods for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2017-11-01

    Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and that, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that of explicit models.

  2. Improvement, Verification, and Refinement of Spatially-Explicit Exposure Models in Risk Assessment - FishRand Spatially-Explicit Bioaccumulation Model Demonstration

    DTIC Science & Technology

    2015-08-01

    Figure 4. Data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment by... DDD dichlorodiphenyldichloroethane, DDE dichlorodiphenyldichloroethylene, DDT dichlorodiphenyltrichloroethane, DoD Department of Defense, ERM... (DDD) at the other site. The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the

  3. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  4. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
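    The IMEX idea described above, advancing stiff terms (here, acoustic waves) implicitly while keeping the rest explicit, can be illustrated with a first-order IMEX Euler step on the model problem du/dt = λu + f(u), where λu is a stiff linear term. This toy scheme is ours for illustration only; the paper studies higher-order additive Runge–Kutta methods such as ARS343 and ARK324.

```python
import numpy as np

def imex_euler_step(u, dt, lam_stiff, f_nonstiff):
    """One first-order IMEX Euler step for du/dt = lam_stiff*u + f_nonstiff(u).

    The stiff linear term is advanced implicitly (backward Euler) and the
    non-stiff term explicitly (forward Euler):
        u_new = u + dt*lam_stiff*u_new + dt*f_nonstiff(u)
    which, for a scalar linear stiff part, solves in closed form below.
    """
    return (u + dt * f_nonstiff(u)) / (1.0 - dt * lam_stiff)

# Stiff decay plus a mild nonlinearity. A fully explicit Euler step with this
# dt would be wildly unstable (|1 + dt*lam| ~ 100); the IMEX step is stable.
lam = -1.0e4
f = lambda u: np.sin(u)
u, dt = 1.0, 1.0e-2
for _ in range(100):
    u = imex_euler_step(u, dt, lam, f)
# u has decayed smoothly toward the slow equilibrium near zero.
```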

  5. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  6. Improving Land-Surface Model Hydrology: Is an Explicit Aquifer Model Better than a Deeper Soil Profile?

    NASA Technical Reports Server (NTRS)

    Gulden, L. E.; Rosero, E.; Yang, Z.-L.; Rodell, Matthew; Jackson, C. S.; Niu, G.-Y.; Yeh, P. J.-F.; Famiglietti, J. S.

    2007-01-01

    Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the storage and movement of water (including soil moisture, snow, evaporation, and runoff) after it falls to the ground as precipitation. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy. Hence LSMs have been developed to integrate the available information, including satellite observations, using powerful computers, in order to track water storage and redistribution. The resulting maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. Recently, the models have begun to simulate groundwater storage. In this paper, we compare several possible approaches, and examine the pitfalls associated with trying to estimate aquifer parameters (such as porosity) that are required by the models. We find that explicit representation of groundwater, as opposed to the addition of deeper soil layers, considerably decreases the sensitivity of modeled terrestrial water storage to aquifer parameter choices. We also show that approximate knowledge of parameter values is not sufficient to guarantee realistic model performance: because interaction among parameters is significant, they must be prescribed as a harmonious set.

  7. Electrostatic Origin of Salt-Induced Nucleosome Array Compaction

    PubMed Central

    Korolev, Nikolay; Allahverdi, Abdollah; Yang, Ye; Fan, Yanping; Lyubartsev, Alexander P.; Nordenskiöld, Lars

    2010-01-01

    The physical mechanism of the folding and unfolding of chromatin is fundamentally related to transcription but is incompletely characterized and not fully understood. We experimentally and theoretically studied chromatin compaction by investigating the salt-mediated folding of an array made of 12 positioning nucleosomes with 177 bp repeat length. Sedimentation velocity measurements were performed to monitor the folding provoked by addition of cations Na+, K+, Mg2+, Ca2+, spermidine3+, Co(NH3)63+, and spermine4+. We found typical polyelectrolyte behavior, with the critical concentration of cation needed to bring about maximal folding covering a range of almost five orders of magnitude (from 2 μM for spermine4+ to 100 mM for Na+). A coarse-grained model of the nucleosome array based on a continuum dielectric description and including the explicit presence of mobile ions and charged flexible histone tails was used in computer simulations to investigate the cation-mediated compaction. The results of the simulations with explicit ions are in general agreement with the experimental data, whereas simple Debye-Hückel models are intrinsically incapable of describing chromatin array folding by multivalent cations. We conclude that the theoretical description of the salt-induced chromatin folding must incorporate explicit mobile ions that include ion correlation and ion competition effects. PMID:20858435

  8. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  9. Program SPACECAP: software for estimating animal density using spatially explicit capture-recapture models

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas

    2012-01-01

    1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
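    SPACECAP itself is an R package, but one core ingredient of the spatially explicit capture-recapture models it implements can be sketched in a language-neutral way: a detection function that declines with the distance between an animal's activity centre and a trap. The half-normal form below is a standard choice in this literature; the parameter names lam0 and sigma are conventional, not taken from SPACECAP's interface.

```python
import numpy as np

def halfnormal_detection(dist, lam0, sigma):
    """Expected encounter rate at distance `dist` from an activity centre.

    A standard half-normal detection function for spatial capture-recapture:
        lambda(d) = lam0 * exp(-d**2 / (2 * sigma**2))
    where lam0 is the baseline rate at the trap location and sigma sets the
    spatial scale of movement around the activity centre.
    """
    dist = np.asarray(dist, dtype=float)
    return lam0 * np.exp(-dist ** 2 / (2.0 * sigma ** 2))

# Encounter rate falls to about 61% of baseline one sigma from the centre,
# and is nearly zero three sigmas away.
rates = halfnormal_detection([0.0, 1.0, 3.0], lam0=0.5, sigma=1.0)
```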

  10. Role of seasonality on predator-prey-subsidy population dynamics.

    PubMed

    Levy, Dorian; Harrington, Heather A; Van Gorder, Robert A

    2016-05-07

    The role of seasonality on predator-prey interactions in the presence of a resource subsidy is examined using a system of non-autonomous ordinary differential equations (ODEs). The problem is motivated by the Arctic, inhabited by the ecological system of arctic foxes (predator), lemmings (prey), and seal carrion (subsidy). We construct two nonlinear, non-autonomous systems of ODEs, termed the Primary Model and the n-Patch Model. The Primary Model considers spatial factors implicitly, and the n-Patch Model considers space explicitly as a "Stepping Stone" system. We establish the boundedness of the dynamics, as well as the necessity of sufficiently nutritional food for the survival of the predator. We investigate the importance of including the resource subsidy explicitly in the model, and the importance of accounting for predator mortality during migration. We find a variety of non-equilibrium dynamics for both systems, obtaining both limit cycles and chaotic oscillations. We then discuss relevant implications for biologically interesting predator-prey systems that include subsidy under seasonal effects. Notably, we can observe the extinction or persistence of a species when the corresponding autonomous system might predict the opposite. Copyright © 2016 Elsevier Ltd. All rights reserved.
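    A minimal non-autonomous predator-prey-subsidy system of the kind described can be integrated numerically as below. The equations and all parameter values are illustrative placeholders in the spirit of the abstract (logistic prey, saturating functional responses, seasonally pulsed subsidy deposition); they are not the paper's Primary Model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=10.0, a=0.4, h=0.5, e=0.6, m=0.3,
        s_mean=0.5, s_amp=0.4, period=1.0):
    """Illustrative predator-prey-subsidy ODEs with seasonal subsidy input.

    n: prey density, p: predator density, s: subsidy (e.g. carrion) stock.
    The subsidy deposition rate oscillates seasonally with period `period`.
    """
    n, p, s = y
    subsidy_in = s_mean * (1.0 + s_amp * np.sin(2.0 * np.pi * t / period))
    f_n = a * n / (1.0 + a * h * n)        # functional response on prey
    f_s = a * s / (1.0 + a * h * s)        # functional response on subsidy
    dn = r * n * (1.0 - n / K) - f_n * p
    dp = e * (f_n + f_s) * p - m * p
    ds = subsidy_in - f_s * p - 0.2 * s    # deposition - consumption - decay
    return [dn, dp, ds]

sol = solve_ivp(rhs, (0.0, 50.0), [5.0, 1.0, 0.5], rtol=1e-8, atol=1e-10)
# Consistent with the boundedness result, trajectories remain bounded
# and non-negative over the integration window.
```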

  11. Explicit 2-D Hydrodynamic FEM Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry

    1996-08-07

    DYNA2D is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, and temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state, including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
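    The "linear polynomial" EOS named first in that list has a standard hydrocode form, p = C0 + C1μ + C2μ² + C3μ³ + (C4 + C5μ + C6μ²)E with μ = ρ/ρ0 − 1. The sketch below uses that common definition; the coefficients are placeholders, not DYNA2D defaults.

```python
def linear_polynomial_eos(rho, rho0, E, C):
    """Pressure from the linear polynomial EOS common in hydrocodes.

    p = C0 + C1*mu + C2*mu**2 + C3*mu**3 + (C4 + C5*mu + C6*mu**2)*E
    with mu = rho/rho0 - 1 (compression) and E the internal energy per unit
    reference volume. C is the coefficient tuple (C0, ..., C6).
    """
    mu = rho / rho0 - 1.0
    C0, C1, C2, C3, C4, C5, C6 = C
    return (C0 + C1 * mu + C2 * mu ** 2 + C3 * mu ** 3
            + (C4 + C5 * mu + C6 * mu ** 2) * E)

# Ideal-gas limit: setting C4 = C5 = gamma - 1 and all other terms to zero
# recovers p = (gamma - 1) * (rho/rho0) * E.
gamma = 1.4
p = linear_polynomial_eos(rho=1.2, rho0=1.0, E=2.5e5,
                          C=(0, 0, 0, 0, gamma - 1, gamma - 1, 0))
```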

  12. Modeling carbon stocks in a secondary tropical dry forest in the Yucatan Peninsula, Mexico

    Treesearch

    Zhaohua Dai; Richard A. Birdsey; Kristofer D. Johnson; Juan Manuel Dupuy; Jose Luis Hernandez-Stefanoni; Karen Richardson

    2014-01-01

    The carbon balance of secondary dry tropical forests of Mexico’s Yucatan Peninsula is sensitive to human and natural disturbances and climate change. The spatially explicit process model Forest-DeNitrification-DeComposition (DNDC) was used to estimate forest carbon dynamics in this region, including the effects of disturbance on carbon stocks. Model evaluation using...

  13. Model Drawing Strategy for Fraction Word Problem Solving of Fourth-Grade Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Sharp, Emily; Shih Dennis, Minyi

    2017-01-01

    This study used a multiple probe across participants design to examine the effects of a model drawing strategy (MDS) intervention package on fraction comparing and ordering word problem-solving performance of three Grade 4 students. MDS is a form of cognitive strategy instruction for teaching word problem solving that includes explicit instruction…

  14. Models of social evolution: can we do better to predict 'who helps whom to achieve what'?

    PubMed

    Rodrigues, António M M; Kokko, Hanna

    2016-02-05

    Models of social evolution and the evolution of helping have been classified in numerous ways. Two categorical differences have, however, escaped attention in the field. Models tend not to justify why they use a particular assumption structure about who helps whom: a large number of authors model peer-to-peer cooperation of essentially identical individuals, probably for reasons of mathematical convenience; others are inspired by particular cooperatively breeding species, and tend to assume unidirectional help where subordinates help a dominant breed more efficiently. Choices regarding what the help achieves (i.e. which life-history trait of the helped individual is improved) are similarly made without much comment: fecundity benefits are much more commonly modelled than survival enhancements, despite evidence that these may interact when the helped individual can perform life-history reallocations (load-lightening and related phenomena). We review our current theoretical understanding of effects revealed when explicitly asking 'who helps whom to achieve what', from models of mutual aid in partnerships to the very few models that explicitly contrast the strength of selection to help enhance another individual's fecundity or survival. As a result of idiosyncratic modelling choices in contemporary literature, including the varying degree to which demographic consequences are made explicit, there is surprisingly little agreement on what types of help are predicted to evolve most easily. We outline promising future directions to fill this gap. © 2016 The Author(s).

  15. Models of social evolution: can we do better to predict ‘who helps whom to achieve what’?

    PubMed Central

    Rodrigues, António M. M.; Kokko, Hanna

    2016-01-01

    Models of social evolution and the evolution of helping have been classified in numerous ways. Two categorical differences have, however, escaped attention in the field. Models tend not to justify why they use a particular assumption structure about who helps whom: a large number of authors model peer-to-peer cooperation of essentially identical individuals, probably for reasons of mathematical convenience; others are inspired by particular cooperatively breeding species, and tend to assume unidirectional help where subordinates help a dominant breed more efficiently. Choices regarding what the help achieves (i.e. which life-history trait of the helped individual is improved) are similarly made without much comment: fecundity benefits are much more commonly modelled than survival enhancements, despite evidence that these may interact when the helped individual can perform life-history reallocations (load-lightening and related phenomena). We review our current theoretical understanding of effects revealed when explicitly asking ‘who helps whom to achieve what’, from models of mutual aid in partnerships to the very few models that explicitly contrast the strength of selection to help enhance another individual's fecundity or survival. As a result of idiosyncratic modelling choices in contemporary literature, including the varying degree to which demographic consequences are made explicit, there is surprisingly little agreement on what types of help are predicted to evolve most easily. We outline promising future directions to fill this gap. PMID:26729928

  16. Mass balance modelling of contaminants in river basins: a flexible matrix approach.

    PubMed

    Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay

    2005-12-01

    A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
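    The matrix formulation described above can be sketched as follows: with a first-order overall loss rate and an advective transfer rate per reach, and an n x n connectivity matrix routing flow between reaches, steady-state chemical masses satisfy a linear system. The network, rate constants, and emissions below are illustrative assumptions in the spirit of the simpler QMX-R ("advection plus overall loss") description, not values from the paper.

```python
import numpy as np

# Four river reaches: 1 -> 2, 2 -> 4, 3 -> 4 (reach 4 is the outlet).
# conn[i, j] = 1 means reach j discharges into reach i.
conn = np.array([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]], dtype=float)

n = conn.shape[0]
k_out = 0.05 * np.ones(n)   # 1/d, combined loss: volatilization, sediment, degradation
k_adv = 0.20 * np.ones(n)   # 1/d, advective transfer rate out of each reach
emis = np.array([10.0, 0.0, 5.0, 0.0])  # kg/d, direct emissions into each reach

# Mass balance per reach i at steady state:
#   (k_adv[i] + k_out[i]) * m[i] - sum_j conn[i, j] * k_adv[j] * m[j] = emis[i]
A = np.diag(k_adv + k_out) - conn * k_adv  # conn * k_adv scales column j by k_adv[j]
mass = np.linalg.solve(A, emis)            # kg of chemical in each reach
```

    Re-segmenting the basin then amounts to editing `conn` and the rate vectors, with no change to the solver, which is the flexibility the matrix approach is designed to provide.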

  17. Trouble in Paradise: A Study of Who Is Included in an Inclusion Classroom

    ERIC Educational Resources Information Center

    Zindler, Rachel

    2009-01-01

    Background/Context: This study is based on prior research regarding the need for explicit social instruction for children with special needs, cooperative educational models, and the goals and relative successes of inclusive educational practices. The author refers to several studies on these subjects, including those by Kavale and Forness; Salend;…

  18. Spatial taxation effects on regional coal economic activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.W.; Labys, W.C.

    1982-01-01

    Taxation effects on resource production, consumption and prices are seldom evaluated, especially in the field of spatial commodity modeling. The most commonly employed linear programming model has fixed-point estimated demands and capacity constraints; hence taxation effects are difficult to model. The second type of resource allocation model, the interregional input-output model, does not include a direct and explicit price mechanism. Therefore, it is not suitable for analyzing taxation effects. The third type of spatial commodity model has been econometric in nature. While such an approach has a good deal of flexibility in modeling political and non-economic variables, it treats taxation (or tariff) effects loosely using only dummy variables, and, in many cases, must sacrifice the consistency criterion important for spatial commodity modeling. This leaves model builders only one legitimate choice for analyzing taxation effects: the quadratic programming model, which explicitly allows the interplay of regional demand and supply relations via a continuous spatial price. The model constructed by the authors relates the regional demand for and supply of coal in Appalachian markets.
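    A quadratic programming spatial equilibrium of the kind referenced here (in the Samuelson net-social-payoff tradition commonly used for such coal models) can be sketched for two supply regions and two demand markets with linear demand and supply schedules. All numbers, and the flat per-ton tax, are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

# Two supply regions, two demand markets (illustrative numbers).
a = np.array([60.0, 55.0]); b = np.array([0.5, 0.6])   # inverse demand: p = a - b*q
c = np.array([10.0, 12.0]); d = np.array([0.3, 0.4])   # inverse supply: p = c + d*q
t = np.array([[4.0, 7.0], [6.0, 3.0]])                 # transport cost, region i -> market j
tax = 2.0                                              # per-ton severance tax

def neg_nsp(x_flat):
    """Negative net social payoff (a convex quadratic), minimized over x >= 0.

    x[i, j] is the shipment from supply region i to demand market j.
    """
    x = x_flat.reshape(2, 2)
    qd = x.sum(axis=0)                          # deliveries to each market
    qs = x.sum(axis=1)                          # output of each supply region
    welfare = np.sum(a * qd - 0.5 * b * qd ** 2)   # area under demand curves
    cost = np.sum(c * qs + 0.5 * d * qs ** 2)      # area under supply curves
    trans = np.sum((t + tax) * x)                  # transport cost plus tax wedge
    return -(welfare - cost - trans)

res = minimize(neg_nsp, x0=np.ones(4), bounds=[(0, None)] * 4)
flows = res.x.reshape(2, 2)
```

    At the optimum, the demand price in each served market equals the supply price plus transport cost plus tax on every active route, so raising the tax shifts both the flow pattern and the regional price spread, which is exactly the continuous price interplay the quadratic programming formulation provides.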

  19. Molecular modelling of protein-protein/protein-solvent interactions

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler

    The inner workings of individual cells are based on intricate networks of protein-protein interactions. However, each of these individual protein interactions requires a complex physical interaction between proteins and their aqueous environment at the atomic scale. In this thesis, molecular dynamics simulations are used in three theoretical studies to gain insight at the atomic scale about protein hydration, protein structure and tubulin-tubulin (protein-protein) interactions, as found in microtubules. Also presented, in a fourth project, is a molecular model of solvation coupled with the Amber molecular modelling package, to facilitate further studies without the need of explicitly modelled water. Basic properties of a minimally solvated protein were calculated through an extended study of myoglobin hydration with explicit solvent, directly investigating water and protein polarization. Results indicate a close correlation between polarization of both water and protein and the onset of protein function. The methodology of explicit solvent molecular dynamics was further used to study tubulin and microtubules. Extensive conformational sampling of the carboxy-terminal tails of 8-tubulin was performed via replica exchange molecular dynamics, allowing the characterisation of the flexibility, secondary structure and binding domains of the C-terminal tails through statistical analysis methods. Mechanical properties of tubulin and microtubules were calculated with adaptive biasing force molecular dynamics. The function of the M-loop in microtubule stability was demonstrated in these simulations. The flexibility of this loop allowed constant contacts between the protofilaments to be maintained during simulations while the smooth deformation provided a spring-like restoring force. 
Additionally, calculating the free energy profile between the straight and bent tubulin configurations was used to test the proposed conformational change in tubulin, thought to cause microtubule destabilization. No conformational change was observed, but a nucleotide-dependent 'softening' of the interaction was found instead, suggesting that an entropic force in a microtubule configuration could be the mechanism of microtubule collapse. Finally, to overcome much of the computational cost associated with explicit solvent calculations, a new combination of molecular dynamics with the 3D-reference interaction site model (3D-RISM) of solvation was integrated into the Amber molecular dynamics package. Our implementation of 3D-RISM shows excellent agreement with explicit solvent free energy calculations. Several optimisation techniques, including a new multiple time step method, provide a nearly 100-fold performance increase, giving computational performance similar to explicit solvent.

  20. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
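The Chebyshev-based explicit iteration described in this record can be illustrated on a model problem. Below is a minimal sketch (not the authors' code) of a non-stationary Richardson iteration whose step sizes are the reciprocals of Chebyshev polynomial roots scaled to the spectral interval, applied to the linear system arising from one implicit Euler step for the 1D heat equation; the grid size, time step, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

def chebyshev_richardson(A, b, lam_min, lam_max, n_iter):
    """Solve A x = b (A symmetric positive definite) by explicit
    Richardson iteration whose step sizes are reciprocals of the roots
    of the Chebyshev polynomial scaled to [lam_min, lam_max]."""
    k = np.arange(1, n_iter + 1)
    roots = 0.5 * (lam_max + lam_min) \
        + 0.5 * (lam_max - lam_min) * np.cos((2 * k - 1) * np.pi / (2 * n_iter))
    x = np.zeros_like(b)
    for r in roots:                  # one explicit sweep per root
        x = x + (b - A @ x) / r
    return x

# Model problem: one implicit-Euler step for u_t = u_xx on (0, 1)
n, dt = 50, 1e-3
h = 1.0 / (n + 1)
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
A = np.eye(n) + (dt / h**2) * L
# Exact spectral bounds of A from the known eigenvalues of L
lam = 1 + (dt / h**2) * (2 - 2 * np.cos(np.pi * np.arange(1, n + 1) * h))
u_old = np.sin(np.pi * np.linspace(h, 1 - h, n))
u_new = chebyshev_richardson(A, u_old, lam.min(), lam.max(), 30)
```

As in the scheme described above, the iteration is fully explicit (matrix-vector products only), and the iteration count is fixed in advance from the spectral bounds rather than from a convergence test against the implicit solution.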

  1. Five challenges for spatial epidemic models.

    PubMed

    Riley, Steven; Eames, Ken; Isham, Valerie; Mollison, Denis; Trapman, Pieter

    2015-03-01

    Infectious disease incidence data are increasingly available at the level of the individual and include high-resolution spatial components. Therefore, we are now better able to challenge models that explicitly represent space. Here, we consider five topics within spatial disease dynamics: the construction of network models; characterising threshold behaviour; modelling long-distance interactions; the appropriate scale for interventions; and the representation of population heterogeneity. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  2. An explicitly solvated full atomistic model of the cardiac thin filament and application on the calcium binding affinity effects from familial hypertrophic cardiomyopathy linked mutations

    NASA Astrophysics Data System (ADS)

    Williams, Michael; Schwartz, Steven

    2015-03-01

    The previous version of our cardiac thin filament (CTF) model consisted of the troponin complex (cTn), two coiled-coil dimers of tropomyosin (Tm), and 29 actin units. We now present the newest revision of the model to include explicit solvation. The model was developed to continue our study of genetic mutations in the CTF proteins which are linked to familial hypertrophic cardiomyopathies. Binding of calcium to the cTnC subunit causes subtle conformational changes to propagate through the cTnC to the cTnI subunit, which then detaches from actin. Conformational changes propagate through to the cTnT subunit, which allows Tm to move into the open position along actin, leading to muscle contraction. Calcium dissociation allows for the reverse to occur, which results in muscle relaxation. The inclusion of explicit TIP3 water solvation allows the model to capture local solvent-protein interactions, which are important when observing the N-lobe calcium binding pocket of the cTnC. We are able to compare in silico and in vitro experimental results to better understand the physiological effects of mutants, such as the R92L/W and F110V/I of the cTnT, on the calcium binding affinity compared to the wild type.

  3. Multiscale modeling of a rectifying bipolar nanopore: explicit-water versus implicit-water simulations.

    PubMed

    Ható, Zoltán; Valiskó, Mónika; Kristóf, Tamás; Gillespie, Dirk; Boda, Dezsö

    2017-07-21

    In a multiscale modeling approach, we present computer simulation results for a rectifying bipolar nanopore at two modeling levels. In an all-atom model, we use explicit water to simulate ion transport directly with the molecular dynamics technique. In a reduced model, we use implicit water and apply the Local Equilibrium Monte Carlo method together with the Nernst-Planck transport equation. This hybrid method makes the fast calculation of ion transport possible at the price of lost details. We show that the implicit-water model is an appropriate representation of the explicit-water model when we look at the system at the device (i.e., input vs. output) level. The two models produce qualitatively similar behavior of the electrical current for different voltages and model parameters. Looking at the details of concentration and potential profiles, we find profound differences between the two models. These differences, however, do not influence the basic behavior of the model as a device because they do not influence the z-dependence of the concentration profiles which are the main determinants of current. These results then address an old paradox: how do reduced models, whose assumptions should break down in a nanoscale device, predict experimental data? Our simulations show that reduced models can still capture the overall device physics correctly, even though they get some important aspects of the molecular-scale physics quite wrong; reduced models work because they include the physics that is necessary from the point of view of device function. Therefore, reduced models can suffice for general device understanding and device design, but more detailed models might be needed for molecular level understanding.

  4. Empirical evaluation of spatial and non-spatial European-scale multimedia fate models: results and implications for chemical risk assessment.

    PubMed

    Armitage, James M; Cousins, Ian T; Hauck, Mara; Harbers, Jasper V; Huijbregts, Mark A J

    2007-06-01

    Multimedia environmental fate models are commonly-applied tools for assessing the fate and distribution of contaminants in the environment. Owing to the large number of chemicals in use and the paucity of monitoring data, such models are often adopted as part of decision-support systems for chemical risk assessment. The purpose of this study was to evaluate the performance of three multimedia environmental fate models (spatially- and non-spatially-explicit) at a European scale. The assessment was conducted for four polycyclic aromatic hydrocarbons (PAHs) and hexachlorobenzene (HCB) and compared predicted and median observed concentrations using monitoring data collected for air, water, sediments and soils. Model performance in the air compartment was reasonable for all models included in the evaluation exercise as predicted concentrations were typically within a factor of 3 of the median observed concentrations. Furthermore, there was good correspondence between predictions and observations in regions that had elevated median observed concentrations for both spatially-explicit models. On the other hand, all three models consistently underestimated median observed concentrations in sediment and soil by 1-3 orders of magnitude. Although regions with elevated median observed concentrations in these environmental media were broadly identified by the spatially-explicit models, the magnitude of the discrepancy between predicted and median observed concentrations is of concern in the context of chemical risk assessment. These results were discussed in terms of factors influencing model performance such as the steady-state assumption, inaccuracies in emission estimates and the representativeness of monitoring data.

  5. Three dimensional, non-linear, finite element analysis of compactable soil interaction with a hyperelastic wheel

    NASA Astrophysics Data System (ADS)

    Chiroux, Robert Charles

    The objective of this research was to produce a three-dimensional, non-linear, dynamic simulation of the interaction between a hyperelastic wheel rolling over compactable soil. The finite element models developed to produce the simulation utilized the ABAQUS/Explicit computer code. Within the simulation two separate bodies were modeled, the hyperelastic wheel and a compactable soil-bed. Interaction between the bodies was achieved by allowing them to come into contact but not to penetrate the contact surface. The simulation included dynamic loading of a hyperelastic rubber tire in contact with compactable soil, with a constant angular velocity or torque, as well as a tow load, applied to the wheel hub. The constraints on the wheel model produced both straight and curved paths. In addition, the simulation included a shear limit between the tire and soil, allowing for the introduction of slip. Soil properties were simulated using the Drucker-Prager Cap Plasticity model available within the ABAQUS/Explicit program. Numerical results obtained from the three-dimensional model were compared with related experimental data and showed good correlation for similar conditions. Numerical and experimental data compared well for both stress and wheel rut formation depth under a weight of 5.8 kN and a constant angular velocity applied to the wheel hub. The simulation results demonstrated the benefit of three-dimensional simulation over previous two-dimensional, plane strain simulations.

  6. Explicit formulation of second and third order optical nonlinearity in the FDTD framework

    NASA Astrophysics Data System (ADS)

    Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas

    2018-01-01

    The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software packages requires iteratively solving an implicit form of Maxwell's equations over the entire numerical space at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
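The explicit polarization update at the heart of such a nonlinear Lorentz model can be sketched in isolation, decoupled from the full FDTD field update. The following is a minimal illustration (not the authors' implementation): a leapfrog integration of P'' + γP' + ω₀²P = ω₀²(χ₁E + χ₂E² + χ₃E³) in normalized units (ε₀ = 1), with illustrative susceptibility values. Driven by a static field, the polarization relaxes to the analytic steady state χ₁E + χ₂E² + χ₃E³.

```python
# Normalized units: eps0 = 1, resonance frequency w0 = 1.
# chi1..chi3 are illustrative susceptibilities, not values from the paper.
w0, gamma = 1.0, 0.5
chi1, chi2, chi3 = 1.0, 0.5, 0.2
dt, steps = 0.01, 5000

def drive(E):
    """Instantaneous nonlinear source term of the Lorentz oscillator."""
    return w0**2 * (chi1 * E + chi2 * E**2 + chi3 * E**3)

# Explicit second-order (leapfrog) update of
#   P'' + gamma P' + w0^2 P = w0^2 (chi1 E + chi2 E^2 + chi3 E^3)
E = 0.1                        # static field: P should relax to drive(E)/w0^2
P_prev, P = 0.0, 0.0
denom = 1.0 + gamma * dt / 2.0
for _ in range(steps):
    P_next = ((2.0 - w0**2 * dt**2) * P
              - (1.0 - gamma * dt / 2.0) * P_prev
              + dt**2 * drive(E)) / denom
    P_prev, P = P, P_next

P_static = chi1 * E + chi2 * E**2 + chi3 * E**3   # analytic steady state
```

Because the source term depends only on the already-known field value, the update stays explicit: no iteration over the numerical space is needed, which is the point of the approach described above.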

  7. The interactions between soil-biosphere-atmosphere land surface model with a multi-energy balance (ISBA-MEB) option in SURFEXv8 - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand

    2017-02-01

    Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the user community. As a part of the trend in LSM development, there have been ongoing efforts to improve the representation of the land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis and respiration and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached and there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention to develop an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization has been developed called the ISBA multi-energy balance (MEB) in order to address these issues. ISBA-MEB consists of a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of the routines and physics parameterizations with the standard version of ISBA.
This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.

  8. Implicit and explicit self-esteem and their reciprocal relationship with symptoms of depression and social anxiety: a longitudinal study in adolescents.

    PubMed

    van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H

    2014-03-01

    A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiate between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first and second year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. The implications of these findings suggest support of an explicit self-esteem intervention to prevent increases in MDD and SAD symptomatology in non-clinical adolescents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Effects of electrostatic interactions on ligand dissociation kinetics

    NASA Astrophysics Data System (ADS)

    Erbaş, Aykut; de la Cruz, Monica Olvera; Marko, John F.

    2018-02-01

    We study unbinding of multivalent cationic ligands from oppositely charged polymeric binding sites sparsely grafted on a flat neutral substrate. Our molecular dynamics simulations are suggested by single-molecule studies of protein-DNA interactions. We consider univalent salt concentrations spanning roughly a 1000-fold range, together with various concentrations of excess ligands in solution. To reveal the ionic effects on unbinding kinetics of spontaneous and facilitated dissociation mechanisms, we treat electrostatic interactions both at a Debye-Hückel (DH) (or implicit ions, i.e., use of an electrostatic potential with a prescribed decay length) level and by the more precise approach of considering all ionic species explicitly in the simulations. We find that the DH approach systematically overestimates unbinding rates, relative to the calculations where all ion pairs are present explicitly in solution, although many aspects of the two types of calculation are qualitatively similar. For facilitated dissociation (FD) (acceleration of unbinding by free ligands in solution) explicit-ion simulations lead to unbinding at lower free-ligand concentrations. Our simulations predict a variety of FD regimes as a function of free-ligand and ion concentrations; a particularly interesting regime is at intermediate concentrations of ligands where nonelectrostatic binding strength controls FD. We conclude that explicit-ion electrostatic modeling is an essential component to quantitatively tackle problems in molecular ligand dissociation, including nucleic-acid-binding proteins.
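The Debye-Hückel (implicit-ion) treatment contrasted with explicit ions above amounts to replacing the bare Coulomb interaction by a screened potential with a salt-dependent decay length. A minimal sketch of that implicit-ion level, using standard physical constants (the salt concentrations and separations are illustrative, not values from the study):

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19    # elementary charge, C
K_B = 1.380649e-23            # Boltzmann constant, J/K
N_A = 6.02214076e23           # Avogadro constant, 1/mol
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
EPS_R = 78.5                  # relative permittivity of water near 25 C

def debye_length_m(c_molar, temp_k=298.15):
    """Debye screening length for a fully dissociated 1:1 salt:
    lambda_D = sqrt(eps * kT / (2 * n * e^2)), with n the number
    density (1/m^3) of each ion species."""
    n = c_molar * 1000.0 * N_A
    return math.sqrt(EPS_R * EPS0 * K_B * temp_k / (2.0 * n * E_CHARGE**2))

def dh_pair_energy_j(q1_e, q2_e, r_m, lambda_d):
    """Debye-Hueckel (screened Coulomb) interaction energy in joules
    between charges q1_e, q2_e (in units of e) at separation r_m."""
    bare = q1_e * q2_e * E_CHARGE**2 / (4.0 * math.pi * EPS_R * EPS0 * r_m)
    return bare * math.exp(-r_m / lambda_d)
```

At 0.1 M univalent salt this gives the familiar λ_D ≈ 0.96 nm (the 0.304 nm/√I rule of thumb); across the roughly 1000-fold concentration range studied above, the screening length varies about 30-fold, which is why implicit and explicit treatments can diverge quantitatively.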

  10. Factors Affecting Energy Absorption of a Plate during Shock Wave Impact Using a Damage Material Model

    DTIC Science & Technology

    2010-08-07

    Excerpt (table of contents and text fragments): Section 5.3.2, Abaqus VDLOAD subroutine; Chapter VI, interpretation of results and discussion; Appendix C, Python script to convert an Abaqus input file to an LS-DYNA input file. In all of the simulations, pressures were applied to the entire model, including the boundary, via the Abaqus/Explicit VDLOAD subroutine.

  11. Embedded-explicit emergent literacy intervention I: Background and description of approach.

    PubMed

    Justice, Laura M; Kaderavek, Joan N

    2004-07-01

    This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.

  12. Developing Spatially Explicit Habitat Models for Grassland Bird Conservation Planning in the Prairie Pothole Region of North Dakota

    Treesearch

    Neal D. Niemuth; Michael E. Estey; Charles R. Loesch

    2005-01-01

    Conservation planning for birds is increasingly focused on landscapes. However, little spatially explicit information is available to guide landscape-level conservation planning for many species of birds. We used georeferenced 1995 Breeding Bird Survey (BBS) data in conjunction with land-cover information to develop a spatially explicit habitat model predicting the...

  13. Explicit robust schemes for implementation of general principal value-based constitutive models

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.

  14. A spatially explicit model for estimating risks of pesticide exposure to bird populations

    EPA Science Inventory

    Pesticides are used widely in US agriculture and may affect non-target organisms, including birds. Some pesticide classes (e.g., acetylcholinesterase inhibitors) are known or suspected to cause direct mortality to birds, while others (e.g., synthetic pyrethroids, neonicotinoids) ...

  15. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  16. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu

    2015-12-28

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  17. Technical Note: Effect of explicit M and N-shell atomic transitions on a low-energy x-ray source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Peter G. F., E-mail: peter.watson@mail.mcgill.ca; Seuntjens, Jan

    Purpose: In EGSnrc, atomic transitions to and from the M and N-shells are treated in an average way by default. This approach is justified when the energy difference between explicit and average M and N-shell binding energies is less than 1 keV, and for most applications it can be considered negligible. However, for simulations of low energy x-ray sources on thin, high-Z targets, characteristic x-rays can make up a significant portion of the source spectra. As of release V4-2.4.0, EGSnrc has included an option to enable a more complete algorithm of all atomic transitions available in the EADL compilation. In this paper, the effect of M and N-shell averaging on the calculation of half-value layer (HVL) and relative depth dose (RDD) curve of a 50 kVp intraoperative x-ray tube with a thin gold target was investigated. Methods: A 50 kVp miniature x-ray source with a gold target (The INTRABEAM System, Carl Zeiss, Germany) was modeled with the EGSnrc user code cavity, both with and without M and N-shell averaging. From photon fluence spectra simulations, the source HVLs were determined analytically. The same source model was then used with egs-chamber to calculate RDD curves in water. Results: A 4% increase of HVL was reported when accounting for explicit M and N-shell transitions, and up to a 9% decrease in local relative dose for normalization at 3 mm depth in water. Conclusions: The EGSnrc default of using averaged M and N-shell binding energies has an observable effect on the HVL and RDD of a low energy x-ray source with a high-Z target. For accurate modeling of this class of devices, explicit atomic transitions should be included.
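Determining an HVL "analytically" from a simulated fluence spectrum amounts to finding the filter thickness at which the spectrum-weighted air kerma falls to half its unfiltered value. A hedged sketch of that calculation (the attenuation and energy-absorption coefficients below are illustrative placeholders, not tabulated data):

```python
import math

def hvl_mm(energies_keV, fluence, mu_filter_per_mm, mu_en_air):
    """Half-value layer: filter thickness t at which air kerma,
      K(t) = sum_i fluence_i * E_i * mu_en_air_i * exp(-mu_i * t),
    drops to half of K(0). K(t) is monotone decreasing in t, so
    the thickness is found by bisection on a bracketing interval."""
    def kerma(t):
        return sum(f * E * men * math.exp(-mu * t)
                   for f, E, men, mu in zip(fluence, energies_keV,
                                            mu_en_air, mu_filter_per_mm))
    target = 0.5 * kerma(0.0)
    lo, hi = 0.0, 100.0                     # bracket in mm
    for _ in range(60):                     # bisect to ~1e-16 mm width
        mid = 0.5 * (lo + hi)
        if kerma(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a monoenergetic beam this reduces to the textbook HVL = ln(2)/μ; for a polyenergetic spectrum the result lies between the monoenergetic HVLs of its components, which is why shifting spectral weight toward characteristic M and N-shell x-rays shifts the HVL, as reported above.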

  18. Lattice Supersymmetry and Order-Disorder Coexistence in the Tricritical Ising Model

    NASA Astrophysics Data System (ADS)

    O'Brien, Edward; Fendley, Paul

    2018-05-01

    We introduce and analyze a quantum spin or Majorana chain with a tricritical Ising point separating a critical phase from a gapped phase with order-disorder coexistence. We show that supersymmetry is not only an emergent property of the scaling limit but also manifests itself on the lattice. Namely, we find explicit lattice expressions for the supersymmetry generators and currents. Writing the Hamiltonian in terms of these generators allows us to find the ground states exactly at a frustration-free coupling. These confirm the coexistence between two (topologically) ordered ground states and a disordered one in the gapped phase. Deforming the model by including explicit chiral symmetry breaking, we find the phases persist up to an unusual chiral phase transition where the supersymmetry becomes exact even on the lattice.

  19. A logical foundation for representation of clinical data.

    PubMed Central

    Campbell, K E; Das, A K; Musen, M A

    1994-01-01

    OBJECTIVE: A general framework for representation of clinical data that provides a declarative semantics of terms and that allows developers to define explicitly the relationships among both terms and combinations of terms. DESIGN: Use of conceptual graphs as a standard representation of logic and of an existing standardized vocabulary, the Systematized Nomenclature of Medicine (SNOMED International), for lexical elements. Concepts such as time, anatomy, and uncertainty must be modeled explicitly in a way that allows relation of these foundational concepts to surface-level clinical descriptions in a uniform manner. RESULTS: The proposed framework was used to model a simple radiology report, which included temporal references. CONCLUSION: Formal logic provides a framework for formalizing the representation of medical concepts. Actual implementations will be required to evaluate the practicality of this approach. PMID:7719805

  20. Uncertainty in spatially explicit animal dispersal models

    USGS Publications Warehouse

    Mooij, Wolf M.; DeAngelis, Donald L.

    2003-01-01

    Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
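The simplest of the three levels, the event-based binomial model, shows how maximum likelihood yields both a point estimate and confidence limits from the same likelihood function. A minimal sketch (the survivor counts are invented for illustration, not the paper's field data), using likelihood-ratio confidence limits from the chi-square cutoff:

```python
import math

def binom_loglik(p, k, n):
    """Log-likelihood of survival probability p given k survivors of n
    dispersers (the constant binomial coefficient is omitted)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def mle_with_profile_ci(k, n, chi2_crit=3.841):
    """MLE p_hat = k/n plus 95% likelihood-ratio confidence limits:
    all p whose log-likelihood lies within chi2_(1, 0.95)/2 of the
    maximum, located by a simple grid scan."""
    p_hat = k / n
    cutoff = binom_loglik(p_hat, k, n) - chi2_crit / 2.0
    grid = [i / 10000.0 for i in range(1, 10000)]
    inside = [p for p in grid if binom_loglik(p, k, n) >= cutoff]
    return p_hat, min(inside), max(inside)

# Hypothetical data: 35 of 50 dispersers survive to arrival
p_hat, lo, hi = mle_with_profile_ci(35, 50)
```

The exponential and grid-walk levels replace this one-parameter likelihood with likelihoods over mortality/arrival rates and movement parameters, but the estimation-plus-confidence-limit machinery is the same, which is what makes the paper's unified comparison possible.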

  1. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models.
This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.« less
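
    The stochastic dynamics of a single collective solvent coordinate described above can be illustrated with a minimal overdamped Langevin propagation on a harmonic free-energy surface; this is a generic sketch, and all parameter values are hypothetical placeholders, not taken from the paper.

```python
import math
import random

def overdamped_langevin(x0, k, gamma, kT, dt, n_steps, seed=0):
    """Propagate a single collective solvent coordinate x on a harmonic
    free-energy surface F(x) = 0.5*k*x**2 using the overdamped Langevin
    equation, discretized with the Euler-Maruyama scheme:
        x(t+dt) = x(t) - (k*x/gamma)*dt + sqrt(2*kT*dt/gamma) * N(0, 1)
    All parameter values are illustrative, not fitted to any solvent."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * kT * dt / gamma)  # fluctuation-dissipation relation
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x += -(k * x / gamma) * dt + sigma * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Relax the coordinate from a displaced start toward thermal equilibrium
traj = overdamped_langevin(x0=2.0, k=1.0, gamma=5.0, kT=0.25, dt=0.01, n_steps=5000)
```

    A surface-hopping treatment would add switches between diabatic free-energy surfaces on top of this stochastic propagation; the sketch covers only the solvent-coordinate dynamics itself.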

  2. Improvements and validation of the erythropoiesis control model for bed rest simulation

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    The most significant improvement in the model is the explicit formulation of separate elements representing erythropoietin production and red cell production. Other modifications include bone marrow time-delays, capability to shift oxyhemoglobin affinity and an algorithm for entering experimental data as time-varying driving functions. An area of model development is suggested by applying the model to simulating onset, diagnosis and treatment of a hematologic disorder. Recommendations for further improvements in the model and suggestions for experimental application are also discussed. A detailed analysis of the hematologic response to bed rest including simulation of the recent Baylor Medical College bed rest studies is also presented.

  3. Representing functions/procedures and processes/structures for analysis of effects of failures on functions and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Leifker, Daniel B.

    1991-01-01

    Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be easily represented using current modeling techniques. An extension of a qualitative modeling system, known as functional modeling, which captures goal-oriented activities explicitly, is proposed, and it is shown how such models may be used to support intelligent automation and fault management.

  4. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.

  5. Design and application of a technologically explicit hybrid energy-economy policy model with micro and macro economic dynamics

    NASA Astrophysics Data System (ADS)

    Bataille, Christopher G. F.

    2005-11-01

    Are further energy efficiency gains, or more recently greenhouse gas reductions, expensive or cheap? Analysts provide conflicting advice to policy makers based on divergent modelling perspectives, a 'top-down/bottom-up debate' in which economists use equation-based models that equilibrate markets by maximizing consumer welfare, and technologists use technology simulation models that minimize the financial cost of providing energy services. This thesis summarizes a long-term research project to find a middle ground between these two positions that is more useful to policy makers. Starting with the individual components of a behaviourally realistic and technologically explicit simulation model (ISTUM, the Inter-Sectoral Technology Use Model), or "hybrid", the individual sectors of the economy are linked using a framework of micro and macro economic feedbacks. These feedbacks are taken from the economic theory that informs the computable general equilibrium (CGE) family of models. Speaking in the languages of both economists and engineers, the resulting "physical" equilibrium model of Canada (CIMS, the Canadian Integrated Modeling System) equilibrates energy and end-product markets, including imports and exports, for seven regions and 15 economic sectors, including primary industry, manufacturing, transportation, commerce, residences, governmental infrastructure and the energy supply sectors. Several different policy experiments demonstrate the value-added of the model and how its results compare to top-down and bottom-up practice. In general, the results show that technical adjustments make up about half the response to simulated energy policy, and macroeconomic demand adjustments the other half. Induced technical adjustments predominate with minor policies, while the importance of macroeconomic demand adjustment increases with the strength of the policy.
Results are also shown for an experiment to derive estimates of future elasticity of substitution (ESUB) and autonomous energy efficiency indices (AEEI) from the model, parameters that could be used in long-run computable general equilibrium (CGE) analysis. The thesis concludes with a summary of the strengths and weaknesses of the new model as a policy tool, a work plan for its further improvement, and a discussion of the general potential for technologically explicit general equilibrium modelling.

  6. Phosphorus in global agricultural soils: spatially explicit modelling of soil phosphorus and crop uptake for 1900 to 2010

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Beusen, A.; Bouwman, L.; Apeldoorn, D. V.; Yu, C.

    2016-12-01

    Phosphorus (P) plays a vital role in global crop production and food security. To explore the global P status of soils, in this study we developed a spatially explicit version of a two-pool dynamic soil P model at 0.5° resolution. With this model, we analyzed the historical changes of soil P inputs (including manure and inorganic P fertilizer) from 1900 to 2010, reproduced the historical crop P uptake, calculated the phosphorus use efficiency (PUE) and conducted a comprehensive inventory of soil P pools and P budgets (deficit and surplus) in global soils under croplands. Our results suggest that the spatially explicit model is capable of simulating the long-term soil P budget changes and crop uptake, with model simulations closely matching historical P uptake for cropland in all countries. The global P inputs from fertilizers and manure increased from 2 Tg P in 1900 to 23 Tg P in 2010, with great variation across different regions and countries of the world. The magnitude of crop uptake has also changed rapidly over the 20th century: according to our model, crop P uptake per hectare in Western Europe increased by more than three times, while the total soil P stock per hectare increased by close to 37% due to long-term P surplus application, with a slight decrease in recent years. Croplands in China (total P per hectare: slight decline during 1900-1970, +34% since 1970) and India (total P per hectare: gradual increase of 14% since 1900, 6% since 1970) are currently in the phase of accumulation. The total soil P content per hectare in Sub-Saharan Africa has slightly decreased since 1900. Our model is a promising tool to analyze the changes in the soil P status and the capacity of soils to supply P to crops, including future projections of required nutrient inputs.
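
    A two-pool soil P budget of the kind described can be sketched as a yearly bookkeeping between a plant-available (labile) pool and a stable pool with first-order exchange; the pool names, rate constants, and input/uptake figures below are assumptions for illustration, not the authors' calibration.

```python
def step_soil_p(labile, stable, inputs, uptake, k_ls=0.1, k_sl=0.01):
    """Advance a two-pool soil phosphorus budget by one year (kg P/ha).
    inputs : fertilizer + manure P entering the labile pool
    uptake : crop P removal from the labile pool
    k_ls   : first-order transfer rate, labile -> stable
    k_sl   : first-order back-transfer rate, stable -> labile"""
    net_transfer = k_ls * labile - k_sl * stable
    return (labile + inputs - uptake - net_transfer,
            stable + net_transfer)

# Run one grid cell for a decade of constant surplus (inputs > uptake)
labile, stable = 50.0, 500.0
for year in range(10):
    labile, stable = step_soil_p(labile, stable, inputs=20.0, uptake=15.0)
```

    By construction the total P stock changes only by the cumulative surplus (inputs minus uptake), which is the mass-balance property a spatially explicit version applies cell by cell.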

  7. Effective Reading and Writing Instruction: A Focus on Modeling

    ERIC Educational Resources Information Center

    Regan, Kelley; Berkeley, Sheri

    2012-01-01

    When providing effective reading and writing instruction, teachers need to provide explicit modeling. Modeling is particularly important when teaching students to use cognitive learning strategies. Examples of how teachers can provide specific, explicit, and flexible instructional modeling are presented in the context of two evidence-based…

  8. Modeling Wood Encroachment in Abandoned Grasslands in the Eifel National Park – Model Description and Testing

    PubMed Central

    Hudjetz, Silvana; Lennartz, Gottfried; Krämer, Klara; Roß-Nickoll, Martina; Gergs, André; Preuss, Thomas G.

    2014-01-01

    The degradation of natural and semi-natural landscapes has become a matter of global concern. In Germany, semi-natural grasslands belong to the most species-rich habitat types but have suffered heavily from changes in land use. After abandonment, the course of succession at a specific site is often difficult to predict because many processes interact. In order to support decision making when managing semi-natural grasslands in the Eifel National Park, we built the WoodS-Model (Woodland Succession Model). A multimodeling approach was used to integrate vegetation dynamics in both the herbaceous and shrub/tree layer. The cover of grasses and herbs was simulated in a compartment model, whereas bushes and trees were modelled in an individual-based manner. Both models worked and interacted in a spatially explicit, raster-based landscape. We present here the model description, parameterization and testing. We show highly detailed projections of the succession of a semi-natural grassland, including the influence of initial vegetation composition, neighborhood interactions and ungulate browsing. We carefully weighed the individual processes against each other, assessing their relevance for landscape development under different scenarios while explicitly considering specific site conditions. Model evaluation revealed that the model is able to emulate successional patterns as observed in the field and to produce plausible results for different population densities of red deer. Important neighborhood interactions, such as seed dispersal, the protection of seedlings from browsing ungulates by thorny bushes, and the inhibition of wood encroachment by the herbaceous layer, have been successfully reproduced. Therefore, not only a detailed model but also detailed initialization turned out to be important for spatially explicit projections of a given site. The advantage of the WoodS-Model is that it integrates these many mutually interacting processes of succession. PMID:25494057

  9. Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere

    NASA Astrophysics Data System (ADS)

    Becker, E.

    2016-12-01

    At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution in order to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, to apply a high spatial resolution, and to represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).

  10. From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction

    NASA Astrophysics Data System (ADS)

    de Tilière, Béatrice

    2013-04-01

    Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version 𝒢 of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition only depends on the respective fundamental domains, and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain 𝒢₁. Our main result consists in explicitly constructing CRSFs of 𝒢₁, counted by the dimer characteristic polynomial, from CRSFs of G₁, where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002); thus proving a relation on the level of configurations between two well-known 2-dimensional critical models.

  11. An explicit microphysics thunderstorm model.

    Treesearch

    R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor

    2005-01-01

    The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...

  12. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    PubMed

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or incremental cost-utility ratio (ICUR) was reported, the intervention was a multi-component program designed to manage or prevent a cardiovascular disease condition, and it addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third-party payer (n=3), or were not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons ranged from two years (n=1) and 10 years (n=1) to lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% were at unacceptably high cost for the outcomes. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
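
    The ICER reported by such analyses is simply the cost difference divided by the effect difference between intervention and comparator; a toy calculation against a $50,000 acceptability threshold (all figures invented) looks like this.

```python
def icer(cost_new, cost_base, effect_new, effect_base):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g. per QALY gained). Returns the string 'dominant' when the
    new program is both cheaper and more effective."""
    d_cost = cost_new - cost_base
    d_effect = effect_new - effect_base
    if d_cost <= 0 and d_effect > 0:
        return "dominant"
    return d_cost / d_effect

# Hypothetical disease-management program: $12,000 more per patient and
# 0.4 QALYs gained -> roughly $30,000 per QALY, under a $50,000 threshold
ratio = icer(cost_new=52_000, cost_base=40_000, effect_new=5.4, effect_base=5.0)
```

    The "dominant" case in the abstract corresponds to the branch where the program costs less and delivers more effect, so no ratio is meaningful.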

  13. Are baboons learning "orthographic" representations? Probably not

    PubMed Central

    Bröker, Franziska; Ramscar, Michael; Baayen, Harald

    2017-01-01

    The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved to be amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior—including key lexical similarity effects and the ups and downs in accuracy as learning unfolds—with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans initially involves learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of the low-level features encoding semantic contrasts. PMID:28859134

  14. Are baboons learning "orthographic" representations? Probably not.

    PubMed

    Linke, Maja; Bröker, Franziska; Ramscar, Michael; Baayen, Harald

    2017-01-01

    The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved to be amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior—including key lexical similarity effects and the ups and downs in accuracy as learning unfolds—with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans initially involves learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of the low-level features encoding semantic contrasts.

  15. An image-based skeletal dosimetry model for the ICRP reference newborn—internal electron sources

    NASA Astrophysics Data System (ADS)

    Pafundi, Deanna; Rajon, Didier; Jokisch, Derek; Lee, Choonsik; Bolch, Wesley

    2010-04-01

    In this study, a comprehensive electron dosimetry model of newborn skeletal tissues is presented. The model is constructed using the University of Florida newborn hybrid phantom of Lee et al (2007 Phys. Med. Biol. 52 3309-33), the newborn skeletal tissue model of Pafundi et al (2009 Phys. Med. Biol. 54 4497-531) and the EGSnrc-based Paired Image Radiation Transport code of Shah et al (2005 J. Nucl. Med. 46 344-53). Target tissues include the active bone marrow (surrogate tissue for hematopoietic stem cells), shallow marrow (surrogate tissue for osteoprogenitor cells) and unossified cartilage (surrogate tissue for chondrocytes). Monoenergetic electron emissions are considered over the energy range 1 keV to 10 MeV for the following source tissues: active marrow, trabecular bone (surfaces and volumes), cortical bone (surfaces and volumes) and cartilage. Transport results are reported as specific absorbed fractions according to the MIRD schema and are given as skeletal-averaged values in the paper with bone-specific values reported in both tabular and graphic format as electronic annexes (supplementary data). The method utilized in this work uniquely includes (1) explicit accounting for the finite size and shape of newborn ossification centers (spongiosa regions), (2) explicit accounting for active and shallow marrow dose from electron emissions in cortical bone as well as sites of unossified cartilage, (3) proper accounting of the distribution of trabecular and cortical volumes and surfaces in the newborn skeleton when considering mineral bone sources and (4) explicit consideration of the marrow cellularity changes for active marrow self-irradiation as applicable to radionuclide therapy of diseased marrow in the newborn child.

  16. Teaching Reading Sourcebook, Second Edition

    ERIC Educational Resources Information Center

    Honig, Bill; Diamond, Linda; Gutlohn, Linda

    2008-01-01

    The "Teaching Reading Sourcebook, Second Edition" is a comprehensive reference about reading instruction. Organized according to the elements of explicit instruction (what? why? when? and how?), the "Sourcebook" includes both a research-informed knowledge base and practical sample lesson models. It teaches the key elements of an effective reading…

  17. A spatially and temporally explicit, individual-based, life-history and productivity modeling approach for aquatic species

    EPA Science Inventory

    Realized life history expression and productivity in aquatic species, and salmonid fishes in particular, is the result of multiple interacting factors including genetics, habitat, growth potential and condition, and the thermal regime individuals experience, both at critical stag...

  18. Graph-based analysis of connectivity in spatially-explicit population models: HexSim and the Connectivity Analysis Toolkit

    EPA Science Inventory

    Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...

  19. The impact of ARM on climate modeling

    DOE PAGES

    Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...

    2016-07-15

    Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).

  20. Modelling zwitterions in solution: 3-fluoro-γ-aminobutyric acid (3F-GABA).

    PubMed

    Cao, Jie; Bjornsson, Ragnar; Bühl, Michael; Thiel, Walter; van Mourik, Tanja

    2012-01-02

    The conformations and relative stabilities of folded and extended 3-fluoro-γ-aminobutyric acid (3F-GABA) conformers were studied using explicit solvation models. Geometry optimisations in the gas phase with one or two explicit water molecules favour folded and neutral structures containing intramolecular NH···O-C hydrogen bonds. With three or five explicit water molecules zwitterionic minima are obtained, with folded structures being preferred over extended conformers. The stability of folded versus extended zwitterionic conformers increases on going from a PCM continuum solvation model to the microsolvated complexes, though extended structures become less disfavoured with the inclusion of more water molecules. Full explicit solvation was studied with a hybrid quantum-mechanical/molecular-mechanical (QM/MM) scheme and molecular dynamics simulations, including more than 6000 TIP3P water molecules. According to free energies obtained from thermodynamic integration at the PM3/MM level and corrected for B3LYP/MM total energies, the fully extended conformer is more stable than folded ones by about 4.5 kJ mol⁻¹. B3LYP-computed ³J(F,H) NMR spin-spin coupling constants, averaged over PM3/MM-MD trajectories, agree best with experiment for this fully extended form, in accordance with the original NMR analysis. The seeming discrepancy between static PCM calculations and experiment noted previously is now resolved. That the inexpensive semiempirical PM3 method performs so well for this archetypical zwitterion is encouraging for further QM/MM studies of biomolecular systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. FATE-HD: A spatially and temporally explicit integrated model for predicting vegetation structure and diversity at regional scale

    PubMed Central

    Boulangeat, Isabelle; Georges, Damien; Thuiller, Wilfried

    2014-01-01

    During the last decade, despite strenuous efforts to develop new models and compare different approaches, few conclusions have been drawn on their ability to provide robust biodiversity projections in an environmental change context. The recurring suggestions are that models should explicitly (i) include spatiotemporal dynamics; (ii) consider multiple species in interactions; and (iii) account for the processes shaping biodiversity distribution. This paper presents a biodiversity model (FATE-HD) that meets this challenge at regional scale by combining phenomenological and process-based approaches and using well-defined plant functional groups. FATE-HD has been tested and validated in a French National Park, demonstrating its ability to simulate vegetation dynamics, structure and diversity in response to disturbances and climate change. The analysis demonstrated the importance of considering biotic interactions, spatio-temporal dynamics, and disturbances in addition to abiotic drivers to simulate vegetation dynamics. The distribution of pioneer trees was particularly improved, as were all undergrowth functional groups. PMID:24214499

  2. CONSTRUCTING, PERTURBATION ANALYSIS AND TESTING OF A MULTI-HABITAT PERIODIC MATRIX POPULATION MODEL

    EPA Science Inventory

    We present a matrix model that explicitly incorporates spatial habitat structure and seasonality and discuss preliminary results from a landscape level experimental test. Ecological risk to populations is often modeled without explicit treatment of spatially or temporally distri...

  3. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, Leonard F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
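
    The stability contrast described above is easy to demonstrate on a toy 1-D diffusion problem: the explicit (forward-in-time, centered-in-space) update is stable only when r = D·Δt/Δx² ≤ 0.5, while a backward-Euler step, solved here with the Thomas tridiagonal algorithm, tolerates any r. This is a generic sketch of the two schemes, not MOC3D's actual discretization.

```python
def explicit_step(c, r):
    """Forward-in-time, centered-in-space diffusion step (stable only if r <= 0.5)."""
    out = c[:]
    for i in range(1, len(c) - 1):
        out[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    return out

def implicit_step(c, r):
    """Backward-Euler diffusion step: solve the tridiagonal system
    -r*x[i-1] + (1+2r)*x[i] - r*x[i+1] = c[i] with the Thomas algorithm.
    Unconditionally stable; fixed-value (Dirichlet) boundaries."""
    n = len(c)
    sub = [0.0] + [-r] * (n - 2) + [0.0]           # sub-diagonal
    diag = [1.0] + [1.0 + 2.0 * r] * (n - 2) + [1.0]
    sup = [0.0] + [-r] * (n - 2) + [0.0]           # super-diagonal
    rhs = c[:]
    for i in range(1, n):                          # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n                                  # back substitution
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

# Same over-large step r = 1.0: the explicit update blows up,
# the implicit one decays smoothly.
exp_c = [0.0] * 21
exp_c[10] = 1.0
imp_c = exp_c[:]
for _ in range(10):
    exp_c = explicit_step(exp_c, 1.0)
    imp_c = implicit_step(imp_c, 1.0)
```

    Removing such step-size ceilings is exactly why the implicit formulation allows much larger transport time increments in dispersion-dominated problems.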

  4. Explicit Oral Narrative Intervention for Students with Williams Syndrome

    PubMed Central

    Diez-Itza, Eliseo; Martínez, Verónica; Pérez, Vanesa; Fernández-Urquiza, Maite

    2018-01-01

    Narrative skills play a crucial role in organizing experience, facilitating social interaction and building academic discourse and literacy. They are at the interface of cognitive, social, and linguistic abilities related to school engagement. Despite their relative strengths in social and grammatical skills, students with Williams syndrome (WS) do not show parallel cognitive and pragmatic performance in narrative generation tasks. The aim of the present study was to assess retelling of a TV cartoon tale and the effect of an individualized explicit instruction of the narrative structure. Participants included eight students with WS who attended different special education levels. Narratives were elicited in two sessions (pre and post intervention), and were transcribed, coded and analyzed using the tools of the CHILDES Project. Narratives were coded for productivity and complexity at the microstructure and macrostructure levels. Microstructure productivity (i.e., length of narratives) included number of utterances, clauses, and tokens. Microstructure complexity included mean length of utterances, lexical diversity and use of discourse markers as cohesive devices. Narrative macrostructure was assessed for textual coherence through the Pragmatic Evaluation Protocol for Speech Corpora (PREP-CORP). Macrostructure productivity and complexity included, respectively, the recall and sequential order of scenarios, episodes, events and characters. A total of four intervention sessions, lasting approximately 20 min, were delivered individually once a week. This brief intervention addressed explicit instruction about the narrative structure and the use of specific discourse markers to improve cohesion of story retellings. Intervention strategies included verbal scaffolding and modeling, conversational context for retelling the story and visual support with pictures printed from the cartoon. 
Results showed significant changes in WS students’ retelling of the story, both at macro- and microstructure levels, when assessed following a 2-week interval. Outcomes were better in microstructure than in macrostructure, where sequential order (i.e., complexity) did not show significant improvement. These findings are consistent with previous research supporting the use of explicit oral narrative intervention with participants who are at risk of school failure due to communication impairments. Discussion focuses on how assessment and explicit instruction of narrative skills might contribute to effective intervention programs enhancing school engagement in WS students. PMID:29379455

  5. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with that of the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used to assess model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
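The implicit GA equation that the explicit formulas above approximate can be solved by simple fixed-point iteration; the sketch below (with illustrative parameter values, not the study's data) also shows two of the statistical indicators named in the abstract:

```python
import math

def ga_cumulative_infiltration(K, psi_dtheta, t, tol=1e-10, max_iter=200):
    """Solve the implicit Green-Ampt equation
        K*t = F - psi_dtheta * ln(1 + F/psi_dtheta)
    for cumulative infiltration F by fixed-point iteration.
    K: hydraulic conductivity; psi_dtheta: suction head times
    moisture deficit (both illustrative units here)."""
    F = K * t + psi_dtheta  # simple initial guess
    for _ in range(max_iter):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE/SST (1.0 = perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def percent_bias(obs, sim):
    """Percent bias; positive values mean the model under-predicts."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

An explicit approximation would replace the iteration with a closed-form estimate of F; the indicators above are then applied to the explicit-vs-implicit differences.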

  6. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation contribute to LISA's computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
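The penalty contact with Coulomb stick-slip friction described above can be sketched in scalar form; the penalty stiffness and friction coefficient below are placeholders, and the real LISA implementation works on full elastodynamic fields:

```python
def contact_forces(gap, tangential_trial, k_penalty, mu):
    """Penalty-method contact with Coulomb stick-slip friction (toy
    scalar sketch). gap < 0 means the crack faces interpenetrate by
    |gap|; tangential_trial is the trial shear force."""
    if gap >= 0.0:
        return 0.0, 0.0          # faces separated: no contact force
    f_n = -k_penalty * gap       # normal penalty force resists penetration
    if abs(tangential_trial) <= mu * f_n:
        f_t = tangential_trial   # stick: trial shear force is admissible
    else:
        # slip: shear force capped at the Coulomb limit mu * f_n
        f_t = mu * f_n * (1.0 if tangential_trial > 0 else -1.0)
    return f_n, f_t
```

The choice of `k_penalty` mirrors the contact-stiffness guideline the paper discusses: too small allows visible interpenetration, too large degrades the stable explicit time step.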

  7. Direct versus Indirect Explicit Methods of Enhancing EFL Students' English Grammatical Competence: A Concept Checking-Based Consciousness-Raising Tasks Model

    ERIC Educational Resources Information Center

    Dang, Trang Thi Doan; Nguyen, Huong Thu

    2013-01-01

    Two approaches to grammar instruction are often discussed in the ESL literature: direct explicit grammar instruction (DEGI) (deduction) and indirect explicit grammar instruction (IEGI) (induction). This study aims to explore the effects of indirect explicit grammar instruction on EFL learners' mastery of English tenses. Ninety-four…

  8. A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting

    NASA Astrophysics Data System (ADS)

    Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.

    2016-03-01

    Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. The complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field, can be systematically investigated with the model. The meshless solution framework, with the simplest physical models implemented, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.

  9. Memory and cognitive control in an integrated theory of language processing.

    PubMed

    Slevc, L Robert; Novick, Jared M

    2013-08-01

    Pickering & Garrod's (P&G's) integrated model of production and comprehension includes no explicit role for nonlinguistic cognitive processes. Yet, how domain-general cognitive functions contribute to language processing has become clearer with well-specified theories and supporting data. We therefore believe that their account can benefit by incorporating functions like working memory and cognitive control into a unified model of language processing.

  10. Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050

    DOE PAGES

    McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.; ...

    2015-02-03

    Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology, with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
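The core allocation step implied by the weighted surface can be sketched very simply: county-level projected change is distributed across grid cells in proportion to their weights. This toy version ignores the locally adaptive weighting the paper actually uses:

```python
def allocate_growth(cell_weights, county_growth):
    """Distribute a county-level projected population change across grid
    cells in proportion to a suitability-weight surface (illustrative
    dasymetric allocation; the real model's weights vary locally)."""
    total = sum(cell_weights)
    return [county_growth * w / total for w in cell_weights]
```

By construction, the allocated cell values always sum back to the county control total, which is how benchmark consistency with the Census projection is preserved.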

  11. Jet Noise Physics and Modeling Using First-principles Simulations

    NASA Technical Reports Server (NTRS)

    Freund, Jonathan B.

    2003-01-01

    An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice; (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source; and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.

  12. Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.

    Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology, with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.

  13. A compressible near-wall turbulence model for boundary layer calculations

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Zhang, H. S.; Lai, Y. G.

    1992-01-01

    A compressible near-wall two-equation model is derived by relaxing the assumption of dynamical field similarity between compressible and incompressible flows. This requires justifications for extending the incompressible models to compressible flows and the formulation of the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilational part, which is directly affected by these changes. This approach isolates terms with explicit dependence on compressibility so that they can be modeled accordingly. An equation that governs the transport of the solenoidal dissipation rate with additional terms that are explicitly dependent on the compressibility effects is derived similarly. A model with an explicit dependence on the turbulent Mach number is proposed for the dilational dissipation rate. Thus formulated, all near-wall incompressible flow models can be expressed in terms of the solenoidal dissipation rate and straightforwardly extended to compressible flows. Therefore, the incompressible equations are recovered correctly in the limit of constant density. The two-equation model and the assumption of constant turbulent Prandtl number are used to calculate compressible boundary layers on a flat plate with different wall thermal boundary conditions and free-stream Mach numbers. The calculated results, including the near-wall distributions of turbulence statistics and their limiting behavior, are in good agreement with measurements. In particular, the near-wall asymptotic properties are found to be consistent with incompressible behavior, suggesting that turbulent flows in the viscous sublayer are not much affected by compressibility effects.
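The dissipation split described above is commonly written as follows; the dilatational closure shown is a Sarkar-type illustrative form with an explicit turbulent-Mach-number dependence, not necessarily the exact model proposed in this paper:

```latex
\varepsilon = \varepsilon_s + \varepsilon_d, \qquad
\varepsilon_d = \alpha_1\, M_t^{2}\, \varepsilon_s, \qquad
M_t = \frac{\sqrt{2k}}{a},
```

where $\varepsilon_s$ is the solenoidal dissipation rate (insensitive to compressibility indicators), $\varepsilon_d$ the dilatational part, $k$ the turbulent kinetic energy, $a$ the local speed of sound, and $\alpha_1$ an $O(1)$ model constant.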

  14. A mechanistic soil biogeochemistry model with explicit representation of microbial and macrofaunal activities and nutrient cycles

    NASA Astrophysics Data System (ADS)

    Fatichi, Simone; Manzoni, Stefano; Or, Dani; Paschalis, Athanasios

    2016-04-01

    The potential of a given ecosystem to store and release carbon is inherently linked to soil biogeochemical processes. These processes are deeply connected to the water, energy, and vegetation dynamics above and belowground. Recently, it has been advocated that a mechanistic representation of soil biogeochemistry requires: (i) partitioning of soil organic carbon (SOC) pools according to their functional role; (ii) an explicit representation of microbial dynamics; (iii) coupling of carbon and nutrient cycles. While some of these components have been introduced in specialized models, they have been rarely implemented in terrestrial biosphere models and tested in real cases. In this study, we combine a new soil biogeochemistry model with an existing model of land-surface hydrology and vegetation dynamics (T&C). Specifically, the soil biogeochemistry component explicitly separates different litter pools and distinguishes SOC in particulate, dissolved and mineral associated fractions. Extracellular enzymes and microbial pools are explicitly represented, differentiating the functional roles of bacteria, saprotrophic and mycorrhizal fungi. Microbial activity depends on temperature, soil moisture and litter or SOC stoichiometry. The activity of macrofauna is also modeled. Nutrient dynamics include the cycles of nitrogen, phosphorus and potassium. The model accounts for feedbacks between nutrient limitations and plant growth as well as for plant stoichiometric flexibility. In turn, litter input is a function of the simulated vegetation dynamics. Root exudation and export to mycorrhiza are computed based on a nutrient uptake cost function. The combined model is tested to reproduce respiration dynamics and the nitrogen cycle in a few sites where data were available to test the plausibility of results across a range of different metrics.
For instance, in a Swiss grassland ecosystem, fine root, bacterial, fungal and macrofaunal respiration account for 40%, 23%, 33% and 4% of total belowground respiration, respectively. Root exudation and carbon export to mycorrhiza represent about 7% of plant Net Primary Production. The model allows exploring the temporal dynamics of respiration fluxes from the different ecosystem components and designing virtual experiments on the controls exerted by environmental variables and/or soil microbes and mycorrhizal associations on soil carbon storage, plant growth, and nutrient leaching.

  15. DEFINING RECOVERY GOALS AND STRATEGIES FOR ENDANGERED SPECIES USING SPATIALLY-EXPLICIT POPULATION MODELS

    EPA Science Inventory

    We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...

  16. Corona graphs as a model of small-world networks

    NASA Astrophysics Data System (ADS)

    Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi

    2015-11-01

    We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as the Kirchhoff index. Furthermore, we study the spectra of the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
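The order and size of such graphs obey simple recursions; the sketch below assumes each step takes the corona product of the current graph with the complete graph K_q, seeded with K_q itself (the paper's exact seed and recursion may differ in details):

```python
def corona_growth(q, steps):
    """Order (n) and size (e) of recursive corona graphs built as
    G_t = G_{t-1} corona K_q, seeded with G_0 = K_q.
    In a corona product with K_q, every existing vertex gains its own
    copy of K_q and is joined to all q vertices of that copy."""
    n = q                      # vertices of the seed K_q
    e = q * (q - 1) // 2       # edges of the seed K_q
    history = [(n, e)]
    for _ in range(steps):
        # each of the n vertices contributes a K_q copy (q*(q-1)/2 edges)
        # plus q joining edges
        e = e + n * (q * (q - 1) // 2 + q)
        n = n * (1 + q)
        history.append((n, e))
    return history
```

For q = 1 each iteration simply attaches a pendant vertex to every node, so every graph in the sequence is a tree (size = order - 1), which is a quick sanity check on the recursion.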

  17. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.
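A single-variable model of the kind mentioned, cost = c * D**a with D the aperture diameter, can be fitted by ordinary least squares in log-log space; the data and exponent below are synthetic, purely to illustrate the fitting step:

```python
import math

def fit_power_law(diameters, costs):
    """Least-squares fit of cost = c * D**a in log-log space, the usual
    form of single-variable telescope cost models (illustrative; the
    coefficients here are not the paper's)."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the log-log regression is the cost-scaling exponent a
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - a * mx)
    return c, a
```

A multivariable version would add further log-transformed regressors (e.g. diffraction-limited wavelength) to the same linear-least-squares machinery.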

  18. Estimating forest canopy fuel parameters using LIDAR data.

    Treesearch

    Hans-Erik Andersen; Robert J. McGaughey; Stephen E. Reutebuch

    2005-01-01

    Fire researchers and resource managers are dependent upon accurate, spatially-explicit forest structure information to support the application of forest fire behavior models. In particular, reliable estimates of several critical forest canopy structure metrics, including canopy bulk density, canopy height, canopy fuel weight, and canopy base height, are required to...

  19. An open and extensible framework for spatially explicit land use change modelling in R: the lulccR package (0.1.0)

    NASA Astrophysics Data System (ADS)

    Moulds, S.; Buytaert, W.; Mijic, A.

    2015-04-01

    Land use change has important consequences for biodiversity and the sustainability of ecosystem services, as well as for global environmental change. Spatially explicit land use change models improve our understanding of the processes driving change and make predictions about the quantity and location of future and past change. Here we present the lulccR package, an object-oriented framework for land use change modelling written in the R programming language. The contribution of the work is to resolve the following limitations associated with the current land use change modelling paradigm: (1) the source code for model implementations is frequently unavailable, severely compromising the reproducibility of scientific results and making it impossible for members of the community to improve or adapt models for their own purposes; (2) ensemble experiments to capture model structural uncertainty are difficult because of fundamental differences between implementations of different models; (3) different aspects of the modelling procedure must be performed in different environments because existing applications usually only perform the spatial allocation of change. The package includes a stochastic ordered allocation procedure as well as an implementation of the widely used CLUE-S algorithm. We demonstrate its functionality by simulating land use change at the Plum Island Ecosystems site, using a dataset included with the package. It is envisaged that lulccR will enable future model development and comparison within an open environment.

  20. Modelling the influence of tides on ice-shelf melt rates in the Amundsen Sea, Antarctica.

    NASA Astrophysics Data System (ADS)

    Jourdain, Nicolas C.; Molines, Jean-Marc; Le Sommer, Julien; Mathiot, Pierre; Chanut, Jérome; Madec, Gurvan

    2017-04-01

    Variations in melt beneath ice-shelves may trigger ice-sheet instabilities, in particular in West Antarctica. Therefore, improving the understanding and modelling of ice-shelf basal melt rates has been a major focus over the last decades. In this presentation, we provide further insight into the role of tides in basal melt rates, and we assess several methods to account for tides in models that do not include an explicit representation of tides. First, we use an explicit representation of tides in a regional configuration of the NEMO-3.6 model deployed over the Amundsen Sea. We show that most of the tidal influence on ice-shelf melt is explained by four tidal constituents. Tides enhance melt by more than 30% in some cavities like Abbot, Cosgrove and Dotson, but by less than 10% in others like Thwaites and Pine Island. Over the entire Amundsen Sea sector, tides enhance melt by 92 Gt/yr, which is mostly induced by tidal velocities along ice drafts (+148 Gt/yr), partly compensated by tide-induced changes in thermal forcing (-31 Gt/yr) and co-variations between tidal velocities and thermal forcing (-26 Gt/yr). In the second part of this presentation, we show that using uniform tidal velocities to account for tidal effects in ocean models with no explicit tides produces large biases in melt rates. By contrast, prescribing non-uniform tidal velocities allows an accurate representation of the dynamical effects of tides on melt rates.
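One common way tides enter melt parameterizations is through an effective friction speed under the ice draft; the toy sketch below illustrates that idea only, and is not NEMO's actual three-equation formulation (the constant `gamma` is an arbitrary placeholder):

```python
import math

def melt_rate_with_tides(u_mean, u_tide_rms, thermal_forcing, gamma=5.9e-4):
    """Toy velocity-dependent melt parameterization: melt scales with an
    effective sub-shelf speed times the thermal forcing (ocean
    temperature above the local freezing point).  Tides enter through
    sqrt(u_mean^2 + u_tide_rms^2), so a spatially varying tidal
    velocity field yields spatially varying melt enhancement."""
    u_eff = math.sqrt(u_mean ** 2 + u_tide_rms ** 2)
    return gamma * u_eff * thermal_forcing  # arbitrary units
```

Prescribing a single uniform `u_tide_rms` everywhere, instead of the modelled non-uniform field, is exactly the shortcut the abstract reports to produce large melt-rate biases.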

  1. Low Cloud Feedback to Surface Warming in the World's First Global Climate Model with Explicit Embedded Boundary Layer Turbulence

    NASA Astrophysics Data System (ADS)

    Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Wyant, M. C.; Khairoutdinov, M.; Singh, B.

    2017-12-01

    Biases and parameterization formulation uncertainties in the representation of boundary layer clouds remain a leading source of possible systematic error in climate projections. Here we show the first results of cloud feedback to +4K SST warming in a new experimental climate model, the "Ultra-Parameterized (UP)" Community Atmosphere Model, UPCAM. We have developed UPCAM as an unusually high-resolution implementation of cloud superparameterization (SP), in which a global set of cloud-resolving arrays is embedded in a host global climate model. In UP, the cloud-resolving scale includes sufficient internal resolution to explicitly generate the turbulent eddies that form marine stratocumulus and trade cumulus clouds. This is computationally costly but complements other available approaches for studying low clouds and their climate interaction, by avoiding parameterization of the relevant scales. In a recent publication we have shown that UP, while not without its own complexity trade-offs, can produce encouraging improvements in low cloud climatology in multi-month simulations of the present climate and is a promising target for exascale computing (Parishani et al. 2017). Here we show results of its low cloud feedback to warming in multi-year simulations for the first time. References: Parishani, H., M. S. Pritchard, C. S. Bretherton, M. C. Wyant, and M. Khairoutdinov (2017), Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence, J. Adv. Model. Earth Syst., 9, doi:10.1002/2017MS000968.

  2. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in foreground and music in background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
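HMM-based segmentation ultimately reduces to decoding the most likely state sequence over audio frames; a minimal log-domain Viterbi sketch follows (the states music/speech/silence and all probabilities are placeholders, and real systems compute the frame log-likelihoods from acoustic features):

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely HMM state path (log domain).
    obs_loglik[t][s]: log-likelihood of audio frame t under state s
    (e.g. states 0/1/2 = music/speech/silence as in the abstract)."""
    n_states = len(log_init)
    T = len(obs_loglik)
    delta = [log_init[s] + obs_loglik[0][s] for s in range(n_states)]
    back = []
    for t in range(1, T):
        new_delta, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda p: delta[p] + log_trans[p][s])
            ptr.append(best_prev)
            new_delta.append(delta[best_prev] + log_trans[best_prev][s]
                             + obs_loglik[t][s])
        delta = new_delta
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Sticky self-transition probabilities act as the smoothing that keeps segments contiguous, which is why the decoded path changes state only when the acoustic evidence outweighs the transition penalty.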

  3. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
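The idea of specifying a model as a textual equation rather than a pre-compiled component can be sketched in a few lines; this toy parser-integrator is purely illustrative and far simpler than Brian2's actual parsing and code generation (and `eval` should never be used on untrusted text):

```python
import math

def simulate(eq_text, params, v0, dt, steps):
    """Integrate a model given as the text 'dv/dt = <expression>'
    using forward Euler.  Toy equation-oriented specification only."""
    lhs, rhs = eq_text.split("=", 1)
    assert lhs.strip() == "dv/dt"
    code = compile(rhs.strip(), "<model>", "eval")
    v = v0
    trace = [v]
    for _ in range(steps):
        # evaluate the textual right-hand side with current state
        dvdt = eval(code, {"__builtins__": {}, "exp": math.exp},
                    {**params, "v": v})
        v = v + dt * dvdt
        trace.append(v)
    return trace
```

Because the model lives as a string plus a parameter dictionary, the same description could be regenerated as code for different targets, which is the portability argument the abstract makes.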

  4. Spatially explicit watershed modeling: tracking water, mercury and nitrogen in multiple systems under diverse conditions

    EPA Science Inventory

    Environmental decision-making and the influences of various stressors, such as landscape and climate changes on water quantity and quality, requires the application of environmental modeling. Spatially explicit environmental and watershed-scale models using GIS as a base framewor...

  5. HexSim - A general purpose framework for spatially-explicit, individual-based modeling

    EPA Science Inventory

    HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications. This talk will focus on a subset of those ap...

  6. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    NASA Astrophysics Data System (ADS)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

    Despite the important role aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that the explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme of the Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck the west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly (WRF-AE) included and another with aerosols implicitly (WRF-AI) assumed, are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transport of dust from north Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.

  7. Antiferromagnetism and d_{x^2-y^2}-Wave Pairing in the Colored Hubbard Model

    NASA Astrophysics Data System (ADS)

    Baier, Tobias; Bick, Eike

    2001-08-01

    We introduce a new formulation of the 2d Hubbard model on a square lattice (the "colored" Hubbard model). In this formulation, nonlocal physical properties of interest, such as antiferromagnetic or d_{x^2-y^2}-wave superconducting behavior, are included in an explicit way. Analyzing the phase diagram numerically in a mean-field approximation, we show that our approach yields results which are in qualitative agreement with experiment.

  8. Including gauge-group parameters into the theory of interactions: an alternative mass-generating mechanism for gauge fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldaya, V.; Lopez-Ruiz, F. F.; Sanchez-Sastre, E.

    2006-11-03

    We reformulate the gauge theory of interactions by introducing the gauge group parameters into the model. The dynamics of the new 'Goldstone-like' bosons is accomplished through a non-linear σ-model Lagrangian. They are minimally coupled according to a proper prescription which provides mass terms to the intermediate vector bosons without spoiling gauge invariance. The present formalism is explicitly applied to the Standard Model of electroweak interactions.

  9. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
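Models of this family express the Reynolds-stress anisotropy as a tensor polynomial in the normalized mean strain-rate and rotation tensors; for two-dimensional mean flows the representation reduces to Pope's three-term basis (shown here only as the generic form, with coefficients $\beta_i$ that depend on the invariants and on the parent second-order closure):

```latex
b_{ij} = \beta_1 S_{ij}
       + \beta_2 \left( S_{ik} W_{kj} - W_{ik} S_{kj} \right)
       + \beta_3 \left( S_{ik} S_{kj} - \tfrac{1}{3} S_{kl} S_{kl}\, \delta_{ij} \right),
```

with $b_{ij} = \overline{u_i u_j}/(2k) - \delta_{ij}/3$, and $S_{ij}$, $W_{ij}$ the suitably normalized mean strain-rate and rotation tensors. The three-dimensional generalization discussed in the abstract requires a larger integrity basis.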

  10. Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffield, Justin

    2017-04-01

    As hydraulic modelling moves to increasingly large spatial domains, it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have included reservoirs for at least the past two decades, yet they cannot explicitly model variations in the spatial extent of reservoirs, and many reservoir operations are not simulated during model run time. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large-scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin (approximately 400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. 
The results show that we are able to reproduce the variation in Lake Volta water levels, and that including the dam operations and Lake Volta has significant impacts on water levels across the domain.

  11. Simulation of streamflow in the McTier Creek watershed, South Carolina

    USGS Publications Warehouse

    Feaster, Toby D.; Golden, Heather E.; Odom, Kenneth R.; Lowery, Mark A.; Conrads, Paul; Bradley, Paul M.

    2010-01-01

    The McTier Creek watershed is located in the Sand Hills ecoregion of South Carolina and is a small catchment within the Edisto River Basin. Two watershed hydrology models were applied to the McTier Creek watershed as part of a larger scientific investigation to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin. The two models are the topography-based hydrological model (TOPMODEL) and the grid-based mercury model (GBMM). TOPMODEL uses the variable-source area concept for simulating streamflow, and GBMM uses a spatially explicit modified curve-number approach for simulating streamflow. The hydrologic output from TOPMODEL can be used explicitly to simulate the transport of mercury in separate applications, whereas the hydrology output from GBMM is used implicitly in the simulation of mercury fate and transport in GBMM. The modeling efforts were a collaboration between the U.S. Geological Survey and the U.S. Environmental Protection Agency, National Exposure Research Laboratory. Calibrations of TOPMODEL and GBMM were done independently while using the same meteorological data and the same period of record of observed data. Two U.S. Geological Survey streamflow-gaging stations were available for comparison of observed daily mean flow with simulated daily mean flow: station 02172300, McTier Creek near Monetta, South Carolina, and station 02172305, McTier Creek near New Holland, South Carolina. The period of record at the Monetta gage covers a broad range of hydrologic conditions, including a drought and a significant wet period. Calibrating the models under these extreme conditions along with the normal flow conditions included in the record enhances the robustness of the two models. Several quantitative assessments of the goodness of fit between model simulations and the observed daily mean flows were done. 
These included the Nash-Sutcliffe model-fit efficiency coefficient, Pearson's correlation coefficient, the root mean square error, the bias, and the mean absolute error. In addition, a number of graphical tools were used to assess how well the models captured the characteristics of the observed data at the Monetta and New Holland streamflow-gaging stations. The graphical tools included temporal plots of simulated and observed daily mean flows, flow-duration curves, single-mass curves, and various residual plots. The results indicated that TOPMODEL and GBMM generally produced simulations that reasonably captured the quantity, variability, and timing of the observed streamflow. For the periods modeled, the total volume of simulated daily mean flows from TOPMODEL was within 1 to 5 percent of the total volume of the observed daily mean flows, and the total volume from GBMM was within 1 to 10 percent. A noticeable characteristic of the simulated hydrographs from both models is the complexity of balancing groundwater recession and flow at the streamgage when flows peak and recede rapidly. However, GBMM results indicate that groundwater recession, which affects the receding limb of the hydrograph, was more difficult to estimate with the spatially explicit curve-number approach. Although the purpose of this report is not to directly compare the two models, given the characteristics of the McTier Creek watershed and the fact that GBMM uses the spatially explicit curve-number approach rather than the variable-source-area concept used in TOPMODEL, GBMM was able to capture the flow characteristics reasonably well.

  12. Modelling individual tree height to crown base of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.)

    PubMed Central

    Jansa, Václav

    2017-01-01

    Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model using dominant height (HDOM); basal area of trees larger in diameter than a subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure); and basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found best suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, while the model for European beech was tested using a partitioned dataset (a part of the main dataset). The variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. The HCB increased with increasing competitive interactions described by the tree-centered competition measures BAL or HCI, and with the species mixing effect described by BAPOR. 
A test of the mixed-effects HCB model with the random effects estimated using at least four trees per sample plot in the validation data confirmed that the model was precise enough for the prediction of HCB across a range of site quality, tree size, stand density, and stand structure. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. The HCB models also make it possible to run growth simulations from data that lack values for either crown ratio or HCB. PMID:29049391

  13. Dynamical discrete/continuum linear response shells theory of solvation: convergence test for NH4+ and OH- ions in water solution using DFT and DFTB methods.

    PubMed

    de Lima, Guilherme Ferreira; Duarte, Hélio Anderson; Pliego, Josefredo R

    2010-12-09

    A new dynamical discrete/continuum solvation model was tested for NH(4)(+) and OH(-) ions in water solvent. The method is similar to continuum solvation models in the sense that the linear response approximation is used. However, unlike pure continuum models, explicit solvent molecules are included in the inner shell, which allows adequate treatment of the specific solute-solvent interactions present in the first solvation shell, the main drawback of continuum models. Molecular dynamics calculations coupled with the SCC-DFTB method are used to generate the configurations of the solute in a box with 64 water molecules, while the interaction energies are calculated at the DFT level. We tested the convergence of the method using a variable number of explicit water molecules and found that even a small number of waters (as few as 14) is able to produce converged values. Our results also point out that the Born model, often used for long-range correction, is not reliable, and our method should be applied for more accurate calculations.

  14. Spatially explicit shallow landslide susceptibility mapping over large areas

    USGS Publications Warehouse

    Bellugi, Dino; Dietrich, William E.; Stock, Jonathan D.; McKean, Jim; Kazian, Brian; Hargrove, Paul

    2011-01-01

    Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event, and to do so it has generated downscaled precipitation maps for the storm. To predict the corresponding pattern of shallow landslide susceptibility across the state, we have used the model Shalstab (a coupled steady-state runoff and infinite slope stability model), which provides spatially explicit estimates of relative potential instability. Slope stability models that include the effects of subsurface runoff on potentially destabilizing pore pressure evolution require water routing, and hence the definition of the upslope drainage area of each cell. To calculate drainage area efficiently over a large area, we developed a parallel framework to scale up Shalstab and specifically introduce a new efficient parallel drainage area algorithm which produces seamless results. The single seamless shallow landslide susceptibility map for all of California was accomplished in a short run time, indicating that much larger areas can be efficiently modelled. Because landslide maps generally overpredict the extent of instability for any given storm, local empirical data on the fraction of predicted unstable cells that failed for observed rainfall intensity can be used to specify the likely extent of hazard for a given storm. This suggests that campaigns to collect local precipitation data and detailed shallow landslide location maps after major storms could be used to calibrate models and improve their use in hazard assessment for individual storms.
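    The infinite-slope half of a coupled model like Shalstab can be sketched as follows. This is the generic cohesionless infinite-slope criterion with illustrative parameter values, not Shalstab's exact parameterization:

    ```python
    import math

    # Generic cohesionless infinite-slope stability criterion (illustrative,
    # not Shalstab's exact formulation): a cell is unstable when the factor
    # of safety drops below 1 for a given relative water level.
    def factor_of_safety(slope_deg, wetness, tan_phi=0.8, rho_ratio=0.5):
        """wetness   = h/z, the saturated fraction of the soil column;
        rho_ratio = rho_water / rho_saturated_soil (illustrative value);
        tan_phi   = tangent of the soil friction angle (illustrative)."""
        theta = math.radians(slope_deg)
        return (1.0 - wetness * rho_ratio) * tan_phi / math.tan(theta)

    print(factor_of_safety(25.0, 0.0) > 1.0)   # dry 25-degree slope: stable
    print(factor_of_safety(40.0, 1.0) > 1.0)   # saturated 40-degree slope: unstable
    ```

    In a raster implementation this function is evaluated per cell, with the wetness term supplied by the steady-state runoff component via the upslope drainage area.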

  15. The Radius and Entropy of a Magnetized, Rotating, Fully Convective Star: Analysis with Depth-dependent Mixing Length Theories

    NASA Astrophysics Data System (ADS)

    Ireland, Lewis G.; Browning, Matthew K.

    2018-04-01

    Some low-mass stars appear to have larger radii than predicted by standard 1D structure models; prior work has suggested that inefficient convective heat transport, due to rotation and/or magnetism, may ultimately be responsible. We examine this issue using 1D stellar models constructed using Modules for Experiments in Stellar Astrophysics (MESA). First, we consider standard models that do not explicitly include rotational/magnetic effects, with convective inhibition modeled by decreasing a depth-independent mixing length theory (MLT) parameter α MLT. We provide formulae linking changes in α MLT to changes in the interior specific entropy, and hence to the stellar radius. Next, we modify the MLT formulation in MESA to mimic explicitly the influence of rotation and magnetism, using formulations suggested by Stevenson and MacDonald & Mullan, respectively. We find rapid rotation in these models has a negligible impact on stellar structure, primarily because a star’s adiabat, and hence its radius, is predominantly affected by layers near the surface; convection is rapid and largely uninfluenced by rotation there. Magnetic fields, if they influenced convective transport in the manner described by MacDonald & Mullan, could lead to more noticeable radius inflation. Finally, we show that these non-standard effects on stellar structure can be fabricated using a depth-dependent α MLT: a non-magnetic, non-rotating model can be produced that is virtually indistinguishable from one that explicitly parameterizes rotation and/or magnetism using the two formulations above. We provide formulae linking the radially variable α MLT to these putative MLT reformulations.

  16. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
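    The integrated equation-error idea can be illustrated with a minimal sketch. The first-order model, signals, and window length below are illustrative choices, not the paper's exact model class: integrating the model equation over sub-intervals eliminates the derivative and any dependence on unknown initial conditions, leaving a problem that is linear in the parameters.

    ```python
    import numpy as np

    # Estimate a, b in y'(t) + a*y(t) = b*u(t) from sampled input/output data.
    # Integrating the equation over a window [t1, t2] gives
    #   y(t2) - y(t1) = -a * integral(y) + b * integral(u),
    # which is linear in (a, b): one least-squares row per window.
    a_true, b_true, dt = 2.0, 3.0, 0.001
    t = np.arange(0.0, 5.0, dt)
    u = np.sin(t)                          # known input signal
    y = np.zeros_like(t)                   # "measured" output (simulated here)
    for k in range(len(t) - 1):
        y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])

    rows, rhs, n = [], [], 500             # n samples per integration window
    for start in range(0, len(t) - n, n):
        rows.append([-dt * y[start:start + n].sum(),   # approximates -integral(y)
                     dt * u[start:start + n].sum()])   # approximates  integral(u)
        rhs.append(y[start + n] - y[start])
    a_est, b_est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    print(a_est, b_est)                    # close to 2.0, 3.0
    ```

    No initial or boundary conditions appear in the regression: only differences of the measured output at the window endpoints are needed.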

  17. Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.

    PubMed

    Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D

    2018-01-01

    Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but not as well-described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
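    Fitts' law itself is a simple linear model, MT = a + b * ID, with index of difficulty ID = log2(2A/W) for target amplitude A and width W. A minimal sketch with hypothetical amplitude/width/time values (not the study's data) shows how the coefficients are recovered by least squares:

    ```python
    import math

    # Fitts' law: movement time MT = a + b * ID, where ID = log2(2A/W).
    def index_of_difficulty(amplitude, width):
        return math.log2(2.0 * amplitude / width)

    # Hypothetical (amplitude, width, MT-in-seconds) trials, purely illustrative.
    trials = [(index_of_difficulty(amp, wid), mt) for amp, wid, mt in
              [(8, 4, 0.35), (16, 4, 0.48), (16, 2, 0.61), (32, 2, 0.74)]]

    # Ordinary least-squares line through the (ID, MT) points.
    n = len(trials)
    sx = sum(x for x, _ in trials); sy = sum(y for _, y in trials)
    sxx = sum(x * x for x, _ in trials); sxy = sum(x * y for x, y in trials)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope: seconds per bit
    a = (sy - b * sx) / n                           # intercept: seconds
    print(f"MT = {a:.3f} + {b:.3f} * ID")
    ```

    The slope b (seconds per bit of difficulty) is the throughput-related quantity typically compared across conditions such as the explicit and implicit targets above.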

  18. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes, ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale or atmosphere-ocean coupled global climate models. The former offers full physics and high spatial resolution, but it is computationally inefficient and typically lacks an interactive ocean, whereas the latter offers high computational efficiency and ocean-atmosphere coupling, but it lacks adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modelling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  19. Flexible explicit but rigid implicit learning in a visuomotor adaptation task

    PubMed Central

    Bond, Krista M.

    2015-01-01

    There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690

  20. Effects of explicit instruction on the acquisition of students' science inquiry skills in grades 5 and 6 of primary education

    NASA Astrophysics Data System (ADS)

    Kruit, P. M.; Oostdam, R. J.; van den Berg, E.; Schuitema, J. A.

    2018-03-01

    In most primary science classes, students are taught science inquiry skills by way of learning by doing. Research shows that explicit instruction may be more effective. The aim of this study was to investigate the effects of explicit instruction on the acquisition of inquiry skills. Participants included 705 Dutch fifth and sixth graders. Students in an explicit instruction condition received an eight-week intervention of explicit instruction on inquiry skills. In the lessons of the implicit condition, all aspects of explicit instruction were absent. Students in the baseline condition followed their regular science curriculum. In a quasi-experimental pre-test-post-test design, two paper-and-pencil tests and three performance assessments were used to examine the acquisition and transfer of inquiry skills. Additionally, questionnaires were used to measure metacognitive skills. The results of a multilevel analysis controlling for pre-tests, general cognitive ability, age, gender and grade level indicated that explicit instruction facilitates the acquisition of science inquiry skills. Specifically on the performance assessment with an unfamiliar topic, students in the explicit condition outperformed students in both the implicit and baseline conditions. Therefore, this study provides a strong argument for including an explicit teaching method for developing inquiry skills in primary science education.

  1. Sintering of Pt nanoparticles via volatile PtO2: Simulation and comparison with experiments

    DOE PAGES

    Plessow, Philipp N.; Abild-Pedersen, Frank

    2016-09-23

    It is a longstanding question whether sintering of platinum under oxidizing conditions is mediated by surface migration of Pt species or through the gas phase, by PtO2(g). Clearly, a rational approach to avoid sintering requires understanding the underlying mechanism. A basic theory for the simulation of ripening through the vapor phase has been derived by Wynblatt and Gjostein. Recent modeling efforts, however, have focused entirely on surface-mediated ripening. In this work, we explicitly model ripening through PtO2(g) and study how oxygen pressure, temperature, and the shape of the particle size distribution affect sintering. On the basis of the available data on α-quartz, adsorption of monomeric Pt species on the support is extremely weak and has therefore not been explicitly simulated, although this may be important for more strongly interacting supports. Our simulations clearly show that ripening through the gas phase is predicted to be relevant. Assuming clean Pt particles, sintering is generally overestimated. This can be remedied by explicitly including oxygen coverage effects that lower both surface free energies and the sticking coefficient of PtO2(g). Additionally, mass-transport limitations in the gas phase may play a role. Using a parameterization that accounts for these effects, we can quantitatively reproduce a number of experiments from the literature, including pressure and temperature dependence. Lastly, this substantiates the hypothesis of ripening via PtO2(g) as an alternative to surface-mediated ripening.

  2. Bethe vectors for XXX-spin chain

    NASA Astrophysics Data System (ADS)

    Burdík, Čestmír; Fuksa, Jan; Isaev, Alexei

    2014-11-01

    The paper deals with algebraic Bethe ansatz for the XXX-spin chain. Generators of the Yang-Baxter algebra are expressed in a basis of free fermions and used to calculate the explicit form of Bethe vectors. Their relation to N-component models is used to prove a conjecture about their general form. Some remarks on the inhomogeneous XXX-spin chain are included.

  3. Monitoring and modeling terrestrial arthropod diversity on the Kenai National Wildlife Refuge

    Treesearch

    Matthew L. Bowser; John M. Morton

    2009-01-01

    The primary purpose of the Kenai National Wildlife Refuge (KENWR) is to "conserve fish and wildlife populations in their natural diversity," where "fish and wildlife" explicitly includes arthropods. To this end, we developed a Long Term Ecological Monitoring Program (LTEMP), a collaborative effort with the USDA Forest Inventory and Analysis (FIA)...

  4. Learning for Keeps: Teaching the Strategies Essential for Creating Independent Learners

    ERIC Educational Resources Information Center

    Koenig, Rhoda

    2010-01-01

    How can teachers ensure instruction is aligned with 21st century demands for self-directed, collaborative problem solvers? Practice exercises are not the answer. Instead, here's a book that explains why the key is to use explicit instruction that includes proficient models, specific feedback, and supportive coaching. Rhoda Koenig gives you insight…

  5. Competency Based Teaching of College Physics: The Philosophy and The Practice

    ERIC Educational Resources Information Center

    Rajapaksha, Ajith; Hirsch, Andrew S.

    2017-01-01

    The practice of learning physics contributes to the development of many transdisciplinary skills learners are able to exercise independent of the physics discipline. However, the standard practices of physics instruction do not explicitly include the monitoring or evaluation of these skills. In a competency-based (CB) learning model, the skills…

  6. Modelling the nonlinear behaviour of an underplatform damper test rig for turbine applications

    NASA Astrophysics Data System (ADS)

    Pesaresi, L.; Salles, L.; Jones, A.; Green, J. S.; Schwingshackl, C. W.

    2017-02-01

    Underplatform dampers (UPD) are commonly used in aircraft engines to mitigate the risk of high-cycle fatigue failure of turbine blades. The energy dissipated at the friction contact interface of the damper reduces the vibration amplitude significantly, and the coupling of the blades can also lead to significant shifts of the resonance frequencies of the bladed disk. The highly nonlinear behaviour of bladed discs constrained by UPDs requires an advanced modelling approach to ensure that the correct damper geometry is selected during the design of the turbine, and that no unexpected resonance frequencies and amplitudes will occur in operation. Approaches based on an explicit model of the damper in combination with multi-harmonic balance solvers have emerged as a promising way to predict the nonlinear behaviour of UPDs correctly; however, rigorous experimental validations are required before approaches of this type can be used with confidence. In this study, a nonlinear analysis based on an updated explicit damper model with different levels of detail is performed, and the results are evaluated against a newly-developed UPD test rig. Detailed linear finite element models are used as input for the nonlinear analysis, allowing the inclusion of damper flexibility and inertia effects. The nonlinear friction interface between the blades and the damper is described with a dense grid of 3D friction contact elements which allows accurate capture of the underlying nonlinear mechanism that drives the global nonlinear behaviour. The introduced explicit damper model showed a strong dependence on the contact pressure distribution; using an accurate, measurement-based distribution better matched the nonlinear dynamic behaviour of the test rig. 
Good agreement with the measured frequency response data could only be reached when the zeroth harmonic term (constant term) was included in the multi-harmonic expansion of the nonlinear problem, highlighting its importance when the contact interface experiences large normal load variation. The resulting numerical damper kinematics, with strong translational and rotational motion, and the global blade frequency response were fully validated experimentally, showing the accuracy of the suggested highly detailed explicit UPD modelling approach.

  7. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
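    The contrast between the scheme classes can be sketched on a toy storage equation. The ODE, parameters, and tolerance below are illustrative stand-ins, not the conceptual models or settings used in the paper:

    ```python
    # Fixed-step explicit Euler vs. adaptive explicit Heun on a toy nonlinear
    # storage ODE dS/dt = P - k*S^2 (a stand-in for a conceptual bucket model;
    # all values are illustrative).
    def f(S, P=2.0, k=0.5):
        return P - k * S * S

    def euler_fixed(S, t_end, dt):
        t = 0.0
        while t < t_end - 1e-9:
            S += dt * f(S)
            t += dt
        return S

    def heun_adaptive(S, t_end, tol=1e-6):
        t, dt = 0.0, 0.1
        while t < t_end - 1e-9:
            dt = min(dt, t_end - t)
            k1 = f(S)
            k2 = f(S + dt * k1)
            S_heun = S + 0.5 * dt * (k1 + k2)       # 2nd-order estimate
            err = abs(S_heun - (S + dt * k1))       # vs. embedded Euler step
            if err <= tol:                          # accept step
                S, t = S_heun, t + dt
            # grow/shrink the step from the error estimate (clamped factors)
            dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-300)) ** 0.5))
        return S

    # The steady state is S* = sqrt(P/k) = 2.0; both schemes approach it,
    # but the adaptive scheme controls its local error explicitly.
    print(euler_fixed(0.0, 20.0, 0.05), heun_adaptive(0.0, 20.0))
    ```

    The accept/reject-with-step-control loop is the essential mechanism of the adaptive schemes studied in the paper; fixed-step explicit Euler has no comparable error control, which is what allows its numerical errors to masquerade as model structural errors.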

  8. Empirical methods for modeling landscape change, ecosystem services, and biodiversity

    Treesearch

    David Lewis; Ralph Alig

    2009-01-01

    The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...

  9. SPATIALLY EXPLICIT MICRO-LEVEL MODELLING OF LAND USE CHANGE AT THE RURAL-URBAN INTERFACE. (R828012)

    EPA Science Inventory

    This paper describes micro-economic models of land use change applicable to the rural–urban interface in the US. Use of a spatially explicit micro-level modelling approach permits the analysis of regional patterns of land use as the aggregate outcomes of many, disparate...

  10. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (the skyshine component). Two different three-dimensional geometrical models for a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  11. Fast Proton Titration Scheme for Multiscale Modeling of Protein Solutions.

    PubMed

    Teixeira, Andre Azevedo Reis; Lund, Mikael; da Silva, Fernando Luís Barroso

    2010-10-12

    Proton exchange between titratable amino acid residues and the surrounding solution gives rise to exciting electric processes in proteins. We present a proton titration scheme for studying acid-base equilibria in Metropolis Monte Carlo simulations where salt is treated at the Debye-Hückel level. The method, rooted in the Kirkwood model of impenetrable spheres, is applied on the three milk proteins α-lactalbumin, β-lactoglobulin, and lactoferrin, for which we investigate the net-charge, molecular dipole moment, and charge capacitance. Over a wide range of pH and salt conditions, excellent agreement is found with more elaborate simulations where salt is explicitly included. The implicit salt scheme is orders of magnitude faster than the explicit analog and allows for transparent interpretation of physical mechanisms. It is shown how the method can be expanded to multiscale modeling of aqueous salt solutions of many biomolecules with nonstatic charge distributions. Important examples are protein-protein aggregation, protein-polyelectrolyte complexation, and protein-membrane association.
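The implicit-salt titration move described above can be sketched as a Metropolis step in which salt enters only through a Debye-Hückel screened pair energy. The 1-D site positions, pKa values, charge convention (+1 protonated, 0 deprotonated), and constants below are illustrative assumptions, not the milk-protein model's parameters; a single isolated site should recover the Henderson-Hasselbalch protonated fraction 1/(1 + 10**(pH - pKa)).

```python
import math, random

LN10 = math.log(10.0)
LB = 7.0   # Bjerrum length of water at 298 K [Angstrom] (assumed constant)

def dh_energy(charges, positions, kappa):
    """Pairwise screened-Coulomb (Debye-Hueckel) energy in units of kT."""
    E = 0.0
    for i in range(len(charges)):
        for j in range(i + 1, len(charges)):
            r = abs(positions[i] - positions[j])
            E += LB * charges[i] * charges[j] * math.exp(-kappa * r) / r
    return E

def titrate(pH, pKa, positions, kappa, n_steps=200_000, seed=1):
    """Metropolis sampling of protonation states at fixed pH."""
    random.seed(seed)
    prot = [1] * len(pKa)          # 1 = protonated (charge +1), 0 = neutral
    count = [0] * len(pKa)
    for _ in range(n_steps):
        i = random.randrange(len(pKa))
        old = dh_energy(prot, positions, kappa)
        prot[i] ^= 1               # trial move: flip protonation state
        new = dh_energy(prot, positions, kappa)
        # intrinsic (Henderson-Hasselbalch) cost of the proton move, in kT
        intrinsic = LN10 * (pH - pKa[i])
        dE = new - old + (intrinsic if prot[i] else -intrinsic)
        if dE > 0 and random.random() >= math.exp(-dE):
            prot[i] ^= 1           # reject: restore the old state
        for k in range(len(pKa)):
            count[k] += prot[k]
    return [c / n_steps for c in count]

# One isolated site at pH = pKa: protonated fraction should be 0.5.
frac = titrate(pH=4.0, pKa=[4.0], positions=[0.0], kappa=0.1)
```

Recomputing the full pair energy per move is O(N^2) and kept only for clarity; the point of the implicit-salt scheme is that no explicit ion coordinates need to be sampled at all.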

  12. Higher-derivative operators and effective field theory for general scalar-tensor theories

    NASA Astrophysics Data System (ADS)

    Solomon, Adam R.; Trodden, Mark

    2018-02-01

    We discuss the extent to which it is necessary to include higher-derivative operators in the effective field theory of general scalar-tensor theories. We explore the circumstances under which it is correct to restrict to second-order operators only, and demonstrate this using several different techniques, such as reduction of order and explicit field redefinitions. These methods are applied, in particular, to the much-studied Horndeski theories. The goal is to clarify the application of effective field theory techniques in the context of popular cosmological models, and to explicitly demonstrate how and when higher-derivative operators can be cast into lower-derivative forms suitable for numerical solution techniques.

  13. A Galilean Invariant Explicit Algebraic Reynolds Stress Model for Curved Flows

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath

    1996-01-01

    A Galilean invariant weak-equilibrium hypothesis that is sensitive to streamline curvature is proposed. The hypothesis leads to an algebraic Reynolds stress model for curved flows that is fully explicit and self-consistent. The model is tested in curved homogeneous shear flow: agreement with the Reynolds stress closure model is excellent, and agreement with available experimental data is adequate.

  14. On the physical Hilbert space of loop quantum cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noui, Karim; Perez, Alejandro; Vandersloot, Kevin

    2005-02-15

    In this paper we present a model of Riemannian loop quantum cosmology with a self-adjoint quantum scalar constraint. The physical Hilbert space is constructed using refined algebraic quantization. When matter is included in the form of a cosmological constant, the model is exactly solvable and we show explicitly that the physical Hilbert space is separable, consisting of a single physical state. We extend the model to the Lorentzian sector and discuss important implications for standard loop quantum cosmology.

  15. Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.

  16. A functional-dynamic reflection on participatory processes in modeling projects.

    PubMed

    Seidl, Roman

    2015-12-01

    The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it is not well investigated whether and how explicitly these functions and the dynamics of a participatory process are reflected by modeling projects in particular. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation; most often, more than one per project can be identified, along with the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling, covering diverse approaches, and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and perfectly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus quality management.

  17. A multi-band, multi-level, multi-electron model for efficient FDTD simulations of electromagnetic interactions with semiconductor quantum wells

    NASA Astrophysics Data System (ADS)

    Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong

    2015-08-01

    We report a new computational model for simulations of electromagnetic interactions with semiconductor quantum well(s) (SQW) in complex electromagnetic geometries using the finite-difference time-domain method. The presented model is based on an approach of spanning a large number of electron transverse momentum states in each SQW sub-band (multi-band) with a small number of discrete multi-electron states (multi-level, multi-electron). This enables accurate and efficient two-dimensional (2-D) and three-dimensional (3-D) simulations of nanophotonic devices with SQW active media. The model includes the following features: (1) Optically induced interband transitions between various SQW conduction and heavy-hole or light-hole sub-bands are considered. (2) Novel intra sub-band and inter sub-band transition terms are derived to thermalize the electron and hole occupational distributions to the correct Fermi-Dirac distributions. (3) The terms in (2) result in an explicit update scheme which circumvents numerically cumbersome iterative procedures. This significantly augments computational efficiency. (4) Explicit update terms to account for carrier leakage to unconfined states are derived, which thermalize the bulk and SQW populations to a common quasi-equilibrium Fermi-Dirac distribution. (5) Auger recombination and intervalence band absorption are included. The model is validated by comparisons to analytic band-filling calculations, simulations of SQW optical gain spectra, and photonic crystal lasers.

  18. Explicit modeling of organic chemistry and secondary organic aerosol partitioning for Mexico City and its outflow plume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee-Taylor, J.; Madronich, Sasha; Aumont, B.

    2011-12-21

    The evolution of organic aerosols (OA) in Mexico City and its outflow is investigated with the nearly explicit gas-phase photochemistry model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere), wherein precursor hydrocarbons are oxidized to numerous intermediate species for which vapor pressures are computed and used to determine gas/particle partitioning in a chemical box model. Precursor emissions included observed C3-10 alkanes, alkenes, and light aromatics, as well as larger n-alkanes (up to C25) not directly observed but estimated by scaling to particulate emissions according to their volatility. Conditions were selected for comparison with observations made in March 2006 (MILAGRO). The model successfully reproduces the magnitude and diurnal shape for both primary (POA) and secondary (SOA) organic aerosols, with POA peaking in the early morning at 15-20 μg m-3, and SOA peaking at 10-15 μg m-3 during mid-day. The majority (> 75%) of the model SOA stems from the large n-alkanes, with the remainder mostly from the light aromatics. Simulated OA elemental composition reproduces observed H/C and O/C ratios reasonably well, although modeled ratios develop more slowly than observations suggest. SOA chemical composition is initially dominated by hydroxy ketones and nitrates from the large alkanes, with contributions from peroxy acyl nitrates and, at later times when NOx is lower, organic hydroperoxides. The simulated plume-integrated OA mass continues to increase for several days downwind despite dilution-induced particle evaporation, since oxidation chemistry leading to SOA formation remains strong. In this model, the plume SOA burden several days downwind exceeds that leaving the city by a factor of >3. These results suggest significant regional radiative impacts of SOA.

  19. A realizable explicit algebraic Reynolds stress model for compressible turbulent flow with significant mean dilatation

    NASA Astrophysics Data System (ADS)

    Grigoriev, I. A.; Wallin, S.; Brethouwer, G.; Johansson, A. V.

    2013-10-01

    The explicit algebraic Reynolds stress model of Wallin and Johansson [J. Fluid Mech. 403, 89 (2000)] is extended to compressible and variable-density turbulent flows. This is achieved by correctly taking into account the influence of the mean dilatation on the rapid pressure-strain correlation. The resulting model is formally identical to the original model in the limit of constant density. For two-dimensional mean flows the model is analyzed and the physical root of the resulting quartic equation is identified. Using a fixed-point analysis of homogeneously sheared and strained compressible flows, we show that the new model is realizable, unlike the previous model. Application of the model together with a K - ω model to quasi one-dimensional plane nozzle flow, transcending from subsonic to supersonic regime, also demonstrates realizability. Negative "dilatational" production of turbulence kinetic energy competes with positive "incompressible" production, eventually making the total production negative during the spatial evolution of the nozzle flow. Finally, an approach to include the baroclinic effect into the dissipation equation is proposed and an algebraic model for density-velocity correlations is outlined to estimate the corrections associated with density fluctuations. All in all, the new model can become a significant tool for CFD (computational fluid dynamics) of compressible flows.

  20. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to the users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times lower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM-SP1 is presented.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M.G.; Abraham, K.; Ackermann, M.

    We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.

  2. Universality of Critically Pinned Interfaces in Two-Dimensional Isotropic Random Media

    NASA Astrophysics Data System (ADS)

    Grassberger, Peter

    2018-05-01

    Based on extensive simulations, we conjecture that critically pinned interfaces in two-dimensional isotropic random media with short-range correlations are always in the universality class of ordinary percolation. Thus, in contrast to interfaces in >2 dimensions, there is no distinction between fractal (i.e., percolative) and rough but nonfractal interfaces. Our claim includes interfaces in zero-temperature random field Ising models (both with and without spontaneous nucleation), in heterogeneous bootstrap percolation, and in susceptible-weakened-infected-removed epidemics. It does not include models with long-range correlations in the randomness and models where overhangs are explicitly forbidden (which would imply nonisotropy of the medium).

  3. Towards an Understanding of Atmospheric Balance

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    During a 35 year period I published 30+ peer-reviewed papers and technical reports concerning, in part or whole, the topic of atmospheric balance. Most used normal modes, either implicitly or explicitly, as the appropriate diagnostic tool. This included examination of nonlinear balance in several different global and regional models using a variety of novel metrics, as well as development of nonlinear normal mode initialization schemes for particular global and regional models. Recent studies also included the use of adjoint models and OSSEs to answer some questions regarding balance. I will summarize what I learned through those many works, but also present what I see as remaining issues to be considered or investigated.

  4. Explicit integration of Friedmann's equation with nonlinear equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Shouxin; Gibbons, Gary W.; Yang, Yisong, E-mail: chensx@henu.edu.cn, E-mail: gwg1@damtp.cam.ac.uk, E-mail: yisongyang@nyu.edu

    2015-05-01

    In this paper we study the integrability of the Friedmann equations, when the equation of state for the perfect-fluid universe is nonlinear, in the light of the Chebyshev theorem. A series of important, yet not previously touched, problems will be worked out, including the generalized Chaplygin gas, two-term energy density, trinomial Friedmann, Born-Infeld, two-fluid models, and Chern-Simons modified gravity theory models. With the explicit integration, we are able to understand exactly the roles that the physical parameters in various models play in the cosmological evolution, which may also offer clues to a profound understanding of the problems in general settings. For example, in the Chaplygin gas universe, a few integrable cases lead us to derive a universal formula for the asymptotic exponential growth rate of the scale factor, of an explicit form, whether the Friedmann equation is integrable or not, which reveals the coupled roles played by various physical sectors; it is seen that, as long as there is a tiny presence of nonlinear matter, conventional linear matter makes a contribution to the dark matter, which becomes significant near the phantom divide line. The Friedmann equations also arise in areas of physics not directly related to cosmology. We provide some examples ranging from geometric optics and central orbits to soap films and the shape of glaciated valleys to which our results may be applied.
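The asymptotic growth-rate result for the Chaplygin gas can be checked numerically. The sketch below is an illustration under assumed units with 8πG/3 = 1 (so H² = ρ), with A = B = 1 chosen arbitrarily: it integrates d(ln a)/dt = H(a) for the Chaplygin-gas density ρ(a) = sqrt(A + B/a⁶) and confirms that the expansion rate approaches the universal late-time value H∞ = A^(1/4).

```python
import math

# Chaplygin gas p = -A/rho; the continuity equation gives
# rho(a) = sqrt(A + B / a**6). Units with 8*pi*G/3 = 1 are assumed.
A, B = 1.0, 1.0

def rho(a):
    return math.sqrt(A + B / a**6)

def hubble(a):
    return math.sqrt(rho(a))   # Friedmann equation H**2 = rho (these units)

def f(x):                      # x = ln(a), so d(ln a)/dt = H(a)
    return hubble(math.exp(x))

# Integrate d(ln a)/dt = H with a classical fixed-step RK4 scheme.
t, dt, ln_a = 0.0, 1e-3, 0.0
while t < 20.0:
    k1 = f(ln_a)
    k2 = f(ln_a + 0.5 * dt * k1)
    k3 = f(ln_a + 0.5 * dt * k2)
    k4 = f(ln_a + dt * k3)
    ln_a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    t += dt

# At late times B/a**6 -> 0, rho -> sqrt(A), and the expansion rate
# approaches the universal exponential growth rate H_inf = A**0.25.
H_late = hubble(math.exp(ln_a))
```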

  5. Three Dimensional Explicit Model for Cometary Tail Ions Interactions with Solar Wind

    NASA Astrophysics Data System (ADS)

    Al Bermani, M. J. F.; Alhamed, S. A.; Khalaf, S. Z.; Ali, H. Sh.; Selman, A. A.

    2009-06-01

    The different interactions between cometary tail and solar wind ions are studied in the present paper based on the three-dimensional explicit Lax method. The model used in this research is based on the continuity equations describing the cometary tail-solar wind interactions. A three-dimensional system was considered. Simulation of the physical system was achieved using computer code written in Matlab 7.0. The parameters studied here assumed a Halley-type comet and include the particle density rho, the particle velocity v, the magnetic field strength B, the dynamic pressure p, and the internal energy E. The results of the present research showed that the interaction near the cometary nucleus is mainly affected by the new ions added to the plasma of the solar wind, which increase the average molecular weight and result in many unique characteristics of the cometary tail. These characteristics were explained in the presence of the IMF.

  6. Simulating ectomycorrhiza in boreal forests: implementing ectomycorrhizal fungi model MYCOFON in CoupModel (v5)

    NASA Astrophysics Data System (ADS)

    He, Hongxing; Meyer, Astrid; Jansson, Per-Erik; Svensson, Magnus; Rütting, Tobias; Klemedtsson, Leif

    2018-02-01

    The symbiosis between plants and ectomycorrhizal fungi (ECM) is shown to considerably influence the carbon (C) and nitrogen (N) fluxes between the soil, rhizosphere, and plants in boreal forest ecosystems. However, ECM are either neglected or represented only as an implicit, static term in most ecosystem models, which can potentially reduce their predictive power.

    In order to investigate the necessity of an explicit consideration of ECM in ecosystem models, we implement the previously developed MYCOFON model into a detailed process-based, soil-plant-atmosphere model, Coup-MYCOFON, which explicitly describes the C and N fluxes between ECM and roots. This new Coup-MYCOFON model approach (ECM explicit) is compared with two simpler model approaches: one containing ECM implicitly as a dynamic uptake of organic N considering the plant roots to represent the ECM (ECM implicit), and the other a static N approach in which plant growth is limited to a fixed N level (nonlim). Parameter uncertainties are quantified using Bayesian calibration in which the model outputs are constrained to current forest growth and soil C / N ratio for four forest sites along a climate and N deposition gradient in Sweden and simulated over a 100-year period.

    The nonlim approach could not describe the soil C / N ratio due to a large overestimation of soil N sequestration, but simulated the forest growth reasonably well. The ECM implicit and explicit approaches both describe the soil C / N ratio well but slightly underestimate the forest growth. The implicit approach simulated lower litter production and soil respiration than the explicit approach. The ECM explicit Coup-MYCOFON model provides a more detailed description of internal ecosystem fluxes and feedbacks of C and N between plants, soil, and ECM. Our modeling highlights the need to incorporate ECM and organic N uptake into ecosystem models; the nonlim approach is not recommended for future long-term soil C and N predictions. We also provide a key set of posterior fungal parameters that can be further investigated and evaluated in future ECM studies.

  7. Combining Model-driven and Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Whittle, John

    2004-01-01

    We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.

  8. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics have been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...

  9. Integration of an Individual-Based Fish Bioenergetics Model into a Spatially Explicit Water Quality Model (CE-QUAL-ICM)

    DTIC Science & Technology

    2010-04-01

    energy a fish can devote to growth being the difference between consumption in the form of food and the sum of life process expenditures, including...can incur an elemental deficit, and subsequently retain higher fractions of that element when it is in abundance to regain the target composition...Organic nitrogen and caloric content of detritus. Estuarine, Coastal, and Shelf Science 12: 39-47

  10. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.
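Discrete-time survival models of this kind are commonly fitted by expanding the data to person-period ("long") format, in which each subject contributes one binary row per interval survived, and then applying a penalized binary regression (ridge/lasso, as in the paper) to the interval-level event indicators. A minimal sketch of that expansion, with toy subjects and the `person_period` helper as assumptions for illustration:

```python
def person_period(subjects):
    """Expand (id, observed_time, event, covariates) records to
    person-period rows: y = 1 only in the terminal interval of an
    uncensored subject, 0 in every earlier (or censored) interval."""
    rows = []
    for sid, time, event, x in subjects:
        for t in range(1, time + 1):
            y = 1 if (event == 1 and t == time) else 0
            rows.append({"id": sid, "interval": t, "y": y, **x})
    return rows

subjects = [
    (1, 3, 1, {"age": 30}),   # event occurs in interval 3
    (2, 2, 0, {"age": 45}),   # censored after interval 2
]
rows = person_period(subjects)
```

The interval index enters the regression as a covariate (or dummy set) for the baseline hazard, and the ridge/lasso penalty is then applied to the remaining coefficients during fitting.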

  11. Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls

    NASA Technical Reports Server (NTRS)

    Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk

    1993-01-01

    Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval, preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections have to be explicitly included. The closure solutions express the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, where no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.

  12. Moderators of the Relationship between Implicit and Explicit Evaluation

    PubMed Central

    Nosek, Brian A.

    2005-01-01

    Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship – self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292

  13. Bidirectional holographic codes and sub-AdS locality

    NASA Astrophysics Data System (ADS)

    Yang, Zhao; Hayden, Patrick; Qi, Xiaoliang

    Tensor networks implementing quantum error correcting codes have recently been used as toy models of the holographic duality which explicitly realize some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this talk. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a "code" subspace, (2) a set of bulk states playing the role of "classical geometries" which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and (5) the ability to describe geometry at sub-AdS resolutions or even flat space. [Supported by the David and Lucile Packard Foundation.]

  14. Bidirectional holographic codes and sub-AdS locality

    NASA Astrophysics Data System (ADS)

    Yang, Zhao; Hayden, Patrick; Qi, Xiao-Liang

    2016-01-01

    Tensor networks implementing quantum error correcting codes have recently been used to construct toy models of holographic duality explicitly realizing some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this article. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a "code" subspace, (2) a set of bulk states playing the role of "classical geometries" which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and (5) the ability to describe geometry at sub-AdS resolutions or even flat space.

  15. A work-centered cognitively based architecture for decision support: the work-centered infomediary layer (WIL) model

    NASA Astrophysics Data System (ADS)

    Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge

    2003-09-01

    Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support to either context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support called the Work-centered Infomediary Layer (WIL) is presented. WIL separates decision support into three overall processes: building and dynamically maintaining an explicit context model, using the context model to identify opportunities for decision support, and tailoring generic decision-support strategies to the current context and offering them to the system user/decision-maker. The generic decision support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.

  16. Numerical analysis of the dynamic interaction between wheel set and turnout crossing using the explicit finite element method

    NASA Astrophysics Data System (ADS)

    Xin, L.; Markine, V. L.; Shevtsov, I. Y.

    2016-03-01

    A three-dimensional (3-D) explicit dynamic finite element (FE) model is developed to simulate the impact of the wheel on the crossing nose. The model consists of a wheel set moving over the turnout crossing. Realistic wheel, wing rail and crossing geometries have been used in the model. Using this model, the dynamic responses of the system, such as the contact forces between the wheel and the crossing, crossing nose displacements and accelerations, and stresses in the rail material as well as in the sleepers and ballast, can be obtained. Detailed analysis of the wheel set and crossing interaction using the local contact stress state in the rail is possible as well, which provides a good basis for prediction of the long-term behaviour of the crossing (fatigue analysis). In order to tune and validate the FE model, field measurements conducted on several turnouts in the railway network of the Netherlands are used here. The parametric study performed here, which includes variations of the crossing nose geometry, demonstrates the capabilities of the developed model. The results of the validation and parametric study are presented and discussed.

  17. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is often combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...

  18. Using expert knowledge to incorporate uncertainty in cause-of-death assignments for modeling of cause-specific mortality

    USGS Publications Warehouse

    Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.

    2018-01-01

    Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. 
We demonstrate that model selection results changed between the two approaches, and that incorporating observer uncertainty in cause-of-death assignment increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results; therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.
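    A minimal sketch of the elicitation-plus-augmentation idea, with hypothetical causes, belief vectors, and a flat hazard (the paper's actual model is a full Bayesian hierarchical framework, not this toy):

```python
import numpy as np

rng = np.random.default_rng(0)
causes = ["harvest", "predation", "vehicle"]

# Hypothetical elicited observer beliefs: for each mortality event,
# the probability that the death was due to each potential cause.
elicited = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

def augment_causes(elicited, hazard, rng):
    """One data-augmentation draw: sample each latent cause-of-death
    from the observer's prior re-weighted by the current estimate of
    the cause-specific hazards (a Gibbs-style update)."""
    weights = elicited * hazard
    probs = weights / weights.sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(causes), p=p) for p in probs])

# With equal hazards the draw simply follows the elicited priors;
# repeating this draw inside an MCMC loop propagates the observer's
# uncertainty into the posterior instead of fixing one cause per death.
draws = augment_causes(elicited, np.ones(3), rng)
assert draws.shape == (3,) and set(int(d) for d in draws) <= {0, 1, 2}
```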

  19. Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.

    PubMed

    Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R

    2004-10-01

    Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out-groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism.

  20. An image-based skeletal dosimetry model for the ICRP reference adult male—internal electron sources

    NASA Astrophysics Data System (ADS)

    Hough, Matthew; Johnson, Perry; Rajon, Didier; Jokisch, Derek; Lee, Choonsik; Bolch, Wesley

    2011-04-01

    In this study, a comprehensive electron dosimetry model of the adult male skeletal tissues is presented. The model is constructed using the University of Florida adult male hybrid phantom of Lee et al (2010 Phys. Med. Biol. 55 339-63) and the EGSnrc-based Paired Image Radiation Transport code of Shah et al (2005 J. Nucl. Med. 46 344-53). Target tissues include the active bone marrow, associated with radiogenic leukemia, and total shallow marrow, associated with radiogenic bone cancer. Monoenergetic electron emissions are considered over the energy range 1 keV to 10 MeV for the following sources: bone marrow (active and inactive), trabecular bone (surfaces and volumes), and cortical bone (surfaces and volumes). Specific absorbed fractions are computed according to the MIRD schema, and are given as skeletal-averaged values in the paper with site-specific values reported in both tabular and graphical format in an electronic annex available from http://stacks.iop.org/0031-9155/56/2309/mmedia. The distribution of cortical bone and spongiosa at the macroscopic dimensions of the phantom, as well as the distribution of trabecular bone and marrow tissues at the microscopic dimensions of the phantom, is imposed through detailed analyses of whole-body ex vivo CT images (1 mm resolution) and spongiosa-specific ex vivo microCT images (30 µm resolution), respectively, taken from a 40-year-old male cadaver. The method utilized in this work includes: (1) explicit accounting for changes in marrow self-dose with variations in marrow cellularity, (2) explicit accounting for electron escape from spongiosa, (3) explicit consideration of spongiosa cross-fire from cortical bone, and (4) explicit consideration of the ICRP's change in the surrogate tissue region defining the location of the osteoprogenitor cells (from a 10 µm endosteal layer covering the trabecular and cortical surfaces to a 50 µm shallow marrow layer covering trabecular and medullary cavity surfaces).
Skeletal-averaged values of absorbed fraction in the present model are noted to be very compatible with those weighted by the skeletal tissue distributions found in the ICRP Publication 110 adult male and female voxel phantoms, but are in many cases incompatible with values used in current and widely implemented internal dosimetry software.

  1. Diabatic forcing and initialization with assimilation of cloud and rain water in a forecast model: Methodology

    NASA Technical Reports Server (NTRS)

    Raymond, William H.; Olson, William S.; Callan, Geary

    1990-01-01

    The focus of this part of the investigation is to find one or more general modeling techniques that will help reduce the time taken by numerical forecast models to initiate or spin-up precipitation processes and enhance storm intensity. If the conventional data base could explain the atmospheric mesoscale flow in detail, then much of our problem would be eliminated. But the data base is primarily synoptic scale, requiring that a solution must be sought either in nonconventional data, in methods to initialize mesoscale circulations, or in ways of retaining between forecasts the model generated mesoscale dynamics and precipitation fields. All three methods are investigated. The initialization and assimilation of explicit cloud and rainwater quantities computed from conservation equations in a mesoscale regional model are examined. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed. The explicit cloud calculations were purposely kept simple so that different initialization techniques can be easily and economically tested. Precipitation spin-up processes associated with three different types of weather phenomena are examined. Our findings show that diabatic initialization, or diabatic initialization in combination with a new diabatic forcing procedure, works effectively to enhance the spin-up of precipitation in a mesoscale numerical weather prediction forecast. Also, the retention of cloud and rain water during the analysis phase of the 4-D data assimilation procedure is shown to be valuable. Without detailed observations, the vertical placement of the diabatic heating remains a critical problem.

  2. Pre-Service Teachers' Implicit and Explicit Attitudes toward Obesity Influence Their Judgments of Students

    ERIC Educational Resources Information Center

    Glock, Sabine; Beverborg, Arnoud Oude Groote; Müller, Barbara C. N.

    2016-01-01

    Obese children experience disadvantages in school and discrimination from their teachers. Teachers' implicit and explicit attitudes have been identified as contributing to these disadvantages. Drawing on dual process models, we investigated the nature of pre-service teachers' implicit and explicit attitudes, their motivation to respond without…

  3. Modeling and Simulations for the High Flux Isotope Reactor Cycle 400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Chandler, David; Ade, Brian J

    2015-03-01

    A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model that is based on current state-of-the-art methods and nuclear data was needed to serve as reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR that are discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary, based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.

  4. SOCIO-ETHICAL ISSUES IN PERSONALIZED MEDICINE: A SYSTEMATIC REVIEW OF ENGLISH LANGUAGE HEALTH TECHNOLOGY ASSESSMENTS OF GENE EXPRESSION PROFILING TESTS FOR BREAST CANCER PROGNOSIS.

    PubMed

    Ali-Khan, Sarah E; Black, Lee; Palmour, Nicole; Hallett, Michael T; Avard, Denise

    2015-01-01

    There have been multiple calls for explicit integration of ethical, legal, and social issues (ELSI) in health technology assessment (HTA), and addressing ELSI has been highlighted as key to optimizing benefits in the Omics/Personalized Medicine field. This study examines HTAs of an early clinical example of Personalized Medicine (gene expression profile tests [GEP] for breast cancer prognosis), aiming to: (i) identify ELSI; (ii) assess whether ELSI are implicitly or explicitly addressed; and (iii) report the methodology used for ELSI integration. A systematic search for HTAs (January 2004 to September 2012) was followed by descriptive and qualitative content analysis. Seventeen HTAs for GEP were retrieved. Only three (18%) explicitly presented ELSI, and only one reported methodology. However, all of the HTAs included implicit ELSI. Eight themes of implicit and explicit ELSI were identified. "Classical" ELSI, including privacy, informed consent, and concerns about limited patient/clinician genetic literacy, were always presented explicitly. Some ELSI, including the need to understand how individual patients' risk tolerances affect clinical decision-making after reception of GEP results, were presented both explicitly and implicitly in HTAs. Others, such as concern about evidentiary deficiencies for the clinical utility of GEP tests, occurred only implicitly. Despite the wide variety of important ELSI raised, these were rarely explicitly addressed in HTAs. Explicit treatment would increase their accessibility to decision-makers and may augment HTA efficiency, maximizing their utility. This is particularly important as complex Personalized Medicine applications rapidly expand choices for patients, clinicians, and healthcare systems.

  5. Systemic Blockade of D2-Like Dopamine Receptors Facilitates Extinction of Conditioned Fear in Mice

    ERIC Educational Resources Information Center

    Ponnusamy, Ravikumar; Nissim, Helen A.; Barad, Mark

    2005-01-01

    Extinction of conditioned fear in animals is the explicit model of behavior therapy for human anxiety disorders, including panic disorder, obsessive-compulsive disorder, and post-traumatic stress disorder. Based on previous data indicating that fear extinction in rats is blocked by quinpirole, an agonist of dopamine D2 receptors, we hypothesized…

  6. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and for defining model quantity types and quantity units. It supports explicit definition of model input, output, and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way, complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both a language tutorial and the formal language syntax and semantic description.
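    OOBSML's actual syntax is not shown in the abstract, so the component ideas it lists (explicit state, inheritance, behaviour polymorphism) can only be sketched here in Python with hypothetical names:

```python
class ModelComponent:
    """Base component with an explicit state dictionary; subclasses
    define their own rate laws in step()."""
    def __init__(self):
        self.state = {}

    def step(self, inputs, dt):
        raise NotImplementedError

class FirstOrderCompartment(ModelComponent):
    """dC/dt = inflow - k*C: a one-state building block."""
    def __init__(self, k, c0=0.0):
        super().__init__()
        self.k = k
        self.state = {"C": c0}

    def step(self, inputs, dt):
        c = self.state["C"]
        c += dt * (inputs["inflow"] - self.k * c)  # explicit Euler step
        self.state["C"] = c
        return {"C": c}

class EliminatingCompartment(FirstOrderCompartment):
    """Behaviour polymorphism: the subclass overrides the rate law
    with a saturable elimination term."""
    def step(self, inputs, dt):
        c = self.state["C"]
        c += dt * (inputs["inflow"] - self.k * c / (1.0 + c))
        self.state["C"] = c
        return {"C": c}

comp = FirstOrderCompartment(k=0.1, c0=1.0)
out = comp.step({"inflow": 0.0}, dt=1.0)
assert abs(out["C"] - 0.9) < 1e-12
```

    Components built this way can be wired together into the multilevel hierarchies the abstract describes, each exposing only its declared inputs, outputs, and state.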

  7. Linking Geomechanical Models with Observations of Microseismicity during CCS Operations

    NASA Astrophysics Data System (ADS)

    Verdon, J.; Kendall, J.; White, D.

    2012-12-01

    During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore, the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance for guaranteeing secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes; this is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of the surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models, and such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in field-scale mechanical models have decameter scales, while rupture sizes for microseismic events are typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger-scale faults have been identified, they can be included explicitly in the geomechanical model, and where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities.
This can be done in a number of ways, including the use of measures of the closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criterion) or for planes of arbitrary orientation (the fracture potential). Inelastic deformation may be incorporated within the constitutive models of the mechanical model itself in the form of plastic deformation criteria. Under such a system, yield, plastic deformation, and strain hardening/weakening can be incorporated explicitly into the mechanical model, with the assumption that the onset of inelastic processes corresponds with the onset of microseismicity within a particular element. Alternatively, an elastic geomechanical model may be used, and the resulting stress states post-processed for a microseismicity analysis. In this paper we focus on CO2 injection for CCS and Enhanced Oil Recovery in the Weyburn Field, Canada. We generate field-scale geomechanical models to simulate the response to CO2 injection. We compare observations of microseismicity to the predictions made by the models, showing how geomechanical models can improve interpretation and understanding of microseismic observations, and how microseismic observations can be used to ground-truth the models (a model whose predictions match observations can be deemed more reliable than one whose predictions do not). By tuning material properties within acceptable ranges, we are able to find models that match microseismic and other geophysical observations most accurately.
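    The "closeness to failure" measures mentioned above can be illustrated with a Mohr-Coulomb proximity ratio. The stress values and strength parameters below are hypothetical, chosen only to show the calculation:

```python
import numpy as np

def mohr_coulomb_margin(sigma1, sigma3, cohesion, friction_angle_deg):
    """Proximity of a stress state to the Mohr-Coulomb failure envelope.

    sigma1, sigma3: max/min effective principal stresses (compression
    positive, Pa). Returns the ratio of the Mohr circle radius to the
    radius at failure for the same mean stress: a value approaching 1
    means the element is close to shear failure (and, by the assumption
    in the text, close to emitting microseismicity).
    """
    phi = np.radians(friction_angle_deg)
    centre = 0.5 * (sigma1 + sigma3)  # centre of the Mohr circle
    radius = 0.5 * (sigma1 - sigma3)  # maximum shear stress
    # Radius of the largest circle at this centre that just touches
    # the failure line tau = c + sigma_n * tan(phi):
    radius_fail = cohesion * np.cos(phi) + centre * np.sin(phi)
    return radius / radius_fail

# Hypothetical element: 30 MPa / 12 MPa effective principal stresses,
# 5 MPa cohesion, 30 degree friction angle.
fp = mohr_coulomb_margin(30e6, 12e6, 5e6, 30.0)
assert 0.0 < fp < 1.0  # stressed but not yet at failure
```

    Post-processing an elastic model then amounts to evaluating this ratio in every element and flagging those approaching 1 as likely microseismic sources.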

  8. Topological entanglement entropy with a twist.

    PubMed

    Brown, Benjamin J; Bartlett, Stephen D; Doherty, Andrew C; Barrett, Sean D

    2013-11-27

    Defects in topologically ordered models have interesting properties that are reminiscent of the anyonic excitations of the models themselves. For example, dislocations in the toric code model are known as twists and possess properties that are analogous to Ising anyons. We strengthen this analogy by using the topological entanglement entropy as a diagnostic tool to identify properties of both defects and excitations in the toric code. Specifically, we show, through explicit calculation, that the toric code model including twists and dyon excitations has the same quantum dimensions, the same total quantum dimension, and the same fusion rules as an Ising anyon model.

  9. Regulatory T cell effects in antitumor laser immunotherapy: a mathematical model and analysis

    NASA Astrophysics Data System (ADS)

    Dawkins, Bryan A.; Laverty, Sean M.

    2016-03-01

    Regulatory T cells (Tregs) have tremendous influence on treatment outcomes in patients receiving immunotherapy for cancerous tumors. We present a mathematical model incorporating the primary cellular and molecular components of antitumor laser immunotherapy. We explicitly model developmental classes of dendritic cells (DCs), cytotoxic T cells (CTLs), primary and metastatic tumor cells, and tumor antigen. Regulatory T cells have been shown to kill antigen-presenting cells, to influence dendritic cell maturation and migration, to kill activated killer CTLs in the tumor microenvironment, and to influence CTL proliferation. Since Tregs affect explicitly modeled cells, but we do not explicitly model the dynamics of Tregs themselves, we use model parameters to analyze the effects of Treg immunosuppressive activity. We outline a systematic method for assigning clinical outcomes to model simulations and use this method to associate simulated patient treatment outcome with Treg activity.

  10. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, even without a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speed-up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
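    The explicit-versus-implicit trade-off driven by stiffness can be illustrated on a toy stiff ODE using SciPy's general-purpose integrators (these are not CAM-SE's solvers; the problem below is a hypothetical stand-in for a fast/slow scale split):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff test problem: y' = -1000*(y - cos(t)). After a fast transient
# the solution just tracks cos(t), but the 1000 rate still limits the
# stable step size of any explicit method.
def rhs(t, y):
    return -1000.0 * (y - np.cos(t))

t_span, y0 = (0.0, 2.0), [0.0]

# Explicit Runge-Kutta: stability forces many small steps.
explicit = solve_ivp(rhs, t_span, y0, method="RK45", rtol=1e-6, atol=1e-9)
# Implicit BDF: takes large stable steps once the transient has decayed.
implicit = solve_ivp(rhs, t_span, y0, method="BDF", rtol=1e-6, atol=1e-9)

assert explicit.success and implicit.success
# The implicit solver needs far fewer steps on this stiff problem,
# at the cost of solving a (here trivial) linear system per step.
assert implicit.t.size < explicit.t.size
```

    The same logic underlies the recommendation above: the implicit approach pays off when stiffness or scale separation throttles the explicit step size, and loses when linear iterations grow faster than the step-size gain.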

  11. Incorporating microbes into large-scale biogeochemical models

    NASA Astrophysics Data System (ADS)

    Allison, S. D.; Martiny, J. B.

    2008-12-01

    Micro-organisms, including Bacteria, Archaea, and Fungi, control major processes throughout the Earth system. Recent advances in microbial ecology and microbiology have revealed an astounding level of genetic and metabolic diversity in microbial communities. However, a framework for interpreting the meaning of this diversity has lagged behind the initial discoveries. Microbial communities have yet to be included explicitly in any major biogeochemical models in terrestrial ecosystems, and have only recently broken into ocean models. Although simplification of microbial communities is essential in complex systems, omission of community parameters may seriously compromise model predictions of biogeochemical processes. Two key questions arise from this tradeoff: 1) When and where must microbial community parameters be included in biogeochemical models? 2) If microbial communities are important, how should they be simplified, aggregated, and parameterized in models? To address these questions, we conducted a meta-analysis to determine if microbial communities are sensitive to four environmental disturbances that are associated with global change. In all cases, we found that community composition changed significantly following disturbance. However, the implications for ecosystem function were unclear in most of the published studies. Therefore, we developed a simple model framework to illustrate the situations in which microbial community changes would affect rates of biogeochemical processes. We found that these scenarios could be quite common, but powerful predictive models cannot be developed without much more information on the functions and disturbance responses of microbial taxa. Small-scale models that explicitly incorporate microbial communities also suggest that process rates strongly depend on microbial interactions and disturbance responses. 
The challenge is to scale up these models to make predictions at the ecosystem and global scales based on measurable parameters. We argue that meeting this challenge will require a coordinated effort to develop a series of nested models at scales ranging from the micron to the globe in order to optimize the tradeoff between model realism and feasibility.
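    A toy two-pool sketch can show why explicit microbial biomass changes predictions: decomposition depends on biomass B as well as substrate C, so a disturbance that reduces B slows carbon loss in a way first-order (substrate-only) models cannot capture. All parameters here are illustrative, not taken from the meta-analysis:

```python
# Minimal substrate (C) / microbial biomass (B) model with a
# biomass-dependent, Michaelis-Menten-style uptake term.
def step(C, B, dt=0.1, vmax=0.02, km=50.0, cue=0.3, death=0.002):
    decomp = vmax * B * C / (km + C)  # uptake scales with biomass
    dC = -decomp + death * B          # dead microbes return to substrate
    dB = cue * decomp - death * B     # growth at carbon-use efficiency
    return C + dt * dC, B + dt * dB

def run(B0, steps=2000):
    """Integrate from C=100 with initial biomass B0; return final C."""
    C, B = 100.0, B0
    for _ in range(steps):
        C, B = step(C, B)
    return C

# A disturbance that halves microbial biomass leaves more substrate
# undecomposed over the same interval.
assert run(B0=2.0) < run(B0=1.0)
```

    Scaling such mechanisms up, as argued above, requires deciding which of these microbial parameters are measurable at ecosystem and global scales.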

  12. Knowledge representation to support reasoning based on multiple models

    NASA Technical Reports Server (NTRS)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model-Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development; thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit, including the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. Third, the data and knowledge are separated from the inferencing and equation-solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  13. On constitutive functions for hindered settling velocity in 1-D settler models: Selection of appropriate model structure.

    PubMed

    Torfs, Elena; Balemans, Sophie; Locatelli, Florent; Diehl, Stefan; Bürger, Raimund; Laurent, Julien; François, Pierre; Nopens, Ingmar

    2017-03-01

    Advanced 1-D models for Secondary Settling Tanks (SSTs) explicitly account for several phenomena that influence the settling process (such as hindered settling and compression settling). For each of these phenomena a valid mathematical expression needs to be selected and its parameters calibrated to obtain a model that can be used for operation and control. This is, however, a challenging task as these phenomena may occur simultaneously. Therefore, the presented work evaluates several available expressions for hindered settling based on long-term batch settling data. Specific attention is paid to the behaviour of these hindered settling functions in the compression region in order to evaluate how the modelling of sludge compression is influenced by the choice of a certain hindered settling function. The analysis shows that the exponential hindered settling forms, which are most commonly used in traditional SST models, not only account for hindered settling but partly lump other phenomena (compression) as well. This makes them unsuitable for advanced 1-D models that explicitly include each phenomenon in a modular way. A power-law function is shown to be more appropriate to describe the hindered settling velocity in advanced 1-D SST models.
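    The contrast between the two hindered-settling forms can be sketched as follows. The parameter values are illustrative, not calibrated values from the paper:

```python
import numpy as np

def vesilind(X, v0=6.0, r_v=0.5):
    """Exponential (Vesilind-type) hindered settling velocity [m/h]
    at sludge concentration X [kg/m3]. Parameters are illustrative."""
    return v0 * np.exp(-r_v * X)

def power_law(X, v0=6.0, X_bar=3.5, q=2.0):
    """Power-law hindered settling form: decays more slowly at high
    concentrations, leaving compression to a separate model term.
    Parameters are again illustrative only."""
    return v0 / (1.0 + (X / X_bar) ** q)

X = np.linspace(0.0, 10.0, 101)
assert np.all(np.diff(vesilind(X)) < 0)  # both are monotone decreasing
assert np.all(np.diff(power_law(X)) < 0)

# At high concentration the exponential form has already collapsed
# toward zero (implicitly lumping compression into hindered settling),
# while the power law leaves room for an explicit compression term:
assert power_law(10.0) > 10 * vesilind(10.0)
```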

  14. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  15. A Naturalistic Inquiry into Praxis When Education Instructors Use Explicit Metacognitive Modeling

    ERIC Educational Resources Information Center

    Shannon, Nancy Gayle

    2014-01-01

    This naturalistic inquiry brought together six education instructors in one small teacher preparation program to explore what happens to educational instructors' praxis when the education instructors use explicit metacognitive modeling to reveal their thinking behind their pedagogical decision-making. The participants, while teaching an…

  16. Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach

    USGS Publications Warehouse

    Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy

    2013-01-01

    Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
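
The conditional autoregressive (CAR) structure mentioned above can be sketched with a small precision matrix. This is a generic proper-CAR illustration, not the BBS model itself; the adjacency and parameters are hypothetical:

```python
# Proper-CAR precision matrix Q = tau * (D - rho * W) for four regions on a
# line (adjacency 1-2, 2-3, 3-4); tau and rho are illustrative values.
tau, rho = 1.0, 0.9
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(W)
D = [[(sum(W[i]) if i == j else 0) for j in range(n)] for i in range(n)]
Q = [[tau * (D[i][j] - rho * W[i][j]) for j in range(n)] for i in range(n)]

# With |rho| < 1 the matrix is strictly diagonally dominant, hence positive
# definite: spatial random effects for neighbouring regions are correlated,
# which is how information is borrowed across poorly sampled strata.
dominant = all(Q[i][i] > sum(abs(Q[i][j]) for j in range(n) if j != i)
               for i in range(n))
print(dominant)
```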

  17. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    NASA Astrophysics Data System (ADS)

    Zhou, Ruhong; Berne, Bruce J.

    2002-10-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.

  18. Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?

    PubMed Central

    Zhou, Ruhong; Berne, Bruce J.

    2002-01-01

    The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields. PMID:12242327

  19. Can a continuum solvent model reproduce the free energy landscape of a beta-hairpin folding in water?

    PubMed

    Zhou, Ruhong; Berne, Bruce J

    2002-10-01

    The folding free energy landscape of the C-terminal beta-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the beta-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native beta-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this beta-hairpin. Furthermore, the beta-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and approximately 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
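
The replica-exchange sampling used in these simulations rests on a Metropolis swap criterion between replicas at neighbouring temperatures. A minimal sketch of that acceptance rule (the energies and temperatures below are arbitrary examples, not values from the study):

```python
import math

def swap_accept(E_i, E_j, T_i, T_j, kB=0.0019872):
    """Metropolis acceptance probability for exchanging configurations
    between replicas held at temperatures T_i and T_j (kB in kcal/mol/K)."""
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# Equal energies always swap; an unfavourable energy gap is accepted only
# rarely, which preserves each replica's Boltzmann ensemble while letting
# configurations diffuse across temperatures.
print(swap_accept(-100.0, -100.0, 300.0, 350.0))
print(swap_accept(-120.0, -100.0, 300.0, 350.0))
```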

  20. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    USGS Publications Warehouse

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
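
The core issue the hierarchical model addresses, imperfect detection of indirect sign, can be shown with a toy simulation. All numbers here are hypothetical and the model is deliberately simplified to two levels (occupancy and detection):

```python
import random

random.seed(1)

# A site is occupied with probability psi; given occupancy, indirect sign
# (e.g., fresh pellets) is found on any one survey with probability p.
psi, p, n_sites, n_surveys = 0.3, 0.6, 500, 3

occupied = [random.random() < psi for _ in range(n_sites)]
detected = [z and any(random.random() < p for _ in range(n_surveys))
            for z in occupied]

# The naive estimate (share of sites with detections) is biased low
# relative to true occupancy; a hierarchical model corrects this by
# modelling the detection process explicitly.
naive = sum(detected) / n_sites
truth = sum(occupied) / n_sites
print(naive, truth)
```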

  1. Parametrization of Backbone Flexibility in a Coarse-Grained Force Field for Proteins (COFFDROP) Derived from All-Atom Explicit-Solvent Molecular Dynamics Simulations of All Possible Two-Residue Peptides.

    PubMed

    Frembgen-Kesner, Tamara; Andrews, Casey T; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A; Jain, Aakash; Olayiwola, Oluwatoni J; Weishaar, Mitch R; Elcock, Adrian H

    2015-05-12

    Recently, we reported the parametrization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral, and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral, and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downward in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multidomain proteins connected by flexible linkers.
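
The iterative Boltzmann inversion (IBI) step used to fit the backbone terms has a simple closed form: the potential is corrected by kT times the log-ratio of the current and target distributions. A minimal sketch over tabulated bins (kT and the distributions are illustrative):

```python
import math

kT = 0.593  # kcal/mol near 298 K (illustrative)

def ibi_update(V, P_current, P_target):
    """One IBI step: where the simulated distribution overshoots the
    target, the potential is raised (more repulsive); where it
    undershoots, the potential is lowered."""
    return [v + kT * math.log(pc / pt)
            for v, pc, pt in zip(V, P_current, P_target)]

# A matched distribution is a fixed point of the iteration; an
# overpopulated bin gets a more repulsive potential next time around.
print(ibi_update([0.0, -0.5], [0.2, 0.5], [0.2, 0.5]))
print(ibi_update([0.0], [0.6], [0.3]))
```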

  2. Solar flare model atmospheres

    NASA Technical Reports Server (NTRS)

    Hawley, Suzanne L.; Fisher, George H.

    1993-01-01

    Solar flare model atmospheres computed under the assumption of energetic equilibrium in the chromosphere are presented. The models use a static, one-dimensional plane parallel geometry and are designed within a physically self-consistent coronal loop. Assumed flare heating mechanisms include collisions from a flux of non-thermal electrons and x-ray heating of the chromosphere by the corona. The heating by energetic electrons accounts explicitly for variations of the ionized fraction with depth in the atmosphere. X-ray heating of the chromosphere by the corona incorporates a flare loop geometry by approximating distant portions of the loop with a series of point sources, while treating the loop leg closest to the chromospheric footpoint in the plane-parallel approximation. Coronal flare heating leads to increased heat conduction, chromospheric evaporation and subsequent changes in coronal pressure; these effects are included self-consistently in the models. Cooling in the chromosphere is computed in detail for the important optically thick HI, CaII and MgII transitions using the non-LTE prescription in the program MULTI. Hydrogen ionization rates from x-ray photo-ionization and collisional ionization by non-thermal electrons are included explicitly in the rate equations. The models are computed in the 'impulsive' and 'equilibrium' limits, and in a set of intermediate 'evolving' states. The impulsive atmospheres have the density distribution frozen in pre-flare configuration, while the equilibrium models assume the entire atmosphere is in hydrostatic and energetic equilibrium. The evolving atmospheres represent intermediate stages where hydrostatic equilibrium has been established in the chromosphere and corona, but the corona is not yet in energetic equilibrium with the flare heating source. Thus, for example, chromospheric evaporation is still in the process of occurring.

  3. Multiscale modeling of porous ceramics using movable cellular automaton method

    NASA Astrophysics Data System (ADS)

    Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.

    2017-10-01

    The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in computational solid mechanics. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
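
The hand-off between scale levels can be sketched as follows: strengths at the finer scale are treated as Weibull-distributed, and their average becomes an effective property at the next level. The Weibull parameters below are hypothetical, not fitted values from the paper:

```python
import random
import statistics

random.seed(42)

# Hypothetical Weibull parameters (scale in MPa, shape) as would be
# estimated at the finest level from simulated compression tests on
# samples with randomly positioned small pores.
scale_MPa, shape = 120.0, 8.0

# Effective strength passed up to the next scale level: the average over
# many representative-volume samples drawn from the fitted distribution.
samples = [random.weibullvariate(scale_MPa, shape) for _ in range(2000)]
effective = statistics.mean(samples)
print(round(effective, 1))
```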

  4. A Minimal Three-Dimensional Tropical Cyclone Model.

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang

    2001-07-01

    A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes.
The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.

  5. Processing of false belief passages during natural story comprehension: An fMRI study.

    PubMed

    Kandylaki, Katerina D; Nagels, Arne; Tune, Sarah; Wiese, Richard; Bornkessel-Schlesewsky, Ina; Kircher, Tilo

    2015-11-01

    The neural correlates of theory of mind (ToM) are typically studied using paradigms which require participants to draw explicit, task-related inferences (e.g., in the false belief task). In a natural setup, such as listening to stories, false belief mentalizing occurs incidentally as part of narrative processing. In our experiment, participants listened to auditorily presented stories with false belief passages (implicit false belief processing) and immediately after each story answered comprehension questions (explicit false belief processing), while neural responses were measured with functional magnetic resonance imaging (fMRI). All stories included (among other situations) one false belief condition and one closely matched control condition. For the implicit ToM processing, we modeled the hemodynamic response during the false belief passages in the story and compared it to the hemodynamic response during the closely matched control passages. For implicit mentalizing, we found activation in typical ToM processing regions, that is, the angular gyrus (AG), superior medial frontal gyrus (SmFG), precuneus (PCUN), middle temporal gyrus (MTG) as well as in the inferior frontal gyrus (IFG) bilaterally. For explicit ToM, we only found AG activation. The conjunction analysis highlighted the left AG and MTG as well as the bilateral IFG as overlapping ToM processing regions for both implicit and explicit modes. Implicit ToM processing during listening to false belief passages recruits the left SmFG and bilateral PCUN in addition to the "mentalizing network" known from explicit processing tasks. © 2015 Wiley Periodicals, Inc.

  6. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving FE numerical substructures in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving FE numerical substructures. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time-step and large time delay.
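
The central difference method compared above is explicit: the next displacement follows algebraically from already-known states. A single-degree-of-freedom sketch (the oscillator parameters are illustrative, not from the study):

```python
import math

def cdm_step(x_prev, x_curr, f, m, c, k, dt):
    """One central-difference step for m*x'' + c*x' + k*x = f.
    Explicit: no equation solving beyond a scalar division."""
    a = m / dt**2 + c / (2.0 * dt)
    b = k - 2.0 * m / dt**2
    d = m / dt**2 - c / (2.0 * dt)
    return (f - b * x_curr - d * x_prev) / a

# Free vibration of an undamped oscillator with a 1 s natural period,
# started from x = 1, v = 0; after one period the displacement should
# return very close to 1 (dt is far below the stability limit 2/omega).
m, c, k, dt = 1.0, 0.0, (2.0 * math.pi) ** 2, 0.001
x_prev = 1.0
x_curr = 1.0 - 0.5 * (k / m) * dt**2   # Taylor start-up step
for _ in range(999):
    x_prev, x_curr = x_curr, cdm_step(x_prev, x_curr, 0.0, m, c, k, dt)
print(round(x_curr, 4))
```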

  7. Modeling Task Switching without Switching Tasks: A Short-Term Priming Account of Explicitly Cued Performance

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2005-01-01

    Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…

  8. Human-Assisted-Manufacturing Model Library

    DTIC Science & Technology

    2012-06-01

    Handwrite: 1 Word Continuous (H21, 0.04515); 1 Word Discontinuous (H25, 0.05375); 1 Word Upper Case (H35, 0.07525); 1 Character Continuous (H4, 0.0086)...also needs to include the relationship that says "this fastener connects these parts". If this information is not explicitly included in the design...fasteners (or structural interface definitions) have been attached. Note that if after doing this there is NOT a single tree, we can say that the

  9. The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation

    PubMed Central

    Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric

    2016-01-01

    Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265

  10. Density Functional Theory Calculation of pKa's of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2016-07-21

    The pKa's of substituted thiols are important for understanding their properties and reactivities in applications in chemistry, biochemistry, and material chemistry. For a collection of 175 different density functionals and the SMD implicit solvation model, the average errors in the calculated pKa's of methanethiol and ethanethiol are almost 10 pKa units higher than for imidazole. A test set of 45 substituted thiols with pKa's ranging from 4 to 12 has been used to assess the performance of 8 functionals with 3 different basis sets. As expected, the basis set needs to include polarization functions on the hydrogens and diffuse functions on the heavy atoms. Solvent cavity scaling was ineffective in correcting the errors in the calculated pKa's. Inclusion of an explicit water molecule that is hydrogen bonded with the H of the thiol group (in neutral) or S(-) (in thiolates) lowers error by an average of 3.5 pKa units. With one explicit water and the SMD solvation model, pKa's calculated with the M06-2X, PBEPBE, BP86, and LC-BLYP functionals are found to deviate from the experimental values by about 1.5-2.0 pKa units whereas pKa's with the B3LYP, ωB97XD and PBEVWN5 functionals are still in error by more than 3 pKa units. The inclusion of three explicit water molecules lowers the calculated pKa further by about 4.5 pKa units. With the B3LYP and ωB97XD functionals, the calculated pKa's are within one unit of the experimental values whereas most other functionals used in this study underestimate the pKa's. This study shows that the ωB97XD functional with the 6-31+G(d,p) and 6-311++G(d,p) basis sets, and the SMD solvation model with three explicit water molecules hydrogen bonded to the sulfur produces the best result for the test set (average error -0.11 ± 0.50 and +0.15 ± 0.58, respectively). The B3LYP functional also performs well (average error -1.11 ± 0.82 and -0.78 ± 0.79, respectively).
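
The conversion behind every reported value above is the standard relation between deprotonation free energy and pKa, which also explains why errors of a few kcal/mol translate into errors of several pKa units. A worked sketch (the 13.6 kcal/mol input is an arbitrary example):

```python
import math

R, T = 0.0019872, 298.15  # kcal/(mol K), K

def pka_from_dG(dG_kcal):
    """pKa from the aqueous deprotonation free energy of
    HA(aq) -> A-(aq) + H+(aq), with dG in kcal/mol."""
    return dG_kcal / (R * T * math.log(10.0))

# R*T*ln(10) is about 1.36 kcal/mol at 298 K, so an error of roughly
# 1.4 kcal/mol in the computed free energy already shifts the pKa by a
# full unit; a 10-unit pKa error corresponds to ~14 kcal/mol.
print(round(pka_from_dG(13.6), 2))
```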

  11. Warning systems in risk management.

    PubMed

    Paté-Cornell, M E

    1986-06-01

    A method is presented here that allows probabilistic evaluation and optimization of warning systems, and comparison of their performance and cost-effectiveness with those of other means of risk management. The model includes an assessment of the signals, and of human response, given the memory that people have kept of the quality of previous alerts. The trade-off between the rate of false alerts and the length of the lead time is studied to account for the long-term effects of "crying wolf" and the effectiveness of emergency actions. An explicit formulation of the system's benefits, including inputs from a signal model, a response model, and a consequence model, is given to allow optimization of the warning threshold and of the system's sensitivity.
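
The trade-off described, where frequent false alerts erode response while longer lead times improve emergency action, can be caricatured in a few lines. Everything here (functional forms and numbers) is a hypothetical stand-in for the paper's signal, response, and consequence models:

```python
def expected_net_benefit(p_detect, false_rate, lead_time,
                         damage=100.0, alert_cost=1.0):
    """Toy objective for tuning a warning threshold: detection, a
    'crying wolf' response penalty, and lead-time effectiveness are
    combined; all parameters are hypothetical."""
    response = max(0.0, 1.0 - 0.8 * false_rate)   # memory of past false alerts
    effectiveness = min(1.0, lead_time / 10.0)    # saturating in lead time
    return p_detect * response * effectiveness * damage - false_rate * alert_cost

# Sweep a notional threshold: a looser threshold detects more events with
# longer lead times but cries wolf more often; the optimum is interior.
candidates = [(0.95, 0.8, 12.0), (0.85, 0.4, 8.0), (0.6, 0.1, 4.0)]
best = max(candidates, key=lambda c: expected_net_benefit(*c))
print(best)
```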

  12. Heat Transfer and Fluid Mechanics Institute, 24th, Oregon State University, Corvallis, Ore., June 12-14, 1974, Proceedings

    NASA Technical Reports Server (NTRS)

    Davis, L. R. (Editor); Wilson, R. E.

    1974-01-01

    Recent theoretical and experimental studies in heat transfer and fluid mechanics, including some environmental protection investigations, are presented in a number of papers. Some of the topics covered include condensation heat transfer, a model of turbulent momentum and heat transfer at points of separation and reattachment, an explicit scheme for calculations of confined turbulent flows with heat transfer, heat transfer effects on a delta wing in subsonic flow, fluid mechanics of ocean outfalls, thermal plumes from industrial cooling water, a photochemical air pollution model for the Los Angeles air basin, and a turbulence model of diurnal variations in the planetary boundary layer. Individual items are announced in this issue.

  13. Current Status and Challenges of Atmospheric Data Assimilation

    NASA Astrophysics Data System (ADS)

    Atlas, R. M.; Gelaro, R.

    2016-12-01

    The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.

  14. Spatially explicit modeling of greater sage-grouse (Centrocercus urophasianus) habitat in Nevada and northeastern California: a decision-support tool for management

    USGS Publications Warehouse

    Coates, Peter S.; Casazza, Michael L.; Brussee, Brianne E.; Ricca, Mark A.; Gustafson, K. Benjamin; Overton, Cory T.; Sanchez-Chopitea, Erika; Kroger, Travis; Mauch, Kimberly; Niell, Lara; Howe, Kristy; Gardner, Scott; Espinosa, Shawn; Delehanty, David J.

    2014-01-01

    Greater sage-grouse (Centrocercus urophasianus, hereafter referred to as “sage-grouse”) populations are declining throughout the sagebrush (Artemisia spp.) ecosystem, including millions of acres of potential habitat across the West. Habitat maps derived from empirical data are needed given impending listing decisions that will affect both sage-grouse population dynamics and human land-use restrictions. This report presents the process for developing spatially explicit maps describing relative habitat suitability for sage-grouse in Nevada and northeastern California. Maps depicting habitat suitability indices (HSI) values were generated based on model-averaged resource selection functions informed by more than 31,000 independent telemetry locations from more than 1,500 radio-marked sage-grouse across 12 project areas in Nevada and northeastern California collected during a 15-year period (1998–2013). Modeled habitat covariates included land cover composition, water resources, habitat configuration, elevation, and topography, each at multiple spatial scales that were relevant to empirically observed sage-grouse movement patterns. We then present an example of how the HSI can be delineated into categories. Specifically, we demonstrate that the deviation from the mean can be used to classify habitat suitability into three categories of habitat quality (high, moderate, and low) and one non-habitat category. The classification resulted in an agreement of 93–97 percent for habitat versus non-habitat across a suite of independent validation datasets. Lastly, we provide an example of how space use models can be integrated with habitat models to help inform conservation planning. In this example, we combined probabilistic breeding density with a non-linear probability of occurrence relative to distance to nearest lek (traditional breeding ground) using count data to calculate a composite space use index (SUI). 
    The SUI was then classified into two categories of use (high and low-to-no) and intersected with the HSI categories to create potential management prioritization scenarios based on information about sage-grouse occupancy coupled with habitat suitability. This provided an example of a conservation planning application that uses the intersection of the spatially explicit HSI and empirically based SUI to identify potential spatially explicit strategies for sage-grouse management. Importantly, the reported categories for the HSI and SUI can be reclassified relatively easily to employ alternative conservation thresholds that may be identified through decision-making processes with stakeholders, managers, and biologists. Moreover, the HSI/SUI interface map can be updated readily as new data become available.

  15. FRAP Analysis: Accounting for Bleaching during Image Capture

    PubMed Central

    Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.

    2012-01-01

    The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
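A minimal sketch of folding acquisition bleaching into the recovery model: a single-exponential recovery multiplied by an assumed first-order fluorescence loss per captured image. The functional form and parameter names here are ours, not the paper's:

```python
import numpy as np

def frap_recovery_with_bleaching(t, f_inf, f0, k, beta, dt_frame):
    """Illustrative FRAP recovery curve with bleaching during image capture.
    t: time(s) since photobleach; f0/f_inf: initial/plateau fluorescence;
    k: recovery rate; beta: assumed fractional loss rate per captured frame;
    dt_frame: time between image captures."""
    n_frames = np.asarray(t, dtype=float) / dt_frame   # images acquired by time t
    recovery = f0 + (f_inf - f0) * (1.0 - np.exp(-k * np.asarray(t, dtype=float)))
    return recovery * np.exp(-beta * n_frames)         # acquisition bleaching
```

Setting beta = 0 recovers the standard model in which bleaching during capture is ignored.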

  16. Comparing approaches to spatially explicit ecosystem service modeling: a case study from the San Pedro River, Arizona

    USGS Publications Warehouse

    Bagstad, Kenneth J.; Semmens, Darius J.; Winthrop, Robert

    2013-01-01

    Although the number of ecosystem service modeling tools has grown in recent years, quantitative comparative studies of these tools have been lacking. In this study, we applied two leading open-source, spatially explicit ecosystem services modeling tools – Artificial Intelligence for Ecosystem Services (ARIES) and Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) – to the San Pedro River watershed in southeast Arizona, USA, and northern Sonora, Mexico. We modeled locally important services that both modeling systems could address – carbon, water, and scenic viewsheds. We then applied managerially relevant scenarios for urban growth and mesquite management to quantify ecosystem service changes. InVEST and ARIES use different modeling approaches and ecosystem services metrics; for carbon, metrics were more similar and results were more easily comparable than for viewsheds or water. However, findings demonstrate similar gains and losses of ecosystem services and conclusions when comparing effects across our scenarios. Results were more closely aligned for landscape-scale urban-growth scenarios and more divergent for a site-scale mesquite-management scenario. Follow-up studies, including testing in different geographic contexts, can improve our understanding of the strengths and weaknesses of these and other ecosystem services modeling tools as they move closer to readiness for supporting day-to-day resource management.

  17. A Three-Stage Model of Housing Search,

    DTIC Science & Technology

    1980-05-01

    Hanushek and Quigley, 1978) that recognize housing search as a transaction cost but rarely examine search behavior; and descriptive studies of search...explicit mobility models that have recently appeared in the literature (Speare et al., 1975; Hanushek and Quigley, 1978; Brummell, 1979). Although...1978; Hanushek and Quigley, 1978; Cronin, 1978). By explicitly assigning dollar values, the economic models attempt to obtain an objective measure of

  18. Comparative functional neuroanatomy between implicit and explicit memory tasks under negative emotional condition in schizophrenia.

    PubMed

    Song, Xiao-Li; Kim, Gwang-Won; Moon, Chung-Man; Jeong, Gwang-Woo

    To evaluate the brain activation patterns in response to negative emotion during implicit and explicit memory in patients with schizophrenia. Fourteen patients with schizophrenia and 14 healthy controls were included in this study. The 3.0T fMRI was obtained while the subjects performed the implicit and explicit retrievals with unpleasant words. Different predominant brain activation areas were observed during the implicit and explicit retrievals with unpleasant words, suggesting differential neural mechanisms between implicit and explicit memory tasks associated with negative emotional processing in schizophrenia. Copyright © 2017. Published by Elsevier Inc.

  19. The Anomalous Accretion Disk of the Cataclysmic Variable RW Sextantis

    NASA Astrophysics Data System (ADS)

    Linnell, Albert P.; Godon, P.; Hubeny, I.; Sion, E. M.; Szkody, P.

    2011-01-01

    The standard model for stable Cataclysmic Variable (CV) accretion disks (Frank, King and Raine 1992) derives an explicit analytic expression for the disk effective temperature as a function of radial distance from the white dwarf (WD). That model specifies that the effective temperature, Teff(R), varies as a combination of parameters raised to the 0.25 power, where the combination includes R, the mass transfer rate M(dot), and other parameters. It is well known that fits of standard model synthetic spectra to observed CV spectra find almost no instances of agreement. We have derived a generalized expression for the radial temperature gradient, which preserves the total disk luminosity as a function of M(dot) but permits a different exponent from the theoretical value of 0.25, and have applied it to RW Sex (Linnell et al. 2010, ApJ, 719, 271). We find an excellent fit to observed FUSE and IUE spectra for an exponent of 0.125, curiously close to 1/2 the theoretical value. Our annulus synthetic spectra, combined to represent the accretion disk, were produced with the program TLUSTY, were non-LTE, and included H, He, C, Mg, Al, Si, and Fe as explicit ions. We illustrate our results with a plot showing the failure to fit RW Sex for a range of M(dot) values, our model fit to the observations, and a chi-squared plot showing the selection of the exponent 0.125 as the best fit for the M(dot) range shown. (For the final model parameters see the paper cited.)
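The generalized temperature profile can be sketched by raising the standard-model radial factor to an adjustable exponent (0.25 recovers the standard model; 0.125 is the RW Sex best fit) and rescaling so the total disk luminosity is preserved. The normalization scheme below is our construction following the abstract's description, not the paper's exact formulation:

```python
import numpy as np

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def disk_teff(r, r_wd, l_disk, eps):
    """Radial effective-temperature profile proportional to the standard
    factor [(R_wd/R)^3 * (1 - sqrt(R_wd/R))] raised to an adjustable
    exponent eps (0.25 = standard model), rescaled so the disk luminosity
    L = integral of 2*sigma*T^4 * 2*pi*R dR equals l_disk."""
    shape = ((r_wd / r) ** 3 * (1.0 - np.sqrt(r_wd / r))) ** eps
    flux = 2.0 * SIGMA_SB * shape**4 * 2.0 * np.pi * r       # both disk faces
    lum = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(r))  # trapezoid rule
    return shape * (l_disk / lum) ** 0.25
```

Because the rescaling is a single multiplicative factor on T, the total luminosity is the same for any exponent, which is the property the paper's generalized gradient preserves.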

  20. ATR performance modeling concepts

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.

  1. Analytical steady-state solutions for water-limited cropping systems using saline irrigation water

    NASA Astrophysics Data System (ADS)

    Skaggs, T. H.; Anderson, R. G.; Corwin, D. L.; Suarez, D. L.

    2014-12-01

    Due to the diminishing availability of good quality water for irrigation, it is increasingly important that irrigation and salinity management tools be able to target submaximal crop yields and support the use of marginal quality waters. In this work, we present a steady-state irrigated systems modeling framework that accounts for reduced plant water uptake due to root zone salinity. Two explicit, closed-form analytical solutions for the root zone solute concentration profile are obtained, corresponding to two alternative functional forms of the uptake reduction function. The solutions express a general relationship between irrigation water salinity, irrigation rate, crop salt tolerance, crop transpiration, and (using standard approximations) crop yield. Example applications are illustrated, including the calculation of irrigation requirements for obtaining targeted submaximal yields, and the generation of crop-water production functions for varying irrigation waters, irrigation rates, and crops. Model predictions are shown to be mostly consistent with existing models and available experimental data. Yet the new solutions possess advantages over available alternatives, including: (i) the solutions were derived from a complete physical-mathematical description of the system, rather than based on an ad hoc formulation; (ii) the analytical solutions are explicit and can be evaluated without iterative techniques; (iii) the solutions permit consideration of two common functional forms of salinity induced reductions in crop water uptake, rather than being tied to one particular representation; and (iv) the utilized modeling framework is compatible with leading transient-state numerical models.
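For background on the kind of steady-state relationship the paper generalizes, the classic leaching-requirement formula (after FAO Irrigation and Drainage Paper 29) links irrigation water salinity to a crop tolerance threshold. This is shown only as context; it is not one of the paper's new closed-form solutions:

```python
def leaching_requirement(ec_iw, ec_e_threshold):
    """Classic steady-state leaching requirement: the fraction of applied
    water that must drain below the root zone to hold average root-zone
    salinity at the crop's tolerance threshold.
    ec_iw: irrigation water salinity (dS/m);
    ec_e_threshold: crop salt-tolerance threshold (dS/m)."""
    return ec_iw / (5.0 * ec_e_threshold - ec_iw)
```

As expected, saltier irrigation water or a less tolerant crop raises the required leaching fraction; the paper's solutions extend this kind of steady-state reasoning to explicit root-zone concentration profiles and submaximal yields.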

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Allison, Steven D.; Davidson, Eric A.

    Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.
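A minimal example of the kind of microbial-explicit formulation the roadmap discusses is a two-pool model in which decomposition depends on microbial biomass via Michaelis-Menten kinetics. The structure is generic and all parameter values are illustrative, not from the paper:

```python
def microbial_step(soc, mic, dt, v_max=0.01, k_m=50.0, cue=0.4, death=0.001):
    """One forward-Euler step of a minimal microbial-explicit model.
    soc: soil organic carbon pool; mic: microbial biomass carbon.
    Decomposition = v_max * mic * soc / (k_m + soc) (Michaelis-Menten);
    a fraction `cue` (carbon-use efficiency) of decomposed C becomes
    biomass, the remainder is respired, and dead microbes return to SOC."""
    decomp = v_max * mic * soc / (k_m + soc)   # biomass-dependent decomposition
    d_soc = -decomp + death * mic              # loss to microbes, gain from necromass
    d_mic = cue * decomp - death * mic         # growth minus mortality
    return soc + dt * d_soc, mic + dt * d_mic
```

Unlike first-order decay used in traditional decomposition models, the decomposition rate here responds to microbial biomass, which is what makes such formulations sensitive to the micro-scale processes the roadmap aims to constrain with benchmarking data.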

  3. Self-Love or Other-Love? Explicit Other-Preference but Implicit Self-Preference

    PubMed Central

    Gebauer, Jochen E.; Göritz, Anja S.; Hofmann, Wilhelm; Sedikides, Constantine

    2012-01-01

    Do humans prefer the self even over their favorite other person? This question has pervaded philosophy and social-behavioral sciences. Psychology’s distinction between explicit and implicit preferences calls for a two-tiered solution. Our evolutionarily-based Dissociative Self-Preference Model offers two hypotheses. Other-preferences prevail at an explicit level, because they convey caring for others, which strengthens interpersonal bonds–a major evolutionary advantage. Self-preferences, however, prevail at an implicit level, because they facilitate self-serving automatic behavior, which favors the self in life-or-die situations–also a major evolutionary advantage. We examined the data of 1,519 participants, who completed an explicit measure and one of five implicit measures of preferences for self versus favorite other. The results were consistent with the Dissociative Self-Preference Model. Explicitly, participants preferred their favorite other over the self. Implicitly, however, they preferred the self over their favorite other (be it their child, romantic partner, or best friend). Results are discussed in relation to evolutionary theorizing on self-deception. PMID:22848605

  4. ESL Elementary Teachers' Use of Children's Picture Books to Initiate Explicit Instruction of Reading Comprehension Strategies

    ERIC Educational Resources Information Center

    Al Khaiyali, Al Tiyb S.

    2014-01-01

    Reading comprehension instruction has been recognized as a key factor in developing any reading and literacy program. Therefore, many attempts were devoted to improve explicit comprehension strategy instruction at different school levels and fields including EFL and ESL. Despite these efforts, explicit comprehension instruction is still drought…

  5. The Effect of Explicit Instruction on Strategic Reading in a Literacy Methods Course

    ERIC Educational Resources Information Center

    Iwai, Yuko

    2016-01-01

    This study examined the impact of explicit instruction on metacognitive reading strategies among 18 K-8 teacher candidates in a literacy methods course. They received weekly explicit intervention about these strategies over one semester. Collected data included pre- and post-scores of the Metacognitive Awareness of Reading Strategies Inventory…

  6. Critical design elements of e-health applications for users with severe mental illness: singular focus, simple architecture, prominent contents, explicit navigation, and inclusive hyperlinks.

    PubMed

    Rotondi, Armando J; Eack, Shaun M; Hanusa, Barbara H; Spring, Michael B; Haas, Gretchen L

    2015-03-01

    E-health applications are becoming integral components of general medical care delivery models and emerging for mental health care. Few exist for treatment of those with severe mental illness (SMI). In part, this is due to a lack of models to design such technologies for persons with cognitive impairments and lower technology experience. This study evaluated the effectiveness of an e-health design model for persons with SMI termed the Flat Explicit Design Model (FEDM). Persons with schizophrenia (n = 38) performed tasks to evaluate the effectiveness of 5 Web site designs: 4 were prominent public Web sites, and 1 was designed according to the FEDM. Linear mixed-effects regression models were used to examine differences in usability between the Web sites. Omnibus tests of between-site differences were conducted, followed by post hoc pairwise comparisons of means to examine specific Web site differences when omnibus tests reached statistical significance. The Web site designed using the FEDM required less time to find information, had a higher success rate, and was rated easier to use and less frustrating than the other Web sites. The home page design of one of the other Web sites provided the best indication to users about a Web site's contents. The results are consistent with and were used to expand the FEDM. The FEDM provides evidence-based guidelines to design e-health applications for persons with SMI, including: minimize an application's layers or hierarchy, use explicit text, employ navigational memory aids, group hyperlinks in 1 area, and minimize the number of disparate subjects an application addresses. © The Author 2013. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. A Metacognitive Approach to "Implicit" and "Explicit" Evaluations: Comment on Gawronski and Bodenhausen (2006)

    ERIC Educational Resources Information Center

    Petty, Richard E.; Brinol, Pablo

    2006-01-01

    Comments on the article by B. Gawronski and G. V. Bodenhausen (see record 2006-10465-003). A metacognitive model (MCM) is presented to describe how automatic (implicit) and deliberative (explicit) measures of attitudes respond to change attempts. The model assumes that contemporary implicit measures tap quick evaluative associations, whereas…

  8. A Watershed-based spatially-explicit demonstration of an Integrated Environmental Modeling Framework for Ecosystem Services in the Coal River Basin (WV, USA)

    EPA Science Inventory

    We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quant...

  9. USING THE ECLPSS SOFTWARE ENVIRONMENT TO BUILD A SPATIALLY EXPLICIT COMPONENT-BASED MODEL OF OZONE EFFECTS ON FOREST ECOSYSTEMS. (R827958)

    EPA Science Inventory

    We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ...

  10. Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets

    PubMed Central

    Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.

    2018-01-01

    Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r^2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r^2 = 0.96) and implicit targets (r^2 = 0.89), but not as well-described for explicit targets without vision (r^2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
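Quantities like those reported above can be computed with a short sketch: the Shannon formulation of the index of difficulty and a least-squares fit of MT = a + b·ID. Function names are ours; the paper's exact formulation of ID for force targets may differ:

```python
import numpy as np

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(A / W + 1), with target amplitude A and tolerance W."""
    return np.log2(amplitude / width + 1.0)

def fit_fitts(ids, movement_times):
    """Least-squares fit of MT = a + b * ID; returns (a, b, r_squared)."""
    b, a = np.polyfit(ids, movement_times, 1)   # slope, intercept
    mt = np.asarray(movement_times, dtype=float)
    pred = a + b * np.asarray(ids, dtype=float)
    ss_res = np.sum((mt - pred) ** 2)
    ss_tot = np.sum((mt - mt.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

The r^2 of the fitted line is the statistic used in the abstract to compare explicit and implicit target conditions.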

  11. Intrusive effects of implicitly processed information on explicit memory.

    PubMed

    Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven

    2002-02-01

    This study described the interference of implicitly processed information on the memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitate implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality for the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information, in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."

  12. A masked negative self-esteem? Implicit and explicit self-esteem in patients with Narcissistic Personality Disorder.

    PubMed

    Marissen, Marlies A E; Brouwer, Marlies E; Hiemstra, Annemarie M F; Deen, Mathijs L; Franken, Ingmar H A

    2016-08-30

    The mask model of narcissism states that the narcissistic traits of patients with NPD are the result of a compensatory reaction to underlying ego fragility. This model assumes that high explicit self-esteem masks low implicit self-esteem. However, research on narcissism has predominantly focused on non-clinical participants, and data derived from patients diagnosed with Narcissistic Personality Disorder (NPD) remain scarce. Therefore, the goal of the present study was to test the mask model hypothesis of narcissism among patients with NPD. Male patients with NPD were compared to patients with other PDs and healthy participants on implicit and explicit self-esteem. NPD patients did not differ in levels of explicit and implicit self-esteem compared to both the psychiatric and the healthy control group. Overall, the current study found no evidence in support of the mask model of narcissism among a clinical group. This implies that it may not be relevant for clinicians to focus treatment of NPD on an underlying negative self-esteem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    NASA Astrophysics Data System (ADS)

    Swales, Dustin J.; Pincus, Robert; Bodas-Salcedo, Alejandro

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  14. Neuman systems model-based research: an integrative review project.

    PubMed

    Fawcett, J; Giangrande, S K

    2001-07-01

    The project integrated Neuman systems model-based research literature. Two hundred published studies were located. This article is limited to the 59 full journal articles and 3 book chapters identified. A total of 37% focused on prevention interventions; 21% on perception of stressors; and 10% on stressor reactions. Only 50% of the reports explicitly linked the model with the study variables, and 61% did not include conclusions regarding model utility or credibility. No programs of research were identified. Academic courses and continuing education workshops are needed to help researchers design programs of Neuman systems model-based research and better explicate linkages between the model and the research.

  15. Using Abstraction in Explicitly Parallel Programs.

    DTIC Science & Technology

    1991-07-01

    However, we only rely on sequential consistency of memory operations, including reads, writes and any synchronization primitives provided by the...explicit synchronization primitives. This demonstrates the practical power of sequentially consistent memory, as opposed to weaker models of memory that...a small set of synchronization primitives, all procedures have non-waiting specifications. This is in contrast to richer process-oriented

  16. Effects of a Graphic Organizer Training Package on the Persuasive Writing of Middle School Students with Autism

    ERIC Educational Resources Information Center

    Bishop, Anne E.; Sawyer, Mary; Alber-Morgan, Sheila R.; Boggs, Melissa

    2015-01-01

    This study examined the effects of a graphic organizer intervention package on the quality and quantity of persuasive writing of three middle school students with Autism Spectrum Disorder (ASD). The intervention included a 3-day training which consisted of explicit instruction on the components of a persuasive essay, modeling and guided practice…

  17. Anyons in an electromagnetic field and the Bargmann-Michel-Telegdi equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, S.

    1995-05-15

    The Lagrangian model for anyons, presented earlier, is extended to include interactions with an external, homogeneous electromagnetic field. Explicit electric and magnetic moment terms for the anyon are introduced in the Lagrangian. The (2+1)-dimensional Bargmann-Michel-Telegdi equation as well as the correct value (2) of the gyromagnetic ratio is rederived, in the Hamiltonian framework.

  18. Probing the cross-effect of strains in non-linear elasticity of nearly regular polymer networks by pure shear deformation.

    PubMed

    Katashima, Takuya; Urayama, Kenji; Chung, Ung-il; Sakai, Takamasa

    2015-05-07

    The pure shear deformation of Tetra-polyethylene glycol gels reveals the presence of an explicit cross-effect of strains in the strain energy density function, even for polymer networks with a nearly regular structure containing no appreciable amount of structural defects such as trapped entanglements. This result is in contrast to the expectation of the classical Gaussian network model (neo-Hookean model), i.e., the vanishing of the cross-effect in regular networks with no trapped entanglements. The results show that (1) the cross-effect of strains does not depend on the network-strand length; (2) the cross-effect is not affected by the presence of non-network strands; (3) the cross-effect is proportional to the network polymer concentration, including both elastically effective and ineffective strands; and (4) in real polymer networks the cross-effect is expected to vanish only in the limit of zero network concentration. These features indicate that real polymer networks with regular network structures have an explicit cross-effect of strains, which originates from some interaction between network strands (other than the entanglement effect) such as nematic, topological, or excluded-volume interactions.
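For reference, the absence of a cross-effect in the classical model can be stated compactly (our paraphrase of the textbook result): the neo-Hookean strain energy is additively separable in the principal stretches, so its mixed derivative with respect to two stretches vanishes,

```latex
W_{\mathrm{NH}} = \frac{G}{2}\left(\lambda_1^2 + \lambda_2^2 + \lambda_3^2 - 3\right),
\qquad
\frac{\partial^2 W_{\mathrm{NH}}}{\partial \lambda_1 \, \partial \lambda_2} = 0 .
```

A nonzero mixed derivative in the measured strain energy density function is precisely the "explicit cross-effect of strains" that the pure-shear experiments detect.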

  19. The Full Scope of Family Physicians' Work Is Not Reflected by Current Procedural Terminology Codes.

    PubMed

    Young, Richard A; Burge, Sandy; Kumar, Kaparaboyna Ashok; Wilson, Jocelyn

    2017-01-01

    The purpose of this study was to characterize the content of family physician (FP) clinic encounters, and to count the number of visits in which the FPs addressed issues not explicitly reportable by 99211 to 99215 and 99354 Current Procedural Terminology (CPT) codes with current reimbursement methods and based on examples provided in the CPT manual. The data collection instrument was modeled on the National Ambulatory Medical Care Survey. Trained assistants directly observed every other FP-patient encounter and recorded every patient concern, issue addressed by the physician (including care barriers related to health care systems and social determinants), and treatment ordered in clinics affiliated with 10 residencies of the Residency Research Network of Texas. A visit was deemed to include physician work that was not explicitly reportable if the number or nature of issues addressed exceeded the definitions or examples for 99205/99215 or 99214 + 99354 or a preventive service code, included the physician addressing health care system or social determinant issues, or included the care of a family member. In 982 physician-patient encounters, patients raised 517 different reasons for visit (total, 5278; mean, 5.4 per visit; range, 1 to 16) and the FPs addressed 509 different issues (total issues, 3587; mean, 3.7 per visit; range, 1 to 10). FPs managed 425 different medications, 18 supplements, and 11 devices. A mean of 3.9 chronic medications were continued per visit (range, 0 to 21) and 4.6 total medications were managed (range, 0 to 22). In 592 (60.3%) of the visits the FPs did work that was not explicitly reportable with available CPT codes: 582 (59.3%) addressed more numerous issues than explicitly reportable, 64 (6.5%) addressed system barriers, and 13 (1.3%) addressed concerns for other family members. 
FPs perform cognitive work in a majority of their patient encounters that is not explicitly reportable, either because the number of diagnoses addressed exceeds the CPT example number per code or because of the type of problems addressed, which has implications for the care of complex multi-morbid patients and the growth of the primary care workforce. To address these limitations, either the CPT codes and their associated rules should be updated to reflect the realities of family physicians' practices, or new billing and coding approaches should be developed. © Copyright 2017 by the American Board of Family Medicine.

  20. The Effects of Explicit Teaching of Strategies, Second-Order Concepts, and Epistemological Underpinnings on Students' Ability to Reason Causally in History

    ERIC Educational Resources Information Center

    Stoel, Gerhard L.; van Drie, Jannet P.; van Boxtel, Carla A. M.

    2017-01-01

    This article reports an experimental study on the effects of explicit teaching on 11th grade students' ability to reason causally in history. Underpinned by the model of domain learning, explicit teaching is conceptualized as multidimensional, focusing on strategies and second-order concepts to generate and verbalize causal explanations and…

  1. Investigating the predictive validity of implicit and explicit measures of motivation on condom use, physical activity and healthy eating.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2012-01-01

    The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.

  2. An explicit asymptotic model for the surface wave in a viscoelastic half-space based on applying Rabotnov's fractional exponential integral operators

    NASA Astrophysics Data System (ADS)

    Wilde, M. V.; Sergeeva, N. V.

    2018-05-01

An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
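The uniting of short-time and long-time expansions by a Padé approximant can be illustrated generically; the sketch below builds a [2/2] approximant from the Taylor coefficients of exp(x) as a stand-in series (the paper's viscoelastic kernel is not reproduced here):

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0, standing in for a short-time
# power series expansion
coeffs = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

# Build the [2/2] Pade approximant p(x)/q(x) from the series; the
# rational form extends the range of validity beyond the raw polynomial
p, q = pade(coeffs, 2)

x = 1.0
print(p(x) / q(x), math.exp(x))  # the rational fit tracks exp(x) closely
```

The same construction, applied to short-time and long-time series of the surface-wave response, gives a single rational model of the kind discussed in the abstract.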

  3. Optimal implicit 2-D finite differences to model wave propagation in poroelastic media

    NASA Astrophysics Data System (ADS)

    Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2016-08-01

Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media at a low-frequency range (<10 kHz). We validate the numerical solution by comparing it to an analytical-transient solution, obtaining clear seismic wavefields including fast P, slow P and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and that this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower-order optimal implicit FD, reducing computational cost while maintaining accuracy. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10^-8. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
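The tridiagonal systems arising from the implicit spatial differentiation can be solved in O(n) with Thomas' algorithm; a minimal generic sketch (not the authors' code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (length n-1),
    diagonal b (length n), super-diagonal c (length n-1) and
    right-hand side d, by forward elimination and back substitution.
    O(n), no pivoting: assumes the system is diagonally dominant."""
    n = len(d)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

One such solve per grid line and per derivative direction is the extra cost the implicit scheme pays for its improved accuracy and stability.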

  4. The Emergence of Organizing Structure in Conceptual Representation.

    PubMed

    Lake, Brenden M; Lawrence, Neil D; Tenenbaum, Joshua B

    2018-06-01

Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form, where the form could be a tree, ring, chain, grid, etc. (Kemp & Tenenbaum, 2008). Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Given that the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack intuitive description, as well as predict human property induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist.

  5. Luminance, Colour, Viewpoint and Border Enhanced Disparity Energy Model

    PubMed Central

    Martins, Jaime A.; Rodrigues, João M. F.; du Buf, Hans

    2015-01-01

    The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations, and how they are able to extract disparity. This model uses explicit cell parameters to mathematically determine preferred cell disparities, like spatial frequencies, orientations, binocular phases and receptive field positions. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population, which encodes disparity information implicitly. This allows the population to learn how to decode disparities, in a similar way to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas. PMID:26107954
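The core idea, that disparity is carried implicitly in complex cell responses rather than read off explicit cell parameters, can be sketched in 1-D with a single quadrature Gabor pair; all parameters below (wavelength, envelope width, test signal) are illustrative choices, not values from the article:

```python
import numpy as np

# A quadrature (even + odd) Gabor pair as one complex kernel
lam = 16.0                      # filter wavelength in pixels
omega = 2 * np.pi / lam
sigma = 8.0
u = np.arange(-24, 25)
gabor = np.exp(-u**2 / (2 * sigma**2)) * np.exp(1j * omega * u)

true_disparity = 3.0
x = np.arange(256, dtype=float)
left = np.cos(omega * x)                       # toy "left eye" signal
right = np.cos(omega * (x - true_disparity))   # right eye: shifted copy

x0 = 128  # probe location
c_left = np.dot(left[x0 + u], gabor)
c_right = np.dot(right[x0 + u], gabor)

# The binocular phase difference of the complex responses encodes the
# disparity implicitly; no preferred-disparity parameter is consulted
est = np.angle(c_right * np.conj(c_left)) / omega
print(round(est, 2))  # -> 3.0 (recovers the true shift in pixels)
```

A trained population, as in the article, would learn this decoding from many such responses instead of applying the phase formula directly.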

  6. On isometry anomalies in minimal 𝒩 = (0,1) and 𝒩 = (0,2) sigma models

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Cui, Xiaoyi; Shifman, Mikhail; Vainshtein, Arkady

    2016-09-01

The two-dimensional minimal supersymmetric sigma models with homogeneous target spaces G/H and chiral fermions of the same chirality are revisited. In particular, we look into the isometry anomalies in O(N) and CP(N - 1) models. These anomalies are generated by fermion loop diagrams which we explicitly calculate. In the case of O(N) sigma models the first Pontryagin class vanishes, so there is no global obstruction to the minimal 𝒩 = (0, 1) supersymmetrization of these models. We show that at the local level isometries in these models can be made anomaly free by specifying the counterterms explicitly. Thus, there are no obstructions to quantizing the minimal 𝒩 = (0, 1) models with the S^(N-1) = SO(N)/SO(N - 1) target space while preserving the isometries. This also includes CP(1) (equivalent to S^2), which is an exceptional case in the CP(N - 1) series. For the other CP(N - 1) models, the isometry anomalies cannot be rescued even locally; this leads us to a discussion of the relation between the geometric and gauged formulations of the CP(N - 1) models in order to compare the origin of the different anomalies. A dual formalism of the O(N) model is also given, in order to show the consistency of our isometry anomaly analysis in different formalisms. The concrete counterterms to be added, however, will be formalism dependent.

  7. Age effects on explicit and implicit memory

    PubMed Central

    Ward, Emma V.; Berry, Christopher J.; Shanks, David R.

    2013-01-01

    It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942

  8. High-Order/Low-Order methods for ocean modeling

    DOE PAGES

    Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...

    2015-06-01

    In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
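The implicit-explicit idea, treating the fast (stiff) term implicitly so the time step is not limited by the fast scale while the slow term stays explicit, can be illustrated on a scalar test problem (this toy equation and its parameters are ours, not the ocean model's):

```python
import math

# Test problem: u' = -1000*(u - cos t) - sin t, exact solution u = cos t.
# The stiff relaxation term plays the role of the fast barotropic scale.
lam = 1000.0
h = 0.01          # 5x larger than the explicit stability limit 2/lam
u, t = 1.0, 0.0
while t < 1.0 - 1e-12:
    t_new = t + h
    # IMEX Euler: implicit in the stiff term, explicit in the slow forcing
    u = (u + h * (lam * math.cos(t_new) - math.sin(t))) / (1.0 + lam * h)
    t = t_new
print(abs(u - math.cos(1.0)))  # small error despite the large step
```

A purely explicit Euler step at this h would blow up; the split keeps the scheme stable and accurate at time steps set by the slow dynamics, which is the property the HOLO formulation exploits.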

  9. Explicit robust schemes for implementation of a class of principal value-based constitutive models: Symbolic and numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
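The symbolic-derivation-plus-code-generation workflow can be sketched with modern tools; here a one-term Ogden strain-energy function is differentiated symbolically and emitted as Fortran using SymPy, as a stand-in for the MACSYMA routines described (mu and alpha are generic material constants, and the incompressibility pressure term is omitted):

```python
import sympy as sp

# One-term Ogden strain energy in principal stretches
l1, l2, l3, mu, alpha = sp.symbols('lambda1 lambda2 lambda3 mu alpha',
                                   positive=True)
W = (mu / alpha) * (l1**alpha + l2**alpha + l3**alpha - 3)

# Principal Cauchy stress contribution via symbolic differentiation:
# sigma_i = lambda_i * dW/dlambda_i (pressure term omitted)
sigma1 = sp.simplify(l1 * sp.diff(W, l1))
print(sigma1)  # mu*lambda1**alpha

# Automatic Fortran code generation, mirroring the symbolic-to-FORTRAN
# workflow in the abstract
print(sp.fcode(sigma1, assign_to='sigma1'))
```

The tangent stiffness follows the same pattern with second derivatives; removing the repeated-stretch singularities, as the paper does, requires taking symbolic limits before code generation.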

  10. Quantum morphogenesis: A variation on Thom's catastrophe theory

    NASA Astrophysics Data System (ADS)

    Aerts, Dirk; Czachor, Marek; Gabora, Liane; Kuna, Maciej; Posiewnik, Andrzej; Pykacz, Jarosław; Syty, Monika

    2003-05-01

Noncommutative propositions are characteristic of both quantum and nonquantum (sociological, biological, and psychological) situations. In a Hilbert space model, states, understood as correlations between all the possible propositions, are represented by density matrices. If the systems in question interact with their environment via feedback, their dynamics are nonlinear. Nonlinear evolutions of density matrices lead to the phenomenon of morphogenesis that may occur in noncommutative systems. Several explicit exactly solvable models are presented, including “birth and death of an organism” and “development of complementary properties.”

  11. Exploring the spatial distribution of light interception and photosynthesis of canopies by means of a functional-structural plant model.

    PubMed

    Sarlikioti, V; de Visser, P H B; Marcelis, L F M

    2011-04-01

At present most process-based models and the majority of three-dimensional models include simplifications of plant architecture that can compromise the accuracy of light interception simulations and, accordingly, canopy photosynthesis. The aim of this paper is to analyse canopy heterogeneity of an explicitly described tomato canopy in relation to temporal dynamics of horizontal and vertical light distribution and photosynthesis under direct- and diffuse-light conditions. Detailed measurements of canopy architecture, light interception and leaf photosynthesis were carried out on a tomato crop. These data were used for the development and calibration of a functional-structural tomato model. The model consisted of an architectural static virtual plant coupled with a nested radiosity model for light calculations and a leaf photosynthesis module. Different scenarios of horizontal and vertical distribution of light interception, incident light and photosynthesis were investigated under diffuse and direct light conditions. Simulated light interception showed a good correspondence to the measured values. Explicitly described leaf angles resulted in higher light interception in the middle of the plant canopy compared with fixed and ellipsoidal leaf-angle distribution models, although the total light interception remained the same. The fraction of light intercepted at a north-south orientation of rows differed from an east-west orientation by 10 % on winter days and 23 % on summer days. The horizontal distribution of photosynthesis differed significantly between the top, middle and lower canopy layers. Taking into account the vertical variation of leaf photosynthetic parameters in the canopy led to an approx. 8 % increase in simulated canopy photosynthesis. Leaf angles of heterogeneous canopies should be explicitly described, as they have a large impact on both light distribution and photosynthesis. In particular, the vertical variation of photosynthesis in the canopy is such that the experimental approach to photosynthesis measurements for model parameterization should be revised.

  12. Exploring the spatial distribution of light interception and photosynthesis of canopies by means of a functional–structural plant model

    PubMed Central

    Sarlikioti, V.; de Visser, P. H. B.; Marcelis, L. F. M.

    2011-01-01

Background and Aims At present most process-based models and the majority of three-dimensional models include simplifications of plant architecture that can compromise the accuracy of light interception simulations and, accordingly, canopy photosynthesis. The aim of this paper is to analyse canopy heterogeneity of an explicitly described tomato canopy in relation to temporal dynamics of horizontal and vertical light distribution and photosynthesis under direct- and diffuse-light conditions. Methods Detailed measurements of canopy architecture, light interception and leaf photosynthesis were carried out on a tomato crop. These data were used for the development and calibration of a functional–structural tomato model. The model consisted of an architectural static virtual plant coupled with a nested radiosity model for light calculations and a leaf photosynthesis module. Different scenarios of horizontal and vertical distribution of light interception, incident light and photosynthesis were investigated under diffuse and direct light conditions. Key Results Simulated light interception showed a good correspondence to the measured values. Explicitly described leaf angles resulted in higher light interception in the middle of the plant canopy compared with fixed and ellipsoidal leaf-angle distribution models, although the total light interception remained the same. The fraction of light intercepted at a north–south orientation of rows differed from an east–west orientation by 10 % on winter days and 23 % on summer days. The horizontal distribution of photosynthesis differed significantly between the top, middle and lower canopy layers. Taking into account the vertical variation of leaf photosynthetic parameters in the canopy led to an approx. 8 % increase in simulated canopy photosynthesis. Conclusions Leaf angles of heterogeneous canopies should be explicitly described, as they have a large impact on both light distribution and photosynthesis. In particular, the vertical variation of photosynthesis in the canopy is such that the experimental approach to photosynthesis measurements for model parameterization should be revised. PMID:21355008

  13. Generating Within-Plant Spatial Distributions of an Insect Herbivore Based on Aggregation Patterns and Per-Node Infestation Probabilities.

    PubMed

    Rincon, Diego F; Hoy, Casey W; Cañas, Luis A

    2015-04-01

Most predator-prey models extrapolate functional responses from small-scale experiments assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search in certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale up functional responses, but they would require the generation of explicit prey spatial distributions within plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework to understand predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics.
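A hypothetical sketch of this kind of generator: individuals are allocated across leaf nodal positions by a gamma-weighted multinomial, so that per-node probabilities set the vertical pattern while the gamma mixing induces negative-binomial-like (aggregated) counts. The probabilities and aggregation parameter k below are illustrative, not the fitted values from the study:

```python
import numpy as np

def distribute(n_insects, node_probs, k=0.5, rng=None):
    """Allocate n_insects across leaf nodes. node_probs gives the
    per-node infestation probabilities; gamma mixing with shape k
    adds overdispersion (aggregation) relative to a plain multinomial."""
    rng = np.random.default_rng(rng)
    node_probs = np.asarray(node_probs, dtype=float)
    weights = rng.gamma(k, 1.0, size=node_probs.size) * node_probs
    return rng.multinomial(n_insects, weights / weights.sum())

# 200 nymphs over 7 nodal positions with a mid-canopy infestation peak
counts = distribute(200, [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05], rng=1)
print(counts, counts.sum())  # per-node counts, always summing to 200
```

Repeated draws give the kind of explicit spatial count inputs an individual-based predator-prey simulation would consume.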

  14. Modelling temporal and spatial dynamics of benthic fauna in North-West-European shelf seas

    NASA Astrophysics Data System (ADS)

    Lessin, Gennadi; Bruggeman, Jorn; Artioli, Yuri; Butenschön, Momme; Blackford, Jerry

    2017-04-01

Benthic zones of shallow shelf seas receive high amounts of organic material. Physical processes such as resuspension, as well as complex transformations mediated by diverse faunal and microbial communities, determine the fate of this material, which can be returned to the water column, reworked within sediments or ultimately buried. In recent years, numerical models of varying complexity and serving different goals have been developed and applied in order to better understand and predict the dynamics of benthic processes. ERSEM includes explicit parameterisations of several groups of benthic biota, which makes it particularly applicable to studies of benthic biodiversity, biological interactions within sediments and benthic-pelagic coupling. To assess model skill in reproducing temporal (inter-annual and seasonal) dynamics of major benthic macrofaunal groups, 1D model simulation results were compared with data from the Western Channel Observatory (WCO) benthic survey. The benthic model was forced with organic matter deposition rates inferred from observed phytoplankton abundance, and model parameters were subsequently recalibrated. Based on the comparison of model results and WCO data, deposit feeders exhibit clear seasonal variability, while for suspension feeders inter-annual variability is more pronounced. The spatial distribution of benthic fauna was investigated using the results of a full-scale NEMO-ERSEM hindcast simulation of the North-West European Shelf Seas area, covering the period 1981-2014. The results suggest a close relationship between the spatial distribution of the biomass of benthic faunal functional groups and bathymetry, hydrodynamic conditions and organic matter supply. Our work highlights that it is feasible to construct, implement and validate models that explicitly include functional groups of benthic macrofauna. Moreover, the modelling approach delivers detailed information on benthic biogeochemistry and food webs at spatial and temporal scales that are unavailable from other sources but highly relevant to marine management, planning and policy.

  15. Modeling Transport of Turbulent Fluxes in a Heterogeneous Urban Canopy Using a Spatially Explicit Energy Balance

    NASA Astrophysics Data System (ADS)

    Moody, M.; Bailey, B.; Stoll, R., II

    2017-12-01

Understanding how changes in the microclimate near individual plants affect the surface energy budget is integral to modeling land-atmosphere interactions and a wide range of near-surface atmospheric boundary layer phenomena. In urban areas, the complex geometry of the urban canopy layer results in large spatial deviations of turbulent fluxes, further complicating the development of models. Accurately accounting for this heterogeneity in order to model urban energy and water use requires a sub-plant-level understanding of microclimate variables. We present analysis of new experimental field data taken in and around two Blue Spruce (Picea pungens) trees at the University of Utah in 2015. The test sites were chosen in order to study the effects of heterogeneity in an urban environment. An array of sensors was placed in and around the conifers to quantify transport in the soil-plant-atmosphere continuum: radiative fluxes, temperature, sap fluxes, etc. A spatial array of LEMS (Local Energy Measurement Systems) was deployed to obtain pressure, surrounding air temperature and relative humidity. These quantities are used to calculate the radiative and turbulent fluxes. Relying on measurements alone is insufficient to capture the complexity of the microclimate distribution at sub-plant scales. A spatially explicit radiation and energy balance model previously developed for deciduous trees was extended to include conifers. The model discretizes the tree into isothermal sub-volumes on which energy balances are performed and utilizes incoming radiation as the primary forcing input. The radiative transfer component of the model yields good agreement between measured and modeled upward longwave and shortwave radiative fluxes. Ultimately, the model was validated through an examination of the full energy budget, including radiative and turbulent fluxes, through isolated Picea pungens in an urban environment.

  16. A three-dimensional method-of-characteristics solute-transport model (MOC3D)

    USGS Publications Warehouse

    Konikow, Leonard F.; Goode, D.J.; Hornberger, G.Z.

    1996-01-01

    This report presents a model, MOC3D, that simulates three-dimensional solute transport in flowing ground water. The model computes changes in concentration of a single dissolved chemical constituent over time that are caused by advective transport, hydrodynamic dispersion (including both mechanical dispersion and diffusion), mixing (or dilution) from fluid sources, and mathematically simple chemical reactions (including linear sorption, which is represented by a retardation factor, and decay). The transport model is integrated with MODFLOW, a three-dimensional ground-water flow model that uses implicit finite-difference methods to solve the transient flow equation. MOC3D uses the method of characteristics to solve the transport equation on the basis of the hydraulic gradients computed with MODFLOW for a given time step. This implementation of the method of characteristics uses particle tracking to represent advective transport and explicit finite-difference methods to calculate the effects of other processes. However, the explicit procedure has several stability criteria that may limit the size of time increments for solving the transport equation; these are automatically determined by the program. For improved efficiency, the user can apply MOC3D to a subgrid of the primary MODFLOW grid that is used to solve the flow equation. However, the transport subgrid must have uniform grid spacing along rows and columns. The report includes a description of the theoretical basis of the model, a detailed description of input requirements and output options, and the results of model testing and evaluation. The model was evaluated for several problems for which exact analytical solutions are available and by benchmarking against other numerical codes for selected complex problems for which no exact solutions are available. 
These test results indicate that the model is very accurate for a wide range of conditions and yields minimal numerical dispersion for advection-dominated problems. Mass-balance errors are generally less than 10 percent, and tend to decrease and stabilize with time.
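The automatically determined stability criteria can be illustrated by a simplified 1-D version of the time-step selection (illustrative bounds, not MOC3D's exact formulas):

```python
def max_transport_dt(v, D, dx, courant=1.0):
    """Largest stable explicit time step for 1-D advection-dispersion:
    the Courant limit keeps a tracked particle within one cell per step,
    and the dispersion limit bounds the explicit diffusion update."""
    dt_adv = courant * dx / abs(v)   # advective (Courant) limit
    dt_disp = 0.5 * dx**2 / D        # explicit dispersion limit
    return min(dt_adv, dt_disp)

# velocity 1 m/d, dispersion 0.5 m^2/d, 10 m cells
print(max_transport_dt(1.0, 0.5, 10.0))  # -> 10.0 (advection-limited)
```

MOC3D evaluates criteria of this kind over the whole transport subgrid each flow time step and subdivides the step accordingly, which is why the user never supplies a transport time increment directly.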

  17. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.
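A minimal explicit simulation of a propagating front in a self-limiting (logistic) reaction-diffusion model, using Fisher-KPP, u_t = D u_xx + r u (1 - u), as a generic stand-in for the class discussed (the paper's kinetics also carry explicit time dependence, which is omitted here):

```python
import numpy as np

D, r = 1.0, 1.0
dx, dt = 0.5, 0.05             # dt < dx^2/(2D) for explicit stability
x = np.arange(0, 100, dx)
u = np.where(x < 5, 1.0, 0.0)  # population initially at the left edge

for _ in range(400):           # integrate to t = 20
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + r * u * (1 - u))
    u[0], u[-1] = 1.0, 0.0     # pinned boundaries

front = x[np.argmin(u > 0.5)]  # first position where u drops below 0.5
print(front)  # the front advances at a speed approaching 2*sqrt(D*r)
```

Replacing the constant r with an explicit function r(t), as in the growth models the abstract refers to, changes only the reaction line of this scheme.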

  18. Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes

    NASA Astrophysics Data System (ADS)

    Zhou, Dong

    In this dissertation a meso-scale, substructure-based, composite single crystal model is fully developed from the simple uniaxial model to the 3-D finite element method (FEM) model with explicit substructures and further with substructure evolution parameters, to simulate the completely reversed, strain controlled, low plastic strain amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick type kinematic hardening rules are applied to substructures on slip systems in the model to describe the kinematic hardening behavior of crystals. Three explicit substructure components are assumed in the composite single crystal model, namely "loop patches" and "channels" which are aligned in parallel in a "vein matrix," and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of uniaxial assumption is verified in the 3-D FEM model with explicit substructures. With information gathered from experiments, all control parameters in the model including hardening parameters, volume fraction of loop patches and PSBs, and variation of Young's modulus etc. are correlated to cumulative plastic strain and/or plastic strain amplitude; and the whole cyclic deformation history of single crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. Then these parameters are implanted in the 3-D FEM model to simulate the formation of PSB bands. A resolved shear stress criterion is set to trigger the formation of PSBs, and stress perturbation in the specimen is obtained by several elements assigned with PSB material properties a priori. 
Displacement increments, plastic strain amplitude control, and overall stress-strain monitoring and output are carried out in the ABAQUS user subroutines DISP and URDFIL, respectively, while the constitutive formulations of the FEM model are coded and implemented in UMAT. The results of the simulations are compared to experiments. This model verified the validity of Winter's two-phase model and Taylor's uniform stress assumption, explored substructure evolution and "intrinsic" behavior in substructures, and successfully simulated the process of PSB band formation and propagation.

  19. Response of SOM Decomposition to Anthropogenic N Deposition: Simulations From the PnET-SOM Model.

    NASA Astrophysics Data System (ADS)

    Tonitto, C.; Goodale, C. L.; Ollinger, S. V.; Jenkins, J. P.

    2008-12-01

    Anthropogenic forcing of the C and N cycles has caused rapid change in atmospheric CO2 and N deposition, with complex and uncertain effects on forest C and N balance. With some exceptions, models of forest ecosystem response to anthropogenic perturbation have historically focused more on aboveground than belowground processes; the complexity of soil organic matter (SOM) is often represented with abstract or incomplete SOM pools, and remains difficult to quantify. We developed a model of SOM dynamics in northern hardwood forests with explicit feedbacks between C and N cycles. The soil model is linked to the aboveground dynamics of the PnET model to form PnET-SOM. The SOM model includes: 1) physically measurable SOM pools, including humic and mineral-associated SOM in O, A, and B soil horizons, 2) empirical soil turnover times based on 14C data, 3) alternative SOM decomposition algorithms with and without explicit microbial processing, and 4) soluble element transport explicitly linked to the hydrologic cycle. We tested model sensitivity to changes in litter decomposition rate (k) and completeness of decomposition (limit value) by altering these parameters based on experimental observations from long-term litter decomposition experiments with N fertilization treatments. After a 100 year simulation, the Oe+Oa horizon SOC pool was reduced by 15 % and the A-horizon humified SOC was reduced by 7 % for N deposition scenarios relative to forests without N fertilization. In contrast, predictions for slower time-scale pools showed negligible variation in response to variation in the limit values tested, with A-horizon mineral SOC pools reduced by < 3 % and B-horizon mineral SOC reduced by 0.1 % for N deposition scenarios relative to forests without N fertilization. The model was also used to test the effect of varying initial litter decomposition rate to simulate response to N deposition. 
In contrast to the effect of varying limit values, simulations in which only k-values were varied did not drastically alter the predicted SOC pool distribution throughout the soil profile, but did significantly alter the Oi SOC pool. These results suggest that describing soil response to N deposition via alteration of the limit value alone, or as a combined alteration of limit value and the initial decomposition rate, can lead to significant variation in predicted long-term C storage.
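The limit-value concept, decomposition proceeding toward an asymptotic fraction of mass that never decays, can be written as a first-order decay toward an asymptote; k and the limit below are illustrative, not the simulation's fitted values:

```python
import math

def mass_remaining(t, k, limit):
    """Fraction of initial litter mass remaining at time t under
    first-order decay (rate k) toward an asymptotic limit value
    (the fraction assumed never to decompose)."""
    return limit + (1.0 - limit) * math.exp(-k * t)

# N deposition scenarios can then be expressed as changes in k (initial
# decomposition rate), in the limit value, or in both
print(mass_remaining(0.0, 0.5, 0.3))   # -> 1.0 (all mass present at t = 0)
print(mass_remaining(50.0, 0.5, 0.3))  # approaches the limit value 0.3
```

In this framing, raising the limit value under N deposition leaves more stabilized material behind, which is why the abstract finds limit-value changes dominating long-term SOC predictions while k mainly affects the fresh-litter (Oi) pool.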

  20. Decadal shifts of East Asian summer monsoon in a climate model free of explicit GHGs and aerosols

    NASA Astrophysics Data System (ADS)

    Lin, Renping; Zhu, Jiang; Zheng, Fei

    2016-12-01

The East Asian summer monsoon (EASM) experienced decadal transitions over the past few decades, and the associated "wetter-South-drier-North" shifts in rainfall patterns in China significantly affected the country's social and economic development. Two viewpoints stand out to explain these decadal shifts, regarding them as either a result of the internal variability of the climate system or of external forcings (e.g. greenhouse gases (GHGs) and anthropogenic aerosols). However, most climate models, for example the Atmospheric Model Intercomparison Project (AMIP)-type simulations and the Coupled Model Intercomparison Project (CMIP)-type simulations, fail to simulate the variation patterns, leaving the mechanisms responsible for these shifts still open to dispute. In this study, we conducted a successful simulation of these decadal transitions in a coupled model in which we applied ocean data assimilation to a model free of explicit aerosol and GHG forcing. The associated decadal shifts of the three-dimensional spatial structure in the 1990s, including the eastward retreat and northward shift of the western Pacific subtropical high (WPSH) and the south-cool-north-warm pattern of the upper-level tropospheric temperature, were all well captured. Our simulation supports the argument that variations in the oceanic fields are the dominant factor responsible for the EASM decadal transitions.

  1. Decadal shifts of East Asian summer monsoon in a climate model free of explicit GHGs and aerosols

    PubMed Central

    Lin, Renping; Zhu, Jiang; Zheng, Fei

    2016-01-01

    The East Asian summer monsoon (EASM) experienced decadal transitions over the past few decades, and the associated "wetter-South-drier-North" shifts in rainfall patterns significantly affected social and economic development in China. Two viewpoints stand out to explain these decadal shifts, attributing them either to internal variability of the climate system or to external forcings (e.g. greenhouse gases (GHGs) and anthropogenic aerosols). However, most climate models, for example the Atmospheric Model Intercomparison Project (AMIP)-type and the Coupled Model Intercomparison Project (CMIP)-type simulations, fail to simulate the variation patterns, leaving the mechanisms responsible for these shifts open to dispute. In this study, we successfully simulated these decadal transitions in a coupled model by applying ocean data assimilation in a model free of explicit aerosol and GHG forcing. The associated decadal shifts of the three-dimensional spatial structure in the 1990s, including the eastward retreat and northward shift of the western Pacific subtropical high (WPSH) and the south-cool-north-warm pattern of upper-level tropospheric temperature, were all well captured. Our simulation supports the argument that variations of the oceanic fields are the dominant factor responsible for the EASM decadal transitions. PMID:27934933

  2. Non-hydrostatic general circulation model of the Venus atmosphere

    NASA Astrophysics Data System (ADS)

    Rodin, Alexander V.; Mingalev, Igor; Orlov, Konstantin; Ignatiev, Nikolay

    We present the first non-hydrostatic global circulation model of the Venus atmosphere based on the complete set of gas dynamics equations. The model employs a spatially uniform triangular mesh, with altitude as the vertical coordinate, which avoids artificial damping of dynamical processes in the polar regions. Energy conversion from the solar flux into atmospheric motion is described via explicitly specified heating and cooling rates or, alternatively, with the help of a radiation module based on a comprehensive treatment of Venus atmosphere spectroscopy, including line-mixing effects in CO2 far-wing absorption. Momentum equations are integrated using a semi-Lagrangian explicit scheme that provides high accuracy of mass and energy conservation. Due to the high vertical grid resolution required by gas dynamics calculations, the model is integrated with a short time step of less than one second. The model reliably reproduces zonal superrotation, smoothly extending far below the cloud layer; tidal patterns at the cloud level and above; and a non-rotating, sun-synchronous global convective cell in the upper atmosphere. One of the most interesting features of the model is the development of polar vortices resembling those observed by the VIRTIS instrument on Venus Express. Initial analysis of the simulation results confirms the hypothesis that thermal tides provide the main driver for the superrotation.

  3. Explicit and implicit reinforcement learning across the psychosis spectrum.

    PubMed

    Barch, Deanna M; Carter, Cameron S; Gold, James M; Johnson, Sheri L; Kring, Ann M; MacDonald, Angus W; Pizzagalli, Diego A; Ragland, J Daniel; Silverstein, Steven M; Strauss, Milton E

    2017-07-01

    Motivational and hedonic impairments are core features of a variety of types of psychopathology. An important aspect of motivational function is reinforcement learning (RL), including implicit (i.e., outside of conscious awareness) and explicit (i.e., including explicit representations about potential reward associations) learning, as well as both positive reinforcement (learning about actions that lead to reward) and punishment (learning to avoid actions that lead to loss). Here we present data from paradigms designed to assess both positive and negative components of both implicit and explicit RL, examine performance on each of these tasks among individuals with schizophrenia, schizoaffective disorder, and bipolar disorder with psychosis, and examine their relative relationships to specific symptom domains transdiagnostically. None of the diagnostic groups differed significantly from controls on the implicit RL tasks in either bias toward a rewarded response or bias away from a punished response. However, on the explicit RL task, both the individuals with schizophrenia and schizoaffective disorder performed significantly worse than controls, but the individuals with bipolar disorder did not. Worse performance on the explicit RL task, but not the implicit RL task, was related to worse motivation and pleasure symptoms across all diagnostic categories. Performance on explicit RL, but not implicit RL, was related to working memory, which accounted for some of the diagnostic group differences. However, working memory did not account for the relationship of explicit RL to motivation and pleasure symptoms. These findings suggest transdiagnostic relationships across the spectrum of psychotic disorders between motivation and pleasure impairments and explicit RL. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Thalamic volume deficit contributes to procedural and explicit memory impairment in HIV infection with primary alcoholism comorbidity.

    PubMed

    Fama, Rosemary; Rosenbloom, Margaret J; Sassoon, Stephanie A; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2014-12-01

    Component cognitive and motor processes contributing to diminished visuomotor procedural learning in HIV infection with comorbid chronic alcoholism (HIV+ALC) include problems with attention and explicit memory processes. The neural correlates associated with this constellation of cognitive and motor processes in HIV infection and alcoholism have yet to be delineated. Frontostriatal regions are affected in HIV infection, frontothalamocerebellar regions are affected in chronic alcoholism, and frontolimbic regions are likely affected in both; all three of these systems have the potential of contributing to both visuomotor procedural learning and explicit memory processes. Here, we examined the neural correlates of implicit memory, explicit memory, attention, and motor tests in 26 HIV+ALC (5 with comorbidity for nonalcohol drug abuse/dependence) and 19 age-range matched healthy control men. Parcellated brain volumes, including cortical, subcortical, and allocortical regions, as well as cortical sulci and ventricles, were derived using the SRI24 brain atlas. Results indicated that smaller thalamic volumes were associated with poorer performance on tests of explicit (immediate and delayed) and implicit (visuomotor procedural) memory in HIV+ALC. By contrast, smaller hippocampal volumes were associated with lower scores on explicit, but not implicit memory. Multiple regression analyses revealed that volumes of both the thalamus and the hippocampus were each unique independent predictors of explicit memory scores. This study provides evidence of a dissociation between implicit and explicit memory tasks in HIV+ALC, with selective relationships observed between hippocampal volume and explicit but not implicit memory, and highlights the relevance of the thalamus to mnemonic processes.

  5. PIRLS 2011 User Guide for the International Database. Supplement 4: PIRLS 2011 Sampling Stratification Information

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Drucker, Kathleen T., Ed.

    2013-01-01

    This supplement contains documentation on the explicit and implicit stratification variables included in the PIRLS 2011 data files. The explicit strata are smaller sampling frames, created from the national sampling frames, from which national samples of schools were drawn. The implicit strata are nested within the explicit strata, and were used…

  6. TIMSS 2011 User Guide for the International Database. Supplement 4: TIMSS 2011 Sampling Stratification Information

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    This supplement contains documentation on the explicit and implicit stratification variables included in the TIMSS 2011 data files. The explicit strata are smaller sampling frames, created from the national sampling frames, from which national samples of schools were drawn. The implicit strata are nested within the explicit strata, and were used…

  7. Innovations in individual feature history management - The significance of feature-based temporal model

    USGS Publications Warehouse

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores temporal topological relationships among the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparisons during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The results of temporal queries on individual feature history show the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  8. Labeling and Knowing: A Reconciliation of Implicit Theory and Explicit Theory among Students with Exceptionalities

    ERIC Educational Resources Information Center

    lo, C. Owen

    2014-01-01

    Using a realist grounded theory method, this study resulted in a theoretical model and 4 propositions. As displayed in the LINK model, the labeling practice is situated in and endorsed by a social context that carries explicit theory about and educational policies regarding the labels. Taking a developmental perspective, the labeling practice…

  9. A Multi-Year Program Developing an Explicit Reflective Pedagogy for Teaching Pre-service Teachers the Nature of Science by Ostention

    NASA Astrophysics Data System (ADS)

    Smith, Mike U.; Scharmann, Lawrence

    2008-02-01

    This investigation delineates a multi-year action research agenda designed to develop an instructional model for teaching the nature of science (NOS) to preservice science teachers. Our past research strongly supports the use of explicit reflective instructional methods, which includes Thomas Kuhn’s notion of learning by ostention and treating science as a continuum (i.e., comparing fields of study to one another for relative placement as less to more scientific). Instruction based on conceptual change precepts, however, also exhibits promise. Thus, the investigators sought to ascertain the degree to which conceptual change took place among students (n = 15) participating in the NOS instructional model. Three case studies are presented to illustrate successful conceptual changes that took place as a result of the NOS instructional model. All three cases represent students who claim a very conservative Christian heritage and for whom evolution was not considered a legitimate scientific theory prior to participating in the NOS instructional model. All three case study individuals, along with their twelve classmates, placed evolution as most scientific when compared to intelligent design and a fictional field of study called “Umbrellaology.”

  10. Particle-hole symmetry in generalized seniority, microscopic interacting boson (fermion) model, nucleon-pair approximation, and other models

    NASA Astrophysics Data System (ADS)

    Jia, L. Y.

    2016-06-01

    The particle-hole symmetry (equivalence) of the full shell-model Hilbert space is straightforward and routinely used in practical calculations. In this work I show that this symmetry is preserved in the subspace truncated up to a certain generalized seniority and give the explicit transformation between the states in the two types (particle and hole) of representations. Based on the results, I study particle-hole symmetry in popular theories that could be regarded as further truncations on top of the generalized seniority, including the microscopic interacting boson (fermion) model, the nucleon-pair approximation, and other models.

  11. Intraspecific density dependence and a guild of consumers coexisting on one resource.

    PubMed

    McPeek, Mark A

    2012-12-01

    The importance of negative intraspecific density dependence to promoting species coexistence in a community is well accepted. However, such mechanisms are typically omitted from more explicit models of community dynamics. Here I analyze a variation of the Rosenzweig-MacArthur consumer-resource model that includes negative intraspecific density dependence for consumers to explore its effect on the coexistence of multiple consumers feeding on a single resource. This analysis demonstrates that a guild of multiple consumers can easily coexist on a single resource if each limits its own abundance to some degree, and stronger intraspecific density dependence permits a wider variety of consumers to coexist. The mechanism permitting multiple consumers to coexist works in a fashion similar to apparent competition or to each consumer having its own specialized predator. These results argue for a more explicit emphasis on how negative intraspecific density dependence is generated and how these mechanisms combine with species interactions to shape overall community structure.
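    The dynamics this abstract describes can be illustrated numerically. The following is a minimal, hedged sketch of a Rosenzweig-MacArthur-type system with two consumers sharing one resource, each with an intraspecific self-limitation term (the g[i]*C[i] term); all parameter values are illustrative, not the paper's analysis:

```python
# Euler sketch of a Rosenzweig-MacArthur variant in which two consumers
# share one resource but each also limits its own abundance.
# Parameters are illustrative placeholders, not from McPeek (2012).

def simulate(dt=0.01, steps=50_000):
    r, K = 1.0, 5.0                       # resource growth rate, carrying capacity
    a = [1.0, 0.8]                        # attack rates
    h = [0.5, 0.5]                        # handling times (Holling type II)
    e = [0.5, 0.5]                        # conversion efficiencies
    d = [0.2, 0.15]                       # consumer death rates
    g = [0.5, 0.5]                        # intraspecific density dependence
    R, C = 2.0, [0.5, 0.5]
    for _ in range(steps):
        # saturating per-capita intake for each consumer
        f = [a[i] * R / (1.0 + a[i] * h[i] * R) for i in range(2)]
        dR = r * R * (1.0 - R / K) - sum(f[i] * C[i] for i in range(2))
        dC = [C[i] * (e[i] * f[i] - d[i] - g[i] * C[i]) for i in range(2)]
        R += dt * dR
        C = [C[i] + dt * dC[i] for i in range(2)]
    return R, C
```

    With g[i] = 0, one consumer would competitively exclude the other; with g[i] > 0 both persist at positive densities, illustrating the coexistence mechanism the abstract argues for.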

  12. The relation between environmental factors and pedometer-determined physical activity in children: the mediating role of autonomous motivation.

    PubMed

    Rutten, Cindy; Boen, Filip; Seghers, Jan

    2013-05-01

    Based on self-determination theory, the purpose of this study was to explore the mediating role of autonomous motivation in the relation between environmental factors and pedometer-determined PA among 10- to 12-year-old Flemish children. Data were collected from 787 6th grade pupils and one of their parents. Children completed self-report measures including autonomous motivation for PA and perceived autonomy support for PA by parents and friends. Parents completed a questionnaire concerning their PA related parenting practices (logistic support and explicit modeling) and the perceived home environment with respect to PA opportunities. The results confirmed that autonomous motivation mediated the relation between children's PA and their perceived autonomy support by friends and parents. Autonomous motivation also mediated the relation between parental logistic support and PA. In addition, a positive direct relation was found between parental explicit modeling and children's PA, and between perceived neighborhood safety and children's PA.

  13. Mapping malaria risk and vulnerability in the United Republic of Tanzania: a spatial explicit model.

    PubMed

    Hagenlocher, Michael; Castro, Marcia C

    2015-01-01

    Outbreaks of vector-borne diseases (VBDs) impose a heavy burden on vulnerable populations. Despite recent progress in eradication and control, malaria remains the most prevalent VBD. Integrative approaches that take into account environmental, socioeconomic, demographic, biological, cultural, and political factors contributing to malaria risk and vulnerability are needed to effectively reduce malaria burden. Although the focus on malaria risk has increasingly gained ground, little emphasis has been given to developing quantitative methods for assessing malaria risk, including malaria vulnerability, in a spatially explicit manner. Building on a conceptual risk and vulnerability framework, we propose a spatially explicit approach for modeling relative levels of malaria risk - as a function of hazard, exposure, and vulnerability - in the United Republic of Tanzania. A logistic regression model was employed to identify a final set of risk factors and their contribution to malaria endemicity based on multidisciplinary geospatial information. We utilized a Geographic Information System for the construction and visualization of a malaria vulnerability index and its integration into a spatially explicit malaria risk map. The spatial pattern of malaria risk was very heterogeneous across the country. Malaria risk was higher in Mainland areas than in Zanzibar, which is a result of differences in both malaria entomological inoculation rate and prevailing vulnerabilities. Areas of high malaria risk were identified in the southeastern part of the country, as well as in two distinct "hotspots" in the northwestern part of the country bordering Lake Victoria, while concentrations of high malaria vulnerability seem to occur in the northwestern, western, and southeastern parts of the mainland. Results were visualized using both 10×10 km² grids and subnational administrative units. The presented approach makes an important contribution toward a decision support tool.
By decomposing malaria risk into its components, the approach offers evidence on which factors could be targeted for reducing malaria risk and vulnerability to the disease. Ultimately, results offer relevant information for place-based intervention planning and more effective spatial allocation of resources.
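    The hazard-exposure-vulnerability composition described above can be sketched as follows; the indicator columns, weights, and the multiplicative combination are hypothetical placeholders for illustration, not the study's actual covariates or index construction:

```python
# Sketch of a composite risk index per grid cell:
# risk = hazard * exposure * vulnerability, where vulnerability is a
# weighted sum of min-max normalized indicators. All inputs are toy values.

def minmax(values):
    """Min-max normalize a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def risk_map(hazard, exposure, indicators, weights):
    # indicators: one list per indicator, aligned across grid cells
    norm = [minmax(col) for col in indicators]
    n = len(hazard)
    vulnerability = [sum(w * norm[j][i] for j, w in enumerate(weights))
                     for i in range(n)]
    return [hazard[i] * exposure[i] * vulnerability[i] for i in range(n)]
```

    Decomposing the final index this way is what lets each component (hazard, exposure, vulnerability) be mapped and targeted separately, as the abstract notes.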

  14. Explicit and implicit learning: The case of computer programming

    NASA Astrophysics Data System (ADS)

    Mancy, Rebecca

    The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. 
The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.

  15. Advanced hierarchical distance sampling

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
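    As a hedged illustration of the core distance-sampling ingredient underlying the HDS models discussed above (not code from the unmarked package), a half-normal detection function and its strip-averaged detection probability can be written as; the scale parameter and strip width are made-up values:

```python
# Half-normal detection function p(d) = exp(-d^2 / (2*sigma^2)) and the
# average detection probability over a strip of half-width W, the quantity
# that links observed counts to abundance in distance sampling.

import math

def halfnormal_p(d, sigma):
    """Probability of detecting an individual at distance d."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def average_detection(sigma, W, n=10_000):
    # numerically integrate p(d)/W over distances 0..W (midpoint rule)
    step = W / n
    return sum(halfnormal_p((i + 0.5) * step, sigma) for i in range(n)) * step / W
```

    In an HDS model, sigma would itself be modeled as a function of individual covariates such as group size, which is exactly the extension this chapter introduces.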

  16. Parameterization of backbone flexibility in a coarse-grained force field for proteins (COFFDROP) derived from all-atom explicit-solvent molecular dynamics simulations of all possible two-residue peptides

    PubMed Central

    Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.

    2015-01-01

    Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers. PMID:26574429
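    The iterative Boltzmann inversion step named above follows the standard update U_{k+1}(r) = U_k(r) + kT ln(P_k(r) / P_target(r)); the following is a minimal sketch over tabulated bins, with toy arrays standing in for the MD-derived distributions:

```python
# One IBI update of a tabulated potential: where the CG simulation
# oversamples a bin relative to the target distribution, the potential is
# raised there, and lowered where it undersamples. kT value assumes ~298 K.

import math

def ibi_update(U, P_current, P_target, kT=0.593):  # kcal/mol, illustrative
    eps = 1e-12  # guard against empty histogram bins
    return [u + kT * math.log((pc + eps) / (pt + eps))
            for u, pc, pt in zip(U, P_current, P_target)]
```

    Iterating this update until the simulated angle, dihedral, and distance distributions match the MD targets is the fitting loop the abstract describes.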

  17. A Quasi-2D Delta-growth Model Accounting for Multiple Avulsion Events, Validated by Robust Data from the Yellow River Delta, China

    NASA Astrophysics Data System (ADS)

    Moodie, A. J.; Nittrouer, J. A.; Ma, H.; Carlson, B.; Parker, G.

    2016-12-01

    The autogenic "life cycle" of a lowland fluvial channel building a deltaic lobe typically follows a temporal sequence that includes: channel initiation, progradation and aggradation, and abandonment via avulsion. In terms of modeling these processes, it is possible to use a one-dimensional (1D) morphodynamic scheme to capture the magnitude of the prograding and aggrading processes. These models can include algorithms to predict the timing and location of avulsions for a channel lobe. However, this framework falls short in its ability to evaluate the deltaic system beyond the time scale of a single channel, and assess sedimentation processes occurring on the floodplain, which is important for lobe building. Herein, we adapt a 1D model to explicitly account for multiple avulsions and therefore replicate a deltaic system that includes many lobe cycles. Following an avulsion, sediment on the floodplain and beyond the radially-averaged shoreline is redistributed across the delta topset and along the shoreline, respectively, simultaneously prograding and aggrading the delta. Over time this framework produces net shoreline progradation and forward-stepping of subsequent avulsions. Testing this model using modern systems is inherently difficult due to a lack of data: most modern delta lobes are active for timescales of centuries to millennia, and so observing multiple iterations of the channel-lobe cycle is impossible. However, the Yellow River delta (China) is unique because the lobe cycles here occur within years to decades. Therefore it is possible to measure shoreline evolution through multiple lobe cycles, based on satellite imagery and historical records. These data are used to validate the model outcomes. Our findings confirm that the explicit accounting of avulsion processes in a quasi-2D model framework is capable of capturing shoreline development patterns that otherwise are not resolvable based on previously published delta building models.

  18. Effect of explicit dimension instruction on speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd

    2015-01-01

    Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension or the pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, and instructions diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously underweighted pitch direction dimension. PMID:26542400

  19. The explicit and implicit dance in psychoanalytic change.

    PubMed

    Fosshage, James L

    2004-02-01

    How the implicit/non-declarative and explicit/declarative cognitive domains interact is centrally important in the consideration of effecting change within the psychoanalytic arena. Stern et al. (1998) declare that long-lasting change occurs in the domain of implicit relational knowledge. In the view of this author, the implicit and explicit domains are intricately intertwined in an interactive dance within a psychoanalytic process. In the author's view, a spirit of inquiry (Lichtenberg, Lachmann & Fosshage, 2002) serves as the foundation of the psychoanalytic process. Analyst and patient strive to explore, understand and communicate and, thereby, create a 'spirit' of interaction that contributes, through gradual incremental learning, to new implicit relational knowledge. This spirit, as part of the implicit relational interaction, is a cornerstone of the analytic relationship. The 'inquiry' more directly brings explicit/declarative processing to the foreground in the joint attempt to explore and understand. The spirit of inquiry in the psychoanalytic arena highlights both the autobiographical scenarios of the explicit memory system and the mental models of the implicit memory system as each contributes to a sense of self, other, and self with other. This process facilitates the extrication and suspension of the old models, so that new models based on current relational experience can be gradually integrated into both memory systems for lasting change.

  20. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
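    The chance-constraint idea outlined above can be sketched as follows; the toy stochastic "simulation," its parameters, and the candidate resource levels are illustrative stand-ins, not the launch-vehicle model:

```python
# Chance-constrained selection over replicated stochastic simulation runs:
# pick the smallest resource level x whose estimated probability of meeting
# a performance requirement is at least alpha.

import random

def meets_requirement(x, threshold=10.0):
    # toy stochastic simulation: completion time falls as resources x rise
    time = 20.0 / x + random.gauss(0.0, 0.5)
    return time <= threshold

def smallest_feasible(candidates, alpha=0.95, reps=2000, seed=1):
    random.seed(seed)
    for x in sorted(candidates):
        hits = sum(meets_requirement(x) for _ in range(reps))
        if hits / reps >= alpha:          # chance constraint P(ok) >= alpha
            return x
    return None
```

    Because the simulation output is random, each candidate is judged by a statistical estimate across replications rather than by a single run, which is the point of the chance-constraint formulation.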

  1. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madronich, Sasha

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  2. Building a Progressive-Situational Model of Post-Diagnosis Information Seeking for Parents of Individuals With Down Syndrome

    PubMed Central

    Gibson, Amelia N.

    2016-01-01

    This grounded theory study used in-depth, semi-structured interviews to examine the information-seeking behaviors of 35 parents of children with Down syndrome. Emergent themes include a progressive pattern of behavior including information overload and avoidance, passive attention, and active information seeking; varying preferences between tacit and explicit information at different stages; and selection of information channels and sources that varied based on personal and situational constraints. Based on the findings, the author proposes a progressive model of health information seeking and a framework for using this model to collect data in practice. The author also discusses the practical and theoretical implications of a responsive, progressive approach to understanding parents’ health information–seeking behavior. PMID:28462351

  3. Improved limits on dark matter annihilation in the Sun with the 79-string IceCube detector and implications for supersymmetry

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Ansseau, I.; Anton, G.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; del Pino Rosendo, E.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Edsjö, J.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Fösig, C.-C.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Góra, D.; Grant, D.; Griffith, Z.; Groß, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansen, E.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Jurkovic, M.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. 
L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Konietz, R.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Krückl, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lu, L.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mandelartz, M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Savage, C.; Schatto, K.; Schimp, M.; Schlunder, P.; Schmidt, T.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schulte, L.; Schumacher, L.; Scott, P.; Seckel, D.; Seunarine, S.; Silverwood, H.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stasik, A.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Tatar, J.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. 
N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Turcati, A.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zoll, M.

    2016-04-01

    We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.

  4. Plasmonic Metallurgy Enabled by DNA

    DOE PAGES

    Ross, Michael B.; Ku, Jessie C.; Lee, Byeongdu; ...

    2016-02-05

    In this study, mixed silver and gold plasmonic nanoparticle architectures are synthesized using DNA-programmable assembly, unveiling exquisitely tunable optical properties that are predicted and explained both by effective thin-film models and explicit electrodynamic simulations. These data demonstrate that the manner and ratio with which multiple metallic components are arranged can greatly alter optical properties, including tunable color and asymmetric reflectivity behavior of relevance for thin-film applications.

  5. Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase

    NASA Astrophysics Data System (ADS)

    Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.

    2012-09-01

    The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two-dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (<2%) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.

  6. Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase

    NASA Astrophysics Data System (ADS)

    Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.

    2013-01-01

    The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two-dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (yield less than 2% on a carbon-atom basis) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.
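    The gas-aqueous partitioning behavior reported in the abstract above follows from Henry's-law equilibrium: the dissolved fraction of a species scales with its effective Henry's law constant and the liquid water content of the air parcel. A minimal sketch of that relation (generic textbook form, not part of GECKO-A; parameter values are illustrative):

    ```python
    R_ATM = 8.205736e-2  # ideal-gas constant, L atm / (mol K)

    def aqueous_fraction(H_eff, lwc, T=298.15):
        """Equilibrium fraction of a species in the aqueous phase.

        H_eff : effective Henry's law constant, mol / (L atm)
        lwc   : liquid water content, volume of water per volume of air
        """
        hrt_l = H_eff * R_ATM * T * lwc
        return hrt_l / (1.0 + hrt_l)
    ```

    With a moderately soluble species (H_eff ~ 1e5 M/atm), deliquescent-aerosol water contents (~1e-10) dissolve almost nothing, while cloud water contents (~3e-7) pull a large fraction into the aqueous phase, mirroring the contrast the study reports.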

  7. Charged patchy particle models in explicit salt: Ion distributions, electrostatic potentials, and effective interactions.

    PubMed

    Yigit, Cemil; Heyda, Jan; Dzubiella, Joachim

    2015-08-14

    We introduce a set of charged patchy particle models (CPPMs) in order to systematically study the influence of electrostatic charge patchiness and multipolarity on macromolecular interactions by means of implicit-solvent, explicit-ion Langevin dynamics simulations employing the Gromacs software. We consider well-defined zero-, one-, and two-patched spherical globules each of the same net charge and (nanometer) size which are composed of discrete atoms. The studied mono- and multipole moments of the CPPMs are comparable to those of globular proteins with similar size. We first characterize ion distributions and electrostatic potentials around a single CPPM. Although angle-resolved radial distribution functions reveal the expected local accumulation and depletion of counter- and co-ions around the patches, respectively, the orientation-averaged electrostatic potential shows only a small variation among the various CPPMs due to space charge cancellations. Furthermore, we study the orientation-averaged potential of mean force (PMF), the number of accumulated ions on the patches, as well as the CPPM orientations along the center-to-center distance of a pair of CPPMs. We compare the PMFs to the classical Derjaguin-Landau-Verwey-Overbeek (DLVO) theory and previously introduced orientation-averaged Debye-Hückel pair potentials including dipolar interactions. Our simulations confirm the adequacy of the theories in their respective regimes of validity, while low salt concentrations and large multipolar interactions remain a challenge for tractable theoretical descriptions.

  8. Distribution of dopant ions around poly(3,4-ethylenedioxythiophene) chains: a theoretical study.

    PubMed

    Casanovas, Jordi; Zanuy, David; Alemán, Carlos

    2017-04-12

    The effect of counterions and multiple polymer chains on the properties and structure of poly(3,4-ethylenedioxythiophene) (PEDOT) doped with ClO4- has been examined using density functional theory (DFT) calculations with periodic boundary conditions (PBCs). Calculations on a one-dimensional periodic model with four explicit polymer repeat units and two ClO4- molecules indicate that the latter are separated as much as possible, with the salt structure and band gap obtained from such a ClO4- distribution being in excellent agreement with those determined experimentally. On the other hand, DFT calculations on periodic models that include two chains indicate that neighboring PEDOT chains are shifted along the molecular axis by half of the repeat unit length, with dopant ions intercalated between the polymer molecules acting as cement. In order to support these structural features, classical molecular dynamics (MD) simulations have been performed on a multiphasic system consisting of 69 explicit PEDOT chains anchored onto a steel surface, explicit ClO4- anions embedded in the polymer matrix, and an acetonitrile phase layer on the polymer matrix. Analyses of the radial distribution functions indicate that the all-anti conformation, the relative disposition of adjacent PEDOT chains, and the distribution of ClO4- dopant ions are fully consistent with periodic DFT predictions. The agreement between two such different methodologies reinforces the microscopic understanding of the PEDOT film structure.

  9. Effects of prompting and reinforcement of one response pattern upon imitation of a different modeled pattern

    PubMed Central

    Bondy, Andrew S.

    1982-01-01

    Twelve preschool children participated in a study of the effects of explicit training on the imitation of modeled behavior. The responses trained involved a marble-dropping pattern that differed from the modeled pattern. Training consisted of physical prompts and verbal praise during a single session. No prompts or praise were used during test periods. After operant levels of the experimental responses were measured, training either preceded or was interposed within a series of exposures to modeled behavior that differed from the trained behavior. Children who were initially exposed to a modeling session immediately imitated, whereas those children who were initially trained immediately performed the appropriate response. Children initially trained on one pattern generally continued to exhibit that pattern even after many modeling sessions. Children who first viewed the modeled response and then were exposed to explicit training of a different response reversed their response pattern from the trained response to the modeled response within a few sessions. The results suggest that under certain conditions explicit training will exert greater control over responding than immediate modeling stimuli. PMID:16812260

  10. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances that make it possible to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach, the authors reduce the system to a piecewise linear system with two variables that captures the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems from the relations between state variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems, the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising biological simplifying conditions, identification of dynamical modules, and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].

  11. Uncertainties in SOA Formation from the Photooxidation of α-pinene

    NASA Astrophysics Data System (ADS)

    McVay, R.; Zhang, X.; Aumont, B.; Valorso, R.; Camredon, M.; La, S.; Seinfeld, J.

    2015-12-01

    Explicit chemical models such as GECKO-A (the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) enable detailed modeling of gas-phase photooxidation and secondary organic aerosol (SOA) formation. Comparison between these explicit models and chamber experiments can provide insight into processes that are missing or unknown in these models. GECKO-A is used to model seven SOA formation experiments from α-pinene photooxidation conducted at varying seed particle concentrations with varying oxidation rates. We investigate various physical and chemical processes to evaluate the extent of agreement between the experiments and the model predictions. We examine the effect of vapor wall loss on SOA formation and how the importance of this effect changes at different oxidation rates. Proposed gas-phase autoxidation mechanisms are shown to significantly affect SOA predictions. The potential effects of particle-phase dimerization and condensed-phase photolysis are investigated. We demonstrate the extent to which SOA predictions in the α-pinene photooxidation system depend on uncertainties in the chemical mechanism.

  12. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  13. Spatially explicit modelling of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Finger, F.; Bertuzzo, E.; Mari, L.; Knox, A. C.; Gatto, M.; Rinaldo, A.

    2013-12-01

    Epidemiological models can provide crucial understanding about the dynamics of infectious diseases. Possible applications range from real-time forecasting and allocation of health care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. We apply a spatially explicit model to the cholera epidemic that struck Haiti in October 2010 and is still ongoing. The dynamics of susceptibles as well as symptomatic and asymptomatic infectives are modelled at the scale of local human communities. Dissemination of Vibrio cholerae through hydrological transport and human mobility along the road network is explicitly taken into account, as well as the effect of rainfall as a driver of increasing disease incidence. The model is calibrated using a dataset of reported cholera cases. We further model the long term impact of several types of interventions on the disease dynamics by varying parameters appropriately. Key epidemiological mechanisms and parameters which affect the efficiency of treatments such as antibiotics are identified. Our results lead to conclusions about the influence of different intervention strategies on the overall epidemiological dynamics.
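    The community-scale dynamics described above can be illustrated with a minimal, single-community SIB (susceptible-infected-bacteria) model of the kind underlying many spatially explicit cholera frameworks. This sketch deliberately omits spatial coupling, asymptomatic infectives, and rainfall forcing, and every parameter value is an arbitrary placeholder, not a calibrated value from the Haiti study.

    ```python
    def sib_step(S, I, B, dt, beta=1.0, K=1e5, gamma=0.2, mu_B=0.3, p=10.0):
        """One Euler step of a minimal SIB cholera model.
        B is the concentration of Vibrio cholerae in the water reservoir."""
        force = beta * B / (K + B)          # dose-response force of infection
        dS = -force * S
        dI = force * S - gamma * I          # infection minus recovery
        dB = p * I - mu_B * B               # shedding minus bacterial decay
        return S + dt * dS, I + dt * dI, B + dt * dB

    def simulate(S0=1e4, I0=10.0, B0=0.0, days=100, dt=0.1):
        S, I, B = S0, I0, B0
        peak_I = I
        for _ in range(int(days / dt)):
            S, I, B = sib_step(S, I, B, dt)
            peak_I = max(peak_I, I)
        return S, I, B, peak_I
    ```

    A spatially explicit version would run one such state vector per community and add transport terms moving B along the hydrological network and moving infectives along the road network.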

  14. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
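    The three-term decomposition described above (cavity area, cavity volume, attractive dispersion) can be sketched for the simplest possible solute, a sphere, where the dispersion integral over the attractive Lennard-Jones tail has a closed form. The coefficients below are illustrative placeholders, not the fitted values from the paper.

    ```python
    from math import pi

    def nonpolar_solvation(radius, gamma=0.0454, pressure=0.0234,
                           rho=0.0334, eps=0.152, sigma=0.315):
        """Three-term nonpolar solvation energy for a spherical solute:

            G_np = gamma * Area + p * Volume + rho * Integral[u_att]

        Lengths in nm; coefficient magnitudes are illustrative only.
        """
        area = 4.0 * pi * radius**2
        volume = 4.0 / 3.0 * pi * radius**3
        # attractive tail u_att(r) = -4*eps*(sigma/r)**6 integrated over all
        # solvent positions outside the sphere (solvent excluded for r < radius)
        dispersion = -16.0 * pi * rho * eps * sigma**6 / (3.0 * radius**3)
        return gamma * area + pressure * volume + dispersion
    ```

    The point of the decomposition is visible even in this toy form: the negative dispersion term partially cancels the positive cavity terms, so a surface-area-only model systematically misestimates the force on atomic-scale features.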

  15. A Pilot Tsunami Inundation Forecast System for Australia

    NASA Astrophysics Data System (ADS)

    Allen, Stewart C. R.; Greenslade, Diana J. M.

    2016-12-01

    The Joint Australian Tsunami Warning Centre (JATWC) provides a tsunami warning service for Australia. Warnings are currently issued using a technique that does not explicitly model conditions at the coastline, including any potential coastal inundation. This paper investigates the feasibility of developing and implementing tsunami inundation modelling as part of the JATWC warning system. An inundation model was developed for a site in Southeast Australia, on the basis of the availability of bathymetric and topographic data and observations of past tsunamis. The model was forced using data from T2, the operational deep-water tsunami scenario database currently used for generating warnings. The model was evaluated not only for its accuracy but also for its computational speed, particularly with respect to operational applications. Limitations of the proposed forecast processes in the Australian context and areas requiring future improvement are discussed.

  16. Development and assessment of 30-meter pine density maps for landscape-level modeling of mountain pine beetle dynamics

    Treesearch

    Benjamin A. Crabb; James A. Powell; Barbara J. Bentz

    2012-01-01

    Forecasting spatial patterns of mountain pine beetle (MPB) population success requires spatially explicit information on host pine distribution. We developed a means of producing spatially explicit datasets of pine density at 30-m resolution using existing geospatial datasets of vegetation composition and structure. Because our ultimate goal is to model MPB population...

  17. Emergence of a coherent and cohesive swarm based on mutual anticipation

    PubMed Central

    Murakami, Hisashi; Niizato, Takayuki; Gunji, Yukio-Pegio

    2017-01-01

    Collective behavior emerging out of self-organization is one of the most striking properties of an animal group. Typically, it is hypothesized that each individual in an animal group tends to align its direction of motion with those of its neighbors. Most previous models for collective behavior assume an explicit alignment rule, by which an agent matches its velocity with that of neighbors in a certain neighborhood, to reproduce a collective order pattern by simple interactions. Recent empirical studies, however, suggest that there is no evidence for explicit matching of velocity, and that collective polarization arises from interactions other than those that follow the explicit alignment rule. We here propose a new lattice-based computational model that does not incorporate the explicit alignment rule but is based instead on mutual anticipation and asynchronous updating. Moreover, we show that this model can realize dense collective motion with high polarity. Furthermore, we focus on the behavior of a pair of individuals and find that the turning response changes drastically with the distance between the two individuals rather than with their relative heading, consistent with the empirical observations. Therefore, the present results suggest that our approach provides an alternative model for collective behavior. PMID:28406173

  18. Flory-type theories of polymer chains under different external stimuli

    NASA Astrophysics Data System (ADS)

    Budkov, Yu A.; Kiselev, M. G.

    2018-01-01

    In this Review, we present a critical analysis of various applications of Flory-type theories to the theoretical description of the conformational behavior of single polymer chains in dilute polymer solutions under a few external stimuli. Different theoretical models of flexible polymer chains in the supercritical fluid are discussed and analysed. Different points of view on the conformational behavior of the polymer chain near the liquid-gas transition critical point of the solvent are presented. A theoretical description of the co-solvent-induced coil-globule transitions within the implicit-solvent-explicit-co-solvent models is discussed. Several explicit-solvent-explicit-co-solvent theoretical models of the coil-to-globule-to-coil transition of the polymer chain in a mixture of good solvents (co-nonsolvency) are analysed and compared with each other. Finally, a new theoretical model of the conformational behavior of the dielectric polymer chain under a constant external electric field in dilute polymer solution, with explicit account of many-body dipole correlations, is discussed. The polymer chain collapse induced by many-body dipole correlations of monomers is analysed in the context of the statistical thermodynamics of dielectric polymers.
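    The Flory-type balance this review is built on can be made concrete with the textbook estimate: an elastic stretching term growing as the square of the swelling ratio competes with an excluded-volume term that decays with it, and minimizing the sum yields the classic swelling exponent nu close to 3/5. A minimal numerical sketch of that generic textbook form (not any specific model from the review):

    ```python
    from math import log

    def flory_alpha(N, v=1.0):
        """Minimize the classic Flory free energy
            beta*F(alpha) = (3/2)*alpha**2 + v*sqrt(N)/alpha**3
        over the swelling ratio alpha = R / (b*sqrt(N)) by a grid scan."""
        best, best_f = None, float("inf")
        a = 0.1
        while a < 50.0:
            f = 1.5 * a * a + v * N**0.5 / a**3
            if f < best_f:
                best, best_f = a, f
            a += 1e-3
        return best

    def flory_exponent(N1=1e4, N2=1e6, v=1.0):
        """Effective exponent nu in R ~ b*N**nu, with R = alpha*sqrt(N)*b."""
        R1 = flory_alpha(N1, v) * N1**0.5
        R2 = flory_alpha(N2, v) * N2**0.5
        return log(R2 / R1) / log(N2 / N1)
    ```

    Setting the derivative to zero gives alpha proportional to N**(1/10), hence R proportional to N**(3/5), and the grid scan reproduces that exponent numerically. The stimuli surveyed in the review (co-solvent, electric field) enter as additional terms in the same free energy before minimization.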

  19. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
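    The benefit of the splitting described above is visible already with the first-order forward-backward Euler pair: treat the stiff term (standing in for acoustic modes) implicitly and the slow term explicitly, so the time step is no longer limited by the fast mode. This toy scalar example is illustrative only and is unrelated to the Tempest or ARKode implementations.

    ```python
    def imex_euler(y0, t_end, dt, f_ex, lam):
        """First-order IMEX integration of  y' = f_ex(y) - lam*y,
        treating the stiff linear term implicitly:
            (y_next - y)/dt = f_ex(y) - lam*y_next
        which solves to  y_next = (y + dt*f_ex(y)) / (1 + dt*lam).
        """
        y, t = y0, 0.0
        while t < t_end - 1e-12:
            y = (y + dt * f_ex(y)) / (1.0 + dt * lam)
            t += dt
        return y
    ```

    With lam = 1000 and dt = 0.1, the stiff-stability product dt*lam = 100 would make fully explicit Euler blow up (its stability limit is dt*lam < 2), while the IMEX step relaxes smoothly to the steady state f_ex/lam.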

  20. The Construction of Visual-spatial Situation Models in Children's Reading and Their Relation to Reading Comprehension

    PubMed Central

    Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.

    2014-01-01

    Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376

  1. Argumentation and indigenous knowledge: socio-historical influences in contextualizing an argumentation model in South African schools

    NASA Astrophysics Data System (ADS)

    Gallard Martínez, Alejandro J.

    2011-09-01

    This forum considers argumentation as a means of science teaching in South African schools, through the integration of indigenous knowledge (IK). It addresses issues raised in Mariana G. Hewson and Meshach B. Ogunniyi's paper, Argumentation-teaching as a method to introduce indigenous knowledge into science classrooms: opportunities and challenges, as well as in Peter Easton's Hawks and baby chickens: cultivating the sources of indigenous science education, and in Femi S. Otulaja, Ann Cameron and Audrey Msimanga's Rethinking argumentation-teaching strategies and indigenous knowledge in South African science classrooms. The first topic addressed is that implementing argumentation in the science classroom becomes a complex endeavor when the tensions between students' IK, the educational infrastructure (allowance for teacher professional development, etc.), and local belief systems are made explicit. Secondly, western styles of debate become mitigating factors because they do not always translate adequately to South African culture; for example, in many instances it is more culturally acceptable in South Africa to build consensus than to be confrontational. Thirdly, the tension between what is "authentic science" and what is not becomes an influencing factor when IK is set against western science. Finally, I argue that the thrust of argumentation is to set students up as "scientist-students" who will be viewed through a deficit model that judges their habitus and cultural capital. Explicitly, a "scientist-student" is a student who has "learned," modeled, and thoroughly assimilated the habits of western scientists, and who will be judged by and held accountable for their demonstration of the related behaviors in the science classroom. 
I propose that science teaching, to include argumentation, should consist of "listening carefully" (radical listening) to students and valuing their language, culture, and learning as a model for "science for all".

  2. Rapid Response Tools and Datasets for Post-fire Erosion Modeling: Lessons Learned from the Rock House and High Park Fires

    NASA Astrophysics Data System (ADS)

    Miller, Mary Ellen; Elliot, William E.; MacDonald, Lee H.

    2013-04-01

    Once the danger posed by an active wildfire has passed, land managers must rapidly assess the threat from post-fire runoff and erosion due to the loss of surface cover and fire-induced changes in soil properties. Increased runoff and sediment delivery are of great concern to both the public and resource managers. Post-fire assessments and proposals to mitigate these threats are typically undertaken by interdisciplinary Burned Area Emergency Response (BAER) teams. These teams are under very tight deadlines, so they often begin their analysis while the fire is still burning and typically must complete their plans within a couple of weeks. Many modeling tools and datasets have been developed over the years to assist BAER teams, but process-based, spatially explicit models are currently under-utilized relative to simpler, lumped models because they are more difficult to set up and require the preparation of spatially explicit data layers such as digital elevation models, soils, and land cover. The difficulty of acquiring and utilizing these data layers in spatially explicit models increases with increasing fire size. Spatially explicit post-fire erosion modeling was attempted for a small watershed in the 1270 km2 Rock House fire in Texas, but the erosion modeling work could not be completed in time. The biggest limitation was the time required to extract the spatially explicit soils data needed to run the preferred post-fire erosion model (GeoWEPP with Disturbed WEPP parameters). The solution is to have the spatial soil, land cover, and DEM data layers prepared ahead of time, and to have a clear methodology for the BAER teams to incorporate these layers in spatially explicit modeling interfaces like GeoWEPP. After a fire occurs the data layers can quickly be clipped to the fire perimeter. The soil and land cover parameters can then be adjusted according to the burn severity map, which is one of the first products generated for the BAER teams.
Under a previous project for the U.S. Environmental Protection Agency this preparatory work was done for much of Colorado, and in June 2012 the High Park wildfire in north central Colorado burned over 340 km2. The data layers for the entire burn area were quickly assembled and the spatially explicit runoff and erosion modeling was completed in less than three days. The resulting predictions were then used by the BAER team to quantify downstream risks and delineate priority areas for different post-fire treatments. These two contrasting case studies demonstrate the feasibility and the value of preparing datasets and modeling tools ahead of time. In recognition of this, the U.S. National Aeronautics and Space Administration has agreed to fund a pilot project to demonstrate the utility of acquiring and preparing the necessary data layers for fire-prone wildlands across the western U.S. A similar modeling and data acquisition approach could be followed in other fire-prone regions.

  3. Explicit and implicit cognition: a preliminary test of a dual-process theory of cognitive vulnerability to depression.

    PubMed

    Haeffel, Gerald J; Abramson, Lyn Y; Brazy, Paige C; Shah, James Y; Teachman, Bethany A; Nosek, Brian A

    2007-06-01

    Two studies were conducted to test a dual-process theory of cognitive vulnerability to depression. According to this theory, implicit and explicit cognitive processes have differential effects on depressive reactions to stressful life events. Implicit processes are hypothesized to be critical in determining an individual's immediate affective reaction to stress, whereas explicit cognitions are thought to be more involved in long-term depressive reactions. Consistent with hypotheses, the results of study 1 (cross-sectional; N=237) showed that implicit, but not explicit, cognitions predicted immediate affective reactions to a lab stressor. Study 2 (longitudinal; N=251) also supported the dual-process model of cognitive vulnerability to depression. Results showed that both the implicit and explicit measures interacted with life stress to predict prospective changes in depressive symptoms. However, when both implicit and explicit predictors were entered into a regression equation simultaneously, only the explicit measure interacted with stress to remain a unique predictor of depressive symptoms over the five-week prospective interval.

  4. Isoprene derived secondary organic aerosol in a global aerosol chemistry climate model

    NASA Astrophysics Data System (ADS)

    Stadtler, Scarlet; Kühn, Thomas; Taraborrelli, Domenico; Kokkola, Harri; Schultz, Martin

    2017-04-01

    Secondary organic aerosol (SOA) impacts Earth's climate and human health. Since its precursor chemistry and its formation are not fully understood, climate models cannot capture its direct and indirect effects. Global isoprene emissions are higher than those of any other non-methane hydrocarbon. Therefore, SOA from isoprene-derived, low-volatility species (iSOA) is simulated using the global aerosol chemistry climate model ECHAM6-HAM-SALSA-MOZ. Isoprene oxidation in the chemistry model MOZ follows a novel semi-explicit scheme, embedded in a detailed atmospheric chemical mechanism. Four low-volatility isoprene oxidation products were identified for iSOA formation. The group-contribution method of Nannoolal et al. (2008) was used to estimate their evaporation enthalpies ΔHvap. To calculate the saturation concentration C∗(T), the sectional aerosol model SALSA uses the gas phase concentrations simulated by MOZ and their corresponding ΔHvap to obtain the saturation vapor pressure p∗(T) from the Clausius-Clapeyron equation. Subsequently, the saturation concentration is used to calculate the explicit kinetic partitioning of these compounds forming iSOA. Furthermore, the irreversible heterogeneous reactions of IEPOX and glyoxal from isoprene were included. The possibility of reversible heterogeneous uptake was ignored at this stage, leading to an upper estimate of the contribution of glyoxal to iSOA mass.
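The ΔHvap to C∗(T) step described above can be sketched numerically. The values below are illustrative placeholders (a generic semi-volatile product), not parameters from ECHAM6-HAM-SALSA-MOZ:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def p_sat(T, p_ref, T_ref, dH_vap):
    """Saturation vapor pressure (Pa) at temperature T from the
    integrated Clausius-Clapeyron relation, assuming dH_vap is
    constant over [T_ref, T]."""
    return p_ref * math.exp(-(dH_vap / R) * (1.0 / T - 1.0 / T_ref))

def c_star(T, p_ref, T_ref, dH_vap, molar_mass):
    """Saturation mass concentration C*(T) in ug m^-3 via the ideal
    gas law: C* = p_sat * M / (R * T), converted from kg m^-3."""
    return p_sat(T, p_ref, T_ref, dH_vap) * molar_mass / (R * T) * 1e9

# Hypothetical product: p_ref = 1e-4 Pa at 298 K, dH_vap = 80 kJ/mol,
# molar mass 0.12 kg/mol; cooling by 10 K lowers C* substantially.
print(c_star(288.0, 1e-4, 298.0, 80e3, 0.12))
```

The exponential temperature sensitivity is why ΔHvap, and hence the group-contribution estimate, matters so much for partitioning.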

  5. Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models

    NASA Astrophysics Data System (ADS)

    Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.

    2016-12-01

    Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (~30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.

  6. Finite-size analysis of the detectability limit of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Young, Jean-Gabriel; Desrosiers, Patrick; Hébert-Dufresne, Laurent; Laurence, Edward; Dubé, Louis J.

    2017-06-01

    It has been shown in recent years that the stochastic block model is sometimes undetectable in the sparse limit, i.e., that no algorithm can identify a partition correlated with the partition used to generate an instance, if the instance is sparse enough and infinitely large. In this contribution, we treat the finite case explicitly, using arguments drawn from information theory and statistics. We give a necessary condition for finite-size detectability in the general SBM. We then distinguish the concept of average detectability from the concept of instance-by-instance detectability and give explicit formulas for both definitions. Using these formulas, we prove that there exist large equivalence classes of parameters, where widely different network ensembles are equally detectable with respect to our definitions of detectability. In an extensive case study, we investigate the finite-size detectability of a simplified variant of the SBM, which encompasses a number of important models as special cases. These models include the symmetric SBM, the planted coloring model, and more exotic SBMs not previously studied. We conclude with three appendices, where we study the interplay of noise and detectability, establish a connection between our information-theoretic approach and random matrix theory, and provide proofs of some of the more technical results.
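As a rough illustration of the infinite-size baseline that the paper refines, the sketch below samples a symmetric SBM and evaluates the standard (Kesten-Stigum) detectability condition for the sparse limit. The finite-size corrections are the paper's actual contribution and are not reproduced here:

```python
import numpy as np

def symmetric_sbm(n, q, c_in, c_out, rng):
    """Sample an adjacency matrix from the symmetric stochastic block
    model: n nodes, q groups, sparse convention p = c / n for the
    within- (c_in) and between-group (c_out) connection densities."""
    groups = rng.integers(0, q, size=n)
    same = groups[:, None] == groups[None, :]
    p = np.where(same, c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < p, k=1)  # sample upper triangle
    return (upper | upper.T).astype(int), groups  # symmetrize

def above_detectability_threshold(q, c_in, c_out):
    """Infinite-size condition |c_in - c_out| > q * sqrt(c), where c is
    the mean degree; below it no algorithm beats chance as n -> inf."""
    c = (c_in + (q - 1) * c_out) / q
    return abs(c_in - c_out) > q * np.sqrt(c)

rng = np.random.default_rng(0)
A, groups = symmetric_sbm(200, 2, c_in=8.0, c_out=2.0, rng=rng)
print(above_detectability_threshold(2, 8.0, 2.0))   # well-separated groups
print(above_detectability_threshold(2, 5.5, 4.5))   # too close to detect
```

The point of the paper is that at finite n (here n = 200) this sharp threshold picture must be replaced by graded notions of average and instance-by-instance detectability.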

  7. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to folding of a 46-residue three-helix bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 A (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
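For reference, the core of a generalized Born calculation can be sketched with the classic Still et al. pairwise form; the variant described in the abstract additionally includes salt screening, which this sketch omits, and the inputs below are hypothetical:

```python
import math

def gb_energy(coords, charges, born_radii, eps_w=78.5):
    """Electrostatic solvation free energy (kcal/mol) in the classic
    Still et al. generalized Born form. The i == j terms are the Born
    self-energies; i != j terms screen pairwise Coulomb interactions."""
    ke = 332.0636  # Coulomb constant, kcal mol^-1 A e^-2
    pref = -0.5 * ke * (1.0 - 1.0 / eps_w)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            RiRj = born_radii[i] * born_radii[j]
            f_gb = math.sqrt(r2 + RiRj * math.exp(-r2 / (4.0 * RiRj)))
            e += pref * charges[i] * charges[j] / f_gb
    return e

# Two opposite unit charges 3 A apart, effective Born radii of 2 A:
print(gb_energy([(0, 0, 0), (3, 0, 0)], [1.0, -1.0], [2.0, 2.0]))
```

Because the solvent enters only through this smooth analytic function of coordinates, no water molecules need to be equilibrated after a conformational move, which is the source of the speed-up discussed above.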

  8. Explicit simulation of ice particle habits in a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Hashino, Tempei

    2007-05-01

    This study developed a scheme for explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) Models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterization and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in a NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observation. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysics processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.

  9. Multiscale Simulations of Protein Landscapes: Using Coarse Grained Models as Reference Potentials to Full Explicit Models

    PubMed Central

    Messer, Benjamin M.; Roca, Maite; Chu, Zhen T.; Vicatos, Spyridon; Kilshtain, Alexandra Vardi; Warshel, Arieh

    2009-01-01

    Evaluating the free energy landscape of proteins and the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of simplified coarse grained (CG) folding models offers an effective way of sampling the landscape, but such a treatment may not give the correct description of the effect of the actual protein residues. A general way around this problem, put forward in our early work (Fan et al, Theor Chem Acc (1999) 103:77-80), uses the CG model as a reference potential for free energy calculations of different properties of the explicit model. This method is refined and extended here, focusing on improving the electrostatic treatment and on demonstrating key applications. These applications include: evaluation of changes in folding energy upon mutation; calculation of transition-state binding free energies (which are crucial for rational enzyme design); evaluation of the catalytic landscape; and simulation of time-dependent responses to pH changes. Furthermore, the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins is discussed. PMID:20052756

  10. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
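The stability trade-off between the explicit and implicit integration schemes compared above can be seen on a scalar stiff rate equation dy/dt = -k*y, a toy stand-in for a constitutive rate equation (illustrative only, not the report's formulation):

```python
def forward_euler(k, y0, dt, steps):
    """Explicit update y_{n+1} = y_n + dt * (-k * y_n); stable only
    when dt < 2 / k, otherwise the iteration diverges."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

def backward_euler(k, y0, dt, steps):
    """Implicit update y_{n+1} = y_n + dt * (-k * y_{n+1}), solved
    exactly here: y_{n+1} = y_n / (1 + k * dt). Unconditionally stable."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + k * dt)
    return y

k, y0 = 100.0, 1.0   # stiff relaxation rate
dt = 0.05            # dt > 2/k: too coarse for the explicit scheme
print(abs(forward_euler(k, y0, dt, 50)))   # blows up
print(backward_euler(k, y0, dt, 50))       # decays toward zero
```

Subincrementing schemes, also mentioned above, sit between the two: they keep the cheap explicit update but cut dt adaptively so the stability bound is respected.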

  11. Comparison of Damage Path Predictions for Composite Laminates by Explicit and Standard Finite Element Analysis Tools

    NASA Technical Reports Server (NTRS)

    Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.

    2006-01-01

    Splitting, ultimate failure load and the damage path in center-notched composite specimens subjected to in-plane tension loading are predicted using a progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used in determining intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through the user-written subroutines "VUMAT" and "USDFLD" respectively. A 2-D finite element model is used for predicting the intra-laminar damages. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard codes show good agreement with experimental results. The importance of modeling delamination in progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.
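A minimal sketch of the 2-D Hashin-Rotem check described above, in its textbook form with hypothetical allowables; the actual VUMAT/USDFLD subroutines additionally degrade ply stiffness once an index reaches failure:

```python
def hashin_rotem_2d(s11, s22, t12, Xt, Xc, Yt, Yc, S):
    """2-D Hashin-Rotem failure indices for a unidirectional ply.
    Fiber mode compares axial stress to the tensile/compressive
    allowable; matrix mode combines transverse and shear stress.
    An index >= 1.0 indicates failure in that mode."""
    fiber = (s11 / Xt) if s11 >= 0 else (-s11 / Xc)
    Y = Yt if s22 >= 0 else Yc
    matrix = (s22 / Y) ** 2 + (t12 / S) ** 2
    return {"fiber": fiber, "matrix": matrix}

# Ply stresses (MPa) against illustrative carbon/epoxy allowables:
print(hashin_rotem_2d(s11=900.0, s22=30.0, t12=50.0,
                      Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=200.0, S=70.0))
```

In this example the matrix index exceeds 1 while the fiber index does not, the kind of matrix-dominated damage that produces the splitting discussed in the abstract.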

  12. Does the amygdala response correlate with the personality trait ‘harm avoidance’ while evaluating emotional stimuli explicitly?

    PubMed Central

    2014-01-01

    Background The affective personality trait ‘harm avoidance’ (HA) from Cloninger’s psychobiological personality model determines how an individual deals with emotional stimuli. Emotional stimuli are processed by a neural network that includes the left and right amygdalae as important key nodes. Explicit, implicit and passive processing of affective stimuli are known to activate the amygdalae differently, reflecting differences in attention, level of detailed analysis of the stimuli and the cognitive control needed to perform the required task. Previous studies revealed that implicit processing or passive viewing of affective stimuli induces a left amygdala response that correlates with HA. In this new study we have tried to extend these findings to the situation in which the subjects were required to explicitly process emotional stimuli. Methods A group of healthy female participants was asked to rate the valence of positive and negative stimuli while undergoing fMRI. Afterwards the neural responses of the participants to the positive and to the negative stimuli were separately correlated to their HA scores and compared between the low and high HA participants. Results Both analyses revealed increased neural activity in the left laterobasal (LB) amygdala of the high HA participants while they were rating the positive and the negative stimuli. Conclusions Our results indicate that the left amygdala response to explicit processing of affective stimuli does correlate with HA. PMID:24884791

  13. Causal Loop Analysis of coastal geomorphological systems

    NASA Astrophysics Data System (ADS)

    Payo, Andres; Hall, Jim W.; French, Jon; Sutherland, James; van Maanen, Barend; Nicholls, Robert J.; Reeve, Dominic E.

    2016-03-01

    As geomorphologists embrace ever more sophisticated theoretical frameworks that shift from simple notions of evolution towards single steady equilibria to recognise the possibility of multiple response pathways and outcomes, morphodynamic modellers are facing the problem of how to keep track of an ever-greater number of system feedbacks. Within coastal geomorphology, capturing these feedbacks is critically important, especially as the focus of activity shifts from reductionist models founded on sediment transport fundamentals to more synthesist ones intended to resolve emergent behaviours at decadal to centennial scales. This paper addresses the challenge of mapping the feedback structure of processes controlling geomorphic system behaviour with reference to illustrative applications of Causal Loop Analysis in two case studies: (1) the erosion-accretion behaviour of graded (mixed) sediment beds, and (2) the local alongshore sediment fluxes of sand-rich shorelines. These case studies are chosen because of their central role in the quantitative modelling of geomorphological futures and because they illustrate different types of causation. Causal loop diagrams, a form of directed graph, are used to distil the feedback structure to reveal, in advance of more quantitative modelling, multi-response pathways and multiple outcomes. In the case of the graded sediment bed, up to three different outcomes (no response, and two disequilibrium states) can be derived from a simple qualitative stability analysis. For the sand-rich local shoreline behaviour case, two fundamentally different responses of the shoreline (diffusive and anti-diffusive), triggered by small changes of the shoreline cross-shore position, can be inferred purely through analysis of the causal pathways. Explicit depiction of feedback-structure diagrams is beneficial when developing numerical models to explore coastal morphological futures: by explicitly mapping the feedbacks included and neglected within a model, the modeller can readily assess whether critical feedback loops are included.
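The qualitative bookkeeping behind a causal loop diagram reduces to multiplying edge signs around each loop: a positive product marks a reinforcing loop, a negative product a balancing one. The toy shoreline loop below uses assumed signs purely for illustration, not the paper's actual diagrams:

```python
def classify_loop(loop, sign):
    """Classify a feedback loop (a list of directed edges) from the
    product of its edge signs: reinforcing if positive, balancing
    (self-limiting) if negative."""
    p = 1
    for edge in loop:
        p *= sign[edge]
    return "reinforcing" if p > 0 else "balancing"

# Hypothetical signed edges: +1 means an increase drives an increase,
# -1 means an increase drives a decrease.
sign = {
    ("shoreline_perturbation", "alongshore_flux_gradient"): +1,
    ("alongshore_flux_gradient", "local_erosion"): +1,
    ("local_erosion", "shoreline_perturbation"): -1,
}
loop = list(sign)
print(classify_loop(loop, sign))  # one negative edge -> balancing
```

A balancing loop of this kind corresponds to the diffusive (perturbation-damping) shoreline response; flipping any one sign would yield a reinforcing loop, the anti-diffusive case.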

  14. Implicit and explicit social mentalizing: dual processes driven by a shared neural network

    PubMed Central

    Van Overwalle, Frank; Vandekerckhove, Marie

    2013-01-01

    Recent social neuroscientific evidence indicates that implicit and explicit inferences on the mind of another person (i.e., intentions, attributions or traits) are subserved by a shared mentalizing network. Under both implicit and explicit instructions, ERP studies reveal that early inferences occur at about the same time, and fMRI studies demonstrate an overlap in core mentalizing areas, including the temporo-parietal junction (TPJ) and the medial prefrontal cortex (mPFC). These results suggest a rapid shared implicit intuition followed by a slower explicit verification process (as revealed by additional brain activation during explicit vs. implicit inferences). These data provide support for a default-adjustment dual-process framework of social mentalizing. PMID:24062663

  15. Multispecies lottery competition: a diffusion analysis

    USGS Publications Warehouse

    Hatfield, J.S.; Chesson, P.L.; Tuljapurkar, S.; Caswell, H.

    1997-01-01

    The lottery model is a stochastic competition model designed for space-limited communities of sedentary organisms. Examples of such communities include coral reef fishes, aquatic sessile organisms, and many plant communities. Explicit conditions for the coexistence of two species and the stationary distribution of the two-species model were determined previously using an approximation with a diffusion process. In this chapter, a diffusion approximation is presented for the multispecies model for communities of two or more species, and a stage-structured model is investigated. The stage-structured model would be more reasonable for communities of long-lived species such as trees in a forest in which recruitment and death rates depend on the age or stage of the individuals.
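The deterministic skeleton of the lottery recursion can be sketched as follows; environmental stochasticity in the birth rates, which is what actually permits coexistence in the model, is deliberately omitted, and the parameter values are illustrative:

```python
import numpy as np

def lottery_step(P, beta, delta):
    """One generation of the multispecies lottery model: space freed by
    adult deaths (delta) is recolonized by a lottery among juveniles,
    in proportion to each species' share of larval production beta*P."""
    freed = np.sum(delta * P)
    recruit_share = beta * P / np.sum(beta * P)
    return (1.0 - delta) * P + freed * recruit_share

P = np.array([0.5, 0.3, 0.2])        # fractions of occupied space
beta = np.array([1.0, 1.2, 0.9])     # per-capita reproduction
delta = np.array([0.1, 0.15, 0.1])   # adult death rates
for _ in range(100):
    P = lottery_step(P, beta, delta)
print(P, P.sum())  # occupied fractions still sum to 1
```

Because every vacated site is immediately reallocated, total occupancy is conserved at each step, which is the space-limitation assumption the diffusion analysis builds on.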

  16. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    PubMed

    Asare, Ernest Ohene; Tompkins, Adrian Mark; Bomblies, Arne

    2016-01-01

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.
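The run-off-with-initial-abstraction idea can be illustrated with a toy daily bucket update; the functional form and all parameter values below are hypothetical, not the actual VECTRI scheme:

```python
def pond_fraction_step(w, rain, evap, infil, k_runoff, i_a, w_max):
    """One daily update of a toy pond water-fraction model: only the
    rainfall in excess of the initial abstraction i_a (mm) generates
    run-on to ponds; losses are evaporation plus infiltration, taken
    proportional to the current ponded fraction w."""
    runoff = k_runoff * max(0.0, rain - i_a)
    w = w + runoff - (evap + infil) * w
    return min(max(w, 0.0), w_max)

w = 0.0
daily_rain = [2.0, 0.0, 12.0, 30.0, 1.0, 0.0, 0.0]  # mm/day
for rain in daily_rain:
    w = pond_fraction_step(w, rain, evap=0.05, infil=0.10,
                           k_runoff=0.005, i_a=5.0, w_max=1.0)
    print(round(w, 4))
```

Note how the 2 mm day contributes nothing while the 30 mm day dominates: exactly the correction that fixes the default scheme's overestimate under light rain and underestimate under intense rain.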

  17. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations

    PubMed Central

    Asare, Ernest Ohene; Tompkins, Adrian Mark; Bomblies, Arne

    2016-01-01

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture. PMID:27003834

  18. A different time and place test of ArcHSI: A spatially explicit habitat model for elk in the Black Hills

    Treesearch

    Mark A. Rumble; Lakhdar Benkobi; R. Scott Gamo

    2007-01-01

    We tested predictions of the spatially explicit ArcHSI habitat model for elk. The distribution of elk relative to proximity of forage and cover differed from that predicted. Elk used areas near primary roads similar to that predicted by the model, but elk were farther from secondary roads. Elk used areas categorized as good (> 0.7), fair (> 0.42 to 0.7), and poor...

  19. Pulsar distances and the galactic distribution of free electrons

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.; Cordes, J. M.

    1993-01-01

    The present quantitative model for the Galactic free electron distribution abandons the assumption of axisymmetry and explicitly incorporates spiral arms; their shapes and locations are derived from existing radio and optical observations of H II regions. The Gum Nebula's dispersion-measure contributions are also explicitly modeled. Adjustable quantities are calibrated by reference to three different types of data. The new model is estimated to furnish distance estimates to known pulsars that are accurate to about 25 percent.
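The underlying distance estimate is the dispersion-measure integral DM = ∫ n_e dl inverted; with a constant mean electron density it collapses to a one-liner, and the model's whole point is to replace that constant with spiral-arm structure. A crude sketch with an illustrative mean density:

```python
def dm_distance(dm, n_e_mean=0.03):
    """Crude pulsar distance (pc) from dispersion measure dm
    (pc cm^-3), assuming a single mean electron density n_e_mean
    (cm^-3) along the whole line of sight."""
    return dm / n_e_mean

print(dm_distance(30.0))  # roughly 1 kpc for DM = 30 pc cm^-3
```

Lines of sight through arms or through the Gum Nebula have much higher local n_e, so this constant-density estimate can be badly wrong in exactly the cases the explicit model is built to handle.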

  20. Regional impacts of iron-light colimitation in a global biogeochemical model

    NASA Astrophysics Data System (ADS)

    Galbraith, E. D.; Gnanadesikan, A.; Dunne, J. P.; Hiscock, M. R.

    2009-07-01

    Laboratory and field studies have revealed that iron has multiple roles in phytoplankton physiology, with particular importance for light-harvesting cellular machinery. However, although iron limitation is explicitly included in numerous biogeochemical/ecosystem models, its implementation varies, and its effect on the efficiency of light harvesting is often ignored. Given the complexity of the ocean environment, it is difficult to predict the consequences of applying different iron limitation schemes. Here we explore the interaction of iron and nutrient cycles using a new, streamlined model of ocean biogeochemistry. Building on previously published parameterizations of photoadaptation and export production, the Biogeochemistry with Light, Iron, Nutrients and Gas (BLING) model is constructed with only three explicit tracers but includes macronutrient and micronutrient limitation, light limitation, and an implicit treatment of community structure. The structural simplicity of this computationally inexpensive model allows us to clearly isolate the global effects of iron availability on maximum light-saturated photosynthesis rates from those of photosynthetic efficiency. We find that the effect on light-saturated photosynthesis rates is dominant, negating the importance of photosynthetic efficiency in most regions, especially the cold waters of the Southern Ocean. The primary exceptions to this occur in iron-rich regions of the Northern Hemisphere, where high light-saturated photosynthesis rates cause photosynthetic efficiency to play a more important role. Additionally, we speculate that the small phytoplankton dominating iron-limited regions tend to have relatively high photosynthetic efficiency, such that iron limitation has less of a deleterious effect on growth rates than would be expected from short-term iron addition experiments.
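The two pathways contrasted above, iron limiting the light-saturated rate versus limiting photosynthetic efficiency, can be caricatured with generic Michaelis-Menten and saturating-light forms. These are illustrative functional forms and parameter values, not the BLING equations:

```python
import numpy as np

def growth_rate(I, fe, no3, v_max=1.0, k_fe=0.3, k_n=0.5, alpha=0.05):
    """Schematic phytoplankton growth with iron-light colimitation.
    Iron enters twice: through the Liebig minimum that caps the
    light-saturated rate, and through the initial slope (efficiency)
    of the photosynthesis-irradiance curve."""
    nut = min(fe / (fe + k_fe), no3 / (no3 + k_n))  # Liebig minimum
    v_sat = v_max * nut                              # light-saturated rate
    eff = alpha * (0.5 + 0.5 * fe / (fe + k_fe))     # photosynthetic efficiency
    return v_sat * (1.0 - np.exp(-eff * I / v_sat))

# Same irradiance, low vs high dissolved iron (arbitrary units):
print(growth_rate(I=20.0, fe=0.05, no3=5.0))
print(growth_rate(I=20.0, fe=1.0, no3=5.0))
```

In this caricature, lowering v_sat also makes the light term saturate sooner, which is why the paper finds the light-saturated-rate pathway dominating the efficiency pathway in most regions.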

  1. Continuum Fatigue Damage Modeling for Use in Life Extending Control

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1994-01-01

    This paper develops a simplified continuum (continuous with respect to time, stress, etc.) fatigue damage model for use in Life Extending Control (LEC) studies. The work is based on zero-mean-stress local strain cyclic damage modeling. New nonlinear explicit equation forms of cyclic damage in terms of stress amplitude are derived to facilitate the continuum modeling. Stress-based continuum models are derived. Extensions to plastic strain-strain rate models are also presented. Application of these models to LEC problems is considered. Progress toward a nonzero-mean-stress continuum model is presented, and new nonlinear explicit equation forms in terms of stress amplitude are derived for this case as well.

  2. Importance of spatial autocorrelation in modeling bird distributions at a continental scale

    USGS Publications Warehouse

    Bahn, V.; O'Connor, R.J.; Krohn, W.B.

    2006-01-01

    Spatial autocorrelation in species' distributions has been recognized as inflating the probability of a type I error in hypothesis tests, causing biases in variable selection, and violating the assumption of independence of error terms in models such as correlation or regression. However, it remains unclear whether these problems occur at all spatial resolutions and extents, and under which conditions spatially explicit modeling techniques are superior. Our goal was to determine whether spatial models were superior at large extents and across many different species. In addition, we investigated the importance of purely spatial effects in distribution patterns relative to the variation that could be explained through environmental conditions. We studied distribution patterns of 108 bird species in the conterminous United States using ten years of data from the Breeding Bird Survey. We compared the performance of spatially explicit regression models with non-spatial regression models using Akaike's information criterion. In addition, we partitioned the variance in species distributions into an environmental, a pure spatial and a shared component. The spatially explicit conditional autoregressive regression models strongly outperformed the ordinary least squares regression models. In addition, partialling out the spatial component underlying the species' distributions showed that an average of 17% of the explained variation could be attributed to purely spatial effects independent of the spatial autocorrelation induced by the underlying environmental variables. We concluded that location in the range and neighborhood play an important role in the distribution of species. Spatially explicit models are expected to yield better predictions especially for mobile species such as birds, even in coarse-grained models with a large extent.
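Spatial autocorrelation of the kind at issue here is commonly quantified with Moran's I, which is positive for smooth spatial gradients and negative for checkerboard patterns. A minimal sketch on a toy neighbourhood structure (not the study's data):

```python
import numpy as np

def morans_i(z, W):
    """Moran's I for values z under spatial weight matrix W
    (w_ij > 0 for neighbours, zero diagonal):
    I = (n / sum(W)) * (z' W z) / (z' z) on mean-centred z."""
    z = z - z.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Four sites on a line with rook (adjacent) neighbours:
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(np.array([1.0, 2.0, 3.0, 4.0]), W))    # gradient: I > 0
print(morans_i(np.array([1.0, -1.0, 1.0, -1.0]), W))  # alternating: I < 0
```

Conditional autoregressive models, as used in the study, address the positive case by letting each site's error term borrow strength from its neighbours through just such a weight matrix.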

  3. LES with and without explicit filtering: comparison and assessment of various models

    NASA Astrophysics Data System (ADS)

    Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele

    2000-11-01

    The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
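
    The Smagorinsky effective-viscosity closure mentioned above can be sketched in a few lines of NumPy; the velocity field, grid, and constant Cs below are illustrative choices, not taken from the study:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| for a 2-D
    periodic velocity field on a uniform grid (filter width Delta = dx)."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    # strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))
    return (Cs * dx) ** 2 * Smag

# Illustrative periodic shear flow u = sin(y), v = 0
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
nu_t = smagorinsky_nu_t(np.sin(Y), np.zeros_like(Y), dx=2 * np.pi / n)
print(nu_t.max())
```

The "too active at the large scales" criticism refers to exactly this construction: nu_t is largest wherever the resolved strain rate is large, regardless of whether those scales actually need dissipation.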

  4. The Importance of Explicitly Representing Soil Carbon with Depth over the Permafrost Region in Earth System Models: Implications for Atmospheric Carbon Dynamics at Multiple Temporal Scales between 1960 and 2300.

    NASA Astrophysics Data System (ADS)

    McGuire, A. D.

    2014-12-01

    We conducted an assessment of changes in permafrost area and carbon storage simulated by process-based models between 1960 and 2300. The models participating in this comparison were those that had joined the model integration team of the Vulnerability of Permafrost Carbon Research Coordination Network (see http://www.biology.ufl.edu/permafrostcarbon/). Each of the models in this comparison conducted simulations over the permafrost land region in the Northern Hemisphere driven by CCSM4-simulated climate for RCP 4.5 and 8.5 scenarios. Among the models, the area of permafrost (defined as the area for which active layer thickness was less than 3 m) ranged between 13.2 and 20.0 million km2. Between 1960 and 2300, models indicated a loss of permafrost area of between 5.1 and 6.0 million km2 for RCP 4.5 and between 7.1 and 15.2 million km2 for RCP 8.5. Among the models, the density of soil carbon storage in 1960 ranged between 13 and 42 thousand g C m-2; models that explicitly represented carbon with depth had estimates greater than 27 thousand g C m-2. For the RCP 4.5 scenario, changes in soil carbon between 1960 and 2300 ranged between losses of 32 Pg C to gains of 58 Pg C, in which models that explicitly represent soil carbon with depth simulated losses or lower gains of soil carbon in comparison with those that did not. For the RCP 8.5 scenario, changes in soil carbon between 1960 and 2300 ranged between losses of 642 Pg C to gains of 66 Pg C, in which those models that represent soil carbon explicitly with depth all simulated losses, while those that did not all simulated gains. These results indicate that there are substantial differences in responses of carbon dynamics between models that do and do not explicitly represent soil carbon with depth in the permafrost region. We present analyses of the implications of the differences for atmospheric carbon dynamics at multiple temporal scales between 1960 and 2300.

  5. The Impact of Explicit, Self-Regulatory Reading Comprehension Strategy Instruction on the Reading-Specific Self-Efficacy, Attributions, and Affect of Students with Reading Disabilities

    ERIC Educational Resources Information Center

    Nelson, Jason M.; Manset-Williamson, Genevieve

    2006-01-01

    We compared a reading intervention that consisted of explicit, self-regulatory strategy instruction to a strategy intervention that was less explicit to determine the impact on the reading-specific self-efficacy, attributions, and affect of students with reading disabilities (RD). Participants included 20 students with RD who were entering grades…

  6. Effects of Explicit Instruction on the Acquisition of Students' Science Inquiry Skills in Grades 5 and 6 of Primary Education

    ERIC Educational Resources Information Center

    Kruit, P. M.; Oostdam, R. J.; van den Berg, E.; Schuitema, J. A.

    2018-01-01

    In most primary science classes, students are taught science inquiry skills by way of learning by doing. Research shows that explicit instruction may be more effective. The aim of this study was to investigate the effects of explicit instruction on the acquisition of inquiry skills. Participants included 705 Dutch fifth and sixth graders. Students…

  7. A study of material damping in large space structures

    NASA Technical Reports Server (NTRS)

    Highsmith, A. L.; Allen, D. H.

    1989-01-01

    A constitutive model was developed for predicting damping as a function of damage in continuous fiber reinforced laminated composites. The damage model is a continuum formulation, and uses internal state variables to quantify damage and its subsequent effect on material response. The model is sensitive to the stacking sequence of the laminate. Given appropriate baseline data from unidirectional material, and damping as a function of damage in one crossply laminate, damping can be predicted as a function of damage in other crossply laminates. Agreement between theory and experiment was quite good. A micromechanics model was also developed for examining the influence of damage on damping. This model explicitly includes crack surfaces. The model provides reasonable predictions of bending stiffness as a function of damage. Damping predictions are not in agreement with the experiment. This is thought to be a result of dissipation mechanisms such as friction, which are not presently included in the analysis.

  8. Health research access to personal confidential data in England and Wales: assessing any gap in public attitude between preferable and acceptable models of consent.

    PubMed

    Taylor, Mark J; Taylor, Natasha

    2014-12-01

    England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.

  9. A Dual-Process Approach to the Role of Mother's Implicit and Explicit Attitudes toward Their Child in Parenting Models

    ERIC Educational Resources Information Center

    Sturge-Apple, Melissa L.; Rogge, Ronald D.; Skibo, Michael A.; Peltz, Jack S.; Suor, Jennifer H.

    2015-01-01

    Extending dual process frameworks of cognition to a novel domain, the present study examined how mothers' explicit and implicit attitudes about their child may operate in models of parenting. To assess implicit attitudes, two separate studies were conducted using the same child-focused Go/No-go Association Task (GNAT-Child). In Study 1, model…

  10. Calabi-Yau structures on categories of matrix factorizations

    NASA Astrophysics Data System (ADS)

    Shklyarov, Dmytro

    2017-09-01

    Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.

  11. Does Teaching Students How to Explicitly Model the Causal Structure of Systems Improve Their Understanding of These Systems?

    ERIC Educational Resources Information Center

    Jensen, Eva

    2014-01-01

    If students really understand the systems they study, they would be able to tell how changes in a system would affect a result. This demands that the students understand the mechanisms that drive its behaviour. The study investigates potential merits of learning how to explicitly model the causal structure of systems. The approach and…

  12. An Explicit Algorithm for the Simulation of Fluid Flow through Porous Media

    NASA Astrophysics Data System (ADS)

    Trapeznikova, Marina; Churbanova, Natalia; Lyupa, Anastasiya

    2018-02-01

    The work deals with the development of an original mathematical model of porous medium flow constructed by analogy with the quasigasdynamic system of equations and allowing implementation via explicit numerical methods. The model is generalized to the case of multiphase multicomponent fluid and takes into account possible heat sources. The proposed approach is verified by a number of test predictions.
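
    The paper's quasigasdynamic system is not reproduced here, but the explicit-update pattern such a formulation enables can be illustrated with a generic forward-Euler step for 1-D pressure diffusion in a porous medium; all parameters below are made up for the sketch:

```python
import numpy as np

def explicit_step(p, D, dx, dt):
    """One forward-Euler step of dp/dt = D * d2p/dx2 with fixed boundaries."""
    p_new = p.copy()
    p_new[1:-1] = p[1:-1] + D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    return p_new

nx, D, dx = 101, 1.0, 0.01
dt = 0.4 * dx**2 / D           # explicit stability limit: dt <= 0.5*dx^2/D
p = np.zeros(nx)
p[0] = 1.0                     # injection boundary held at p = 1
for _ in range(2000):
    p = explicit_step(p, D, dx, dt)
print(p[50])                   # pressure has diffused into the domain
```

The appeal of explicit schemes, as the abstract notes, is that each cell update uses only neighboring values from the previous step, which parallelizes trivially; the price is the time-step restriction shown in the stability comment.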

  13. Development and Validation of Spatially Explicit Habitat Models for Cavity-nesting Birds in Fishlake National Forest, Utah

    Treesearch

    Randall A., Jr. Schultz; Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino

    2005-01-01

    The ability of USDA Forest Service Forest Inventory and Analysis (FIA) generated spatial products to increase the predictive accuracy of spatially explicit, macroscale habitat models was examined for nest-site selection by cavity-nesting birds in Fishlake National Forest, Utah. One FIA-derived variable (percent basal area of aspen trees) was significant in the habitat...

  14. Projecting changes in the distribution and productivity of living marine resources: A critical review of the suite of modelling approaches used in the large European project VECTORS

    NASA Astrophysics Data System (ADS)

    Peck, Myron A.; Arvanitidis, Christos; Butenschön, Momme; Canu, Donata Melaku; Chatzinikolaou, Eva; Cucco, Andrea; Domenici, Paolo; Fernandes, Jose A.; Gasche, Loic; Huebert, Klaus B.; Hufnagl, Marc; Jones, Miranda C.; Kempf, Alexander; Keyl, Friedemann; Maar, Marie; Mahévas, Stéphanie; Marchal, Paul; Nicolas, Delphine; Pinnegar, John K.; Rivot, Etienne; Rochette, Sébastien; Sell, Anne F.; Sinerchia, Matteo; Solidoro, Cosimo; Somerfield, Paul J.; Teal, Lorna R.; Travers-Trolet, Morgan; van de Wolfshaar, Karen E.

    2018-02-01

    We review and compare four broad categories of spatially-explicit modelling approaches currently used to understand and project changes in the distribution and productivity of living marine resources including: 1) statistical species distribution models, 2) physiology-based, biophysical models of single life stages or the whole life cycle of species, 3) food web models, and 4) end-to-end models. Single pressures are rare and, in the future, models must be able to examine multiple factors affecting living marine resources such as interactions between: i) climate-driven changes in temperature regimes and acidification, ii) reductions in water quality due to eutrophication, iii) the introduction of alien invasive species, and/or iv) (over-)exploitation by fisheries. Statistical (correlative) approaches can be used to detect historical patterns which may not be relevant in the future. Advancing predictive capacity of changes in distribution and productivity of living marine resources requires explicit modelling of biological and physical mechanisms. New formulations are needed which (depending on the question) will need to strive for more realism in ecophysiology and behaviour of individuals, life history strategies of species, as well as trophodynamic interactions occurring at different spatial scales. Coupling existing models (e.g. physical, biological, economic) is one avenue that has proven successful. However, fundamental advancements are needed to address key issues such as the adaptive capacity of species/groups and ecosystems. The continued development of end-to-end models (e.g., physics to fish to human sectors) will be critical if we hope to assess how multiple pressures may interact to cause changes in living marine resources including the ecological and economic costs and trade-offs of different spatial management strategies. Given the strengths and weaknesses of the various types of models reviewed here, confidence in projections of changes in the distribution and productivity of living marine resources will be increased by assessing model structural uncertainty through biological ensemble modelling.

  15. Global-scale Joint Body and Surface Wave Tomography with Vertical Transverse Isotropy for Seismic Monitoring Applications

    NASA Astrophysics Data System (ADS)

    Simmons, Nathan; Myers, Steve

    2017-04-01

    We continue to develop more advanced models of Earth's global seismic structure with specific focus on improving predictive capabilities for future seismic events. Our most recent version of the model combines high-quality P and S wave body wave travel times and surface-wave group and phase velocities into a joint (simultaneous) inversion process to tomographically image Earth's crust and mantle. The new model adds anisotropy (vertical transverse isotropy), which is necessitated by the addition of surface waves to the tomographic data set. Like previous versions of the model, the new model consists of 59 surfaces and 1.6 million model nodes from the surface to the core-mantle boundary, overlaying a 1-D outer and inner core model. The model architecture is aspherical and we directly incorporate Earth's expected hydrostatic shape (ellipticity and mantle stretching). We also explicitly honor surface undulations including the Moho, several internal crustal units, and the upper mantle transition zone undulations as predicted by previous studies. The explicit Earth model design allows for accurate travel time computation using our unique 3-D ray tracing algorithms, capable of tracing more than 20 distinct seismic phases including crustal, regional, teleseismic, and core phases. Thus, we can now incorporate certain secondary (and sometimes exotic) phases into source location determination and other analyses. New work on model uncertainty quantification assesses the error covariance of the model; when completed, this will enable calculation of path-specific estimates of uncertainty for travel times computed using our previous model (LLNL-G3D-JPS), which is available to the monitoring and broader research community, and we encourage external evaluation and validation. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Measuring Explicit Word Learning of Preschool Children: A Development Study.

    PubMed

    Kelley, Elizabeth Spencer

    2017-08-15

    The purpose of this article is to present preliminary results related to the development of a new measure of explicit word learning. The measure incorporated elements of explicit vocabulary instruction and dynamic assessment and was designed to be sensitive to differences in word learning skill and to be feasible for use in clinical settings. The explicit word learning measure included brief teaching trials and repeated fine-grained measurement of semantic knowledge and production of 3 novel words (2 verbs and 1 adjective). Preschool children (N = 23) completed the measure of explicit word learning; standardized, norm-referenced measures of expressive and receptive vocabulary; and an incidental word learning task. The measure of explicit word learning provided meaningful information about word learning. Performance on the explicit measure was related to existing vocabulary knowledge and incidental word learning. Findings from this development study indicate that further examination of the measure of explicit word learning is warranted. The measure may have the potential to identify children who are poor word learners. https://doi.org/10.23641/asha.5170738.

  17. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability, but they compromise ease of use. In this context, the message-passing model is sometimes referred to as 'assembly programming for scientific computing'. The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, capabilities of the toolkit, and discusses its evolution.
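
    The explicit global-to-local transfer pattern described above can be mimicked in plain NumPy. The toy class below only illustrates the get/put programming style; it is not the real Global Arrays API and performs no actual distribution across processes:

```python
import numpy as np

class ToyGlobalArray:
    """Stand-in for a distributed global array: data lives in one place
    and all access goes through explicit get/put transfers."""
    def __init__(self, shape):
        self._data = np.zeros(shape)   # stands in for distributed storage

    def get(self, lo, hi):
        """Copy a patch of the global array into local storage."""
        return self._data[lo[0]:hi[0], lo[1]:hi[1]].copy()

    def put(self, lo, hi, local):
        """Write local results back to the global array."""
        self._data[lo[0]:hi[0], lo[1]:hi[1]] = local

ga = ToyGlobalArray((8, 8))
patch = ga.get((0, 0), (4, 4))   # explicit transfer: global -> local
patch += 1.0                     # compute on fast local data
ga.put((0, 0), (4, 4), patch)    # explicit transfer: local -> global
print(ga._data.sum())            # 16.0
```

The point of the pattern is visible in the three-line usage: the programmer sees shared-memory semantics (one logical array) but pays for each remote access explicitly, which encourages the data reuse the abstract describes.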

  18. MIZMAS: Modeling the Evolution of Ice Thickness and Floe Size Distributions in the Marginal Ice Zone of the Chukchi and Beaufort Seas

    DTIC Science & Technology

    2015-09-30

    A floe size distribution (FSD) theory is developed and coupled to the ice thickness distribution (ITD) theory of Thorndike et al. (1975) in order to explicitly simulate the joint evolution of the FSD and ITD. Reference: Thorndike, A.S., D.A. Rothrock, G.A. Maykut, and R. Colony (1975), The thickness distribution of sea ice, J. Geophys. Res., 80, 4501-4513.

  19. Charge Equilibrium

    NASA Astrophysics Data System (ADS)

    Sigmund, Peter

    The mean equilibrium charge of a penetrating ion can be estimated on the basis of Bohr's velocity criterion or Lamb's energy criterion. Qualitative and quantitative results are derived on the basis of the Thomas-Fermi model of the atom, which is discussed explicitly. This includes a brief introduction to the Thomas-Fermi-Dirac model. Special attention is paid to trial function approaches by Lenz and Jensen as well as Brandt and Kitagawa. The chapter also offers a preliminary discussion of the role of the stopping medium, gas-solid differences, and a survey of data compilations.

  20. How "Hot Precursors" Modify Island Nucleation: A Rate-Equation Model

    NASA Astrophysics Data System (ADS)

    Morales-Cifuentes, Josue R.; Einstein, T. L.; Pimpinelli, A.

    2014-12-01

    We propose a novel island nucleation and growth model explicitly including transient (ballistic) mobility of the monomers deposited at rate F, assumed to be in a hot precursor state before thermalizing. In limiting regimes, corresponding to fast (diffusive) and slow (ballistic) thermalization, the island density N obeys scaling N ∝ F^α. In between is found a rich, complex behavior, with various distinctive scaling regimes, characterized by effective exponents α_eff and activation energies that we compute exactly. Application to N(F, T) of recent organic-molecule deposition experiments yields an excellent fit.
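
    The effective exponent in a scaling law like N ∝ F^α is just the local log-log slope of N versus F; a minimal sketch with synthetic data (generated with α = 1/3, the classic diffusion-limited value, not the cited experiments):

```python
import numpy as np

# Synthetic island densities obeying N = c * F^(1/3)
F = np.array([0.001, 0.01, 0.1, 1.0])   # deposition rates (arbitrary units)
N = 2.5 * F ** (1.0 / 3.0)

# alpha_eff = slope of log N vs log F
alpha_eff = np.polyfit(np.log(F), np.log(N), 1)[0]
print(round(alpha_eff, 3))   # -> 0.333
```

On real data spanning the crossover between diffusive and ballistic regimes, this slope would drift with F, which is exactly the regime-dependent α_eff behavior the abstract describes.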

  1. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    NASA Astrophysics Data System (ADS)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.

  2. Time Evolution of Modeled Reynolds Stresses in Planar Homogeneous Flows

    NASA Technical Reports Server (NTRS)

    Jongen, T.; Gatski, T. B.

    1997-01-01

    The analytic expression of the time evolution of the Reynolds stress anisotropy tensor in all planar homogeneous flows is obtained by exact integration of the modeled differential Reynolds stress equations. The procedure is based on results of tensor representation theory, is applicable for general pressure-strain correlation tensors, and can account for any additional turbulence anisotropy effects included in the closure. An explicit solution of the resulting system of scalar ordinary differential equations is obtained for the case of a linear pressure-strain correlation tensor. The properties of this solution are discussed, and the dynamic behavior of the Reynolds stresses is studied, including limit cycles and sensitivity to initial anisotropies.
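
    For a constant-coefficient linear model, the kind of exact integration the abstract refers to reduces to a matrix-exponential solution of db/dt = L b + f. The operator and forcing below are invented stand-ins for illustration, not the actual pressure-strain closure:

```python
import numpy as np

def exact_linear_evolution(L, f, b0, t):
    """Exact solution of db/dt = L b + f with constant L, f:
    b(t) = expm(L t) (b0 + L^-1 f) - L^-1 f, via eigendecomposition."""
    w, V = np.linalg.eig(L)
    expLt = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)
    Linv_f = np.linalg.solve(L, f)
    return (expLt @ (b0 + Linv_f) - Linv_f).real

L = np.array([[-1.0, 0.5], [0.0, -2.0]])   # stable relaxation operator (assumed)
f = np.array([0.1, 0.0])                   # constant forcing (assumed)
b0 = np.array([0.3, -0.1])                 # initial anisotropy
b_inf = -np.linalg.solve(L, f)             # equilibrium fixed point
print(exact_linear_evolution(L, f, b0, 50.0), b_inf)
```

With stable eigenvalues the solution relaxes to the fixed point b_inf, mirroring the equilibrium behavior of modeled Reynolds stresses; oscillatory or limit-cycle behavior would appear for complex eigenvalues, as discussed in the paper.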

  3. Spatially explicit quantification of heterogeneous fire effects over long time series: Patterns from two forest types in the northern U.S. Rockies

    Treesearch

    C. E. Naficy; T. T. Veblen; P. F. Hessburg

    2015-01-01

    Within the last decade, mixed-severity fire regimes (MSFRs) have gained increasing attention in both the scientific and management communities (Arno and others 2000, Baker and others 2007, Hessburg and others 2007, Perry and others 2011, Halofsky and others 2011, Stine and others 2014). The growing influence of the MSFR model derives from several factors including: (1...

  4. On the general constraints in single qubit quantum process tomography

    DOE PAGES

    Bhandari, Ramesh; Peters, Nicholas A.

    2016-05-18

    In this study, we briefly review single-qubit quantum process tomography for trace-preserving and nontrace-preserving processes, and derive explicit forms of the general constraints for fitting experimental data. These forms provide additional insight into the structure of the process matrix. We illustrate this with several examples, including a discussion of qubit leakage error models and the intuition which can be gained from their process matrices.

  5. Methods used to parameterize the spatially-explicit components of a state-and-transition simulation model

    USGS Publications Warehouse

    Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.

    2015-01-01

    Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
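
    A distance-decay spatial multiplier of the kind described can be sketched as follows; the exponential kernel and decay rate are illustrative choices, not the LUCAS parameterization:

```python
import numpy as np

# One existing urban cell seeds the multiplier surface.
developed = np.zeros((50, 50), dtype=bool)
developed[25, 25] = True

# Distance of every cell from the developed cell
yy, xx = np.indices(developed.shape)
dist = np.hypot(yy - 25, xx - 25)

decay = 0.2                    # per-cell decay rate (assumed)
multiplier = np.exp(-decay * dist)
multiplier[developed] = 0.0    # already-developed cells cannot transition

# Normalize to a probability surface for placing new urbanization
prob = multiplier / multiplier.sum()
print(prob.argmax())           # nearest neighbors of the seed rank highest
```

In a full state-and-transition run, the simulator would sample transition locations from surfaces like `prob`, so new urbanization clusters around existing development rather than appearing at random.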

  6. Supporting the operational use of process based hydrological models and NASA Earth Observations for use in land management and post-fire remediation through a Rapid Response Erosion Database (RRED).

    NASA Astrophysics Data System (ADS)

    Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.

    2017-12-01

    We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.

  7. Actin-mediated bacterial propulsion: comet profile, velocity pulsations.

    PubMed

    Benza, V G

    2008-05-23

    The propulsion of bacteria under the action of an actin gel network is examined in terms of gel concentration dynamics. The model includes the elasticity of the network, the gel-bacterium interaction, the bulk and interface polymerization. A formula for the cruise velocity is obtained where the contributions to bacterial motility arising from elasticity and polymerization are made explicit. Higher velocities correspond to lower concentration peaks and longer tails, in agreement with experimental results. The condition for the onset of motion is explicitly given. The behavior of the system is explored by varying the growth rates and the gel elasticity. At steady state two regimes are found, respectively, of constant and pulsating velocity; in the latter case, the velocity undergoes sudden accelerations and subsequent recoveries. The transition to the pulsating regime is obtained by increasing the elastic response of the gel.

  8. Investigating implicit knowledge in ontologies with application to the anatomical domain.

    PubMed

    Zhang, S; Bodenreider, O

    2004-01-01

    Knowledge in biomedical ontologies can be explicitly represented (often by means of semantic relations), but may also be implicit, i.e., embedded in the concept names and inferable from various combinations of semantic relations. This paper investigates implicit knowledge in two ontologies of anatomy: the Foundational Model of Anatomy and GALEN. The methods consist of extracting the knowledge explicitly represented, acquiring the implicit knowledge through augmentation and inference techniques, and identifying the origin of each semantic relation. The number of relations (12 million in FMA and 4.6 million in GALEN), broken down by source, is presented. Major findings include: each technique provides specific relations; and many relations can be generated by more than one technique. The application of these findings to ontology auditing, validation, and maintenance is discussed, as well as the application to ontology integration.
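
    The inference step that makes implicit knowledge explicit can be illustrated with a toy part-of hierarchy closed under transitivity; the concept names are made up, not drawn from FMA or GALEN:

```python
# Explicitly represented relations (subject, relation, object)
explicit = {
    ("left_ventricle", "part_of", "heart"),
    ("heart", "part_of", "thorax"),
    ("mitral_valve", "part_of", "left_ventricle"),
}

def infer_part_of(relations):
    """Close part-of under transitivity; the newly derived facts are
    the implicit knowledge embedded in the explicit relations."""
    facts = set(relations)
    changed = True
    while changed:
        changed = False
        for a, _, b in list(facts):
            for c, _, d in list(facts):
                if b == c and (a, "part_of", d) not in facts:
                    facts.add((a, "part_of", d))
                    changed = True
    return facts - set(relations)   # implicit relations only

for fact in sorted(infer_part_of(explicit)):
    print(fact)
```

From three explicit facts the closure derives three implicit ones (e.g. mitral_valve part_of thorax), a small-scale analogue of how 12 million relations in FMA can arise from a much smaller explicit core.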

  9. A data management system for engineering and scientific computing

    NASA Technical Reports Server (NTRS)

    Elliot, L.; Kunii, H. S.; Browne, J. C.

    1978-01-01

    Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.
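
The special array classes mentioned above can be sketched in modern terms. The following hypothetical `TriangularArray` (the original system expressed such types through FORTRAN extensions, not this syntax) shows how the lower triangle of a symmetric matrix can be packed into a flat buffer, roughly halving storage:

```python
# Hedged sketch of a packed triangular array (invented modern analogue).
class TriangularArray:
    """Stores only the lower triangle of a symmetric n x n matrix."""
    def __init__(self, n):
        self.n = n
        self.data = [0.0] * (n * (n + 1) // 2)  # n(n+1)/2 slots, not n^2

    def _index(self, i, j):
        if j > i:
            i, j = j, i  # symmetry: (i, j) and (j, i) share storage
        return i * (i + 1) // 2 + j

    def __getitem__(self, ij):
        return self.data[self._index(*ij)]

    def __setitem__(self, ij, value):
        self.data[self._index(*ij)] = value

t = TriangularArray(4)   # 10 stored values instead of 16
t[2, 1] = 5.0            # also readable as t[1, 2]
```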

  10. Global modeling of withdrawal, allocation and consumptive use of surface water and groundwater resources

    NASA Astrophysics Data System (ADS)

    Wada, Y.; Wisser, D.; Bierkens, M. F. P.

    2013-02-01

    To sustain growing food demand and increasing standard of living, global water withdrawal and consumptive water use have been increasing rapidly. To analyze the human perturbation on water resources consistently over a large scale, a number of macro-scale hydrological models (MHMs) have been developed over the recent decades. However, few models consider the feedback between water availability and water demand, and even fewer models explicitly incorporate water allocation from surface water and groundwater resources. Here, we integrate a global water demand model into a global water balance model, and simulate water withdrawal and consumptive water use over the period 1979-2010, considering water allocation from surface water and groundwater resources and explicitly taking into account feedbacks between supply and demand, using two re-analysis products: ERA-Interim and MERRA. We implement an irrigation water scheme, which works dynamically with daily surface and soil water balance, and include a newly available extensive reservoir data set. Simulated surface water and groundwater withdrawal show generally good agreement with available reported national and sub-national statistics. The results show a consistent increase in both surface water and groundwater use worldwide, but groundwater use has been increasing more rapidly than surface water use since the 1990s. Human impacts on terrestrial water storage (TWS) signals are evident, altering the seasonal and inter-annual variability. The alteration is particularly large over the heavily regulated basins such as the Colorado and the Columbia, and over the major irrigated basins such as the Mississippi, the Indus, and the Ganges. Including human water use generally improves the correlation of simulated TWS anomalies with those of the GRACE observations.

  11. Assessment of implicit health attitudes: a multitrait-multimethod approach and a comparison between patients with hypochondriasis and patients with anxiety disorders.

    PubMed

    Weck, Florian; Höfling, Volkmar

    2015-01-01

    Two adaptations of the Implicit Association Test (IAT) were used to assess implicit anxiety (IAT-Anxiety) and implicit health attitudes (IAT-Hypochondriasis) in patients with hypochondriasis (n = 58) and anxiety patients (n = 71). Explicit anxieties and health attitudes were assessed using questionnaires. The analysis of several multitrait-multimethod models indicated that the low correlation between explicit and implicit measures of health attitudes is due to the substantial methodological differences between the IAT and the self-report questionnaire. Patients with hypochondriasis displayed significantly more dysfunctional explicit and implicit health attitudes than anxiety patients, but no differences were found regarding explicit and implicit anxieties. The study demonstrates the specificity of explicit and implicit dysfunctional health attitudes among patients with hypochondriasis.

  12. Free energy landscape of protein folding in water: explicit vs. implicit solvent.

    PubMed

    Zhou, Ruhong

    2003-11-01

    The Generalized Born (GB) continuum solvent model is arguably the most widely used implicit solvent model in protein folding and protein structure prediction simulations; however, how well the model behaves in these large-scale simulations remains an open question. The current study uses the beta-hairpin from the C-terminus of protein G as an example to explore the folding free energy landscape with various GB models, and the results are compared to explicit solvent simulations and experiments. All free energy landscapes are obtained from extensive conformation space sampling with a highly parallel replica exchange method. Because solvation model parameters are strongly coupled with force fields, five force field/solvation model combinations are examined and compared in this study, namely the explicit solvent model OPLSAA/SPC and the implicit solvent models OPLSAA/SGB (Surface GB), AMBER94/GBSA (GB with Solvent Accessible Surface Area), AMBER96/GBSA, and AMBER99/GBSA. Surprisingly, we find that the free energy landscapes from the implicit solvent models are quite different from that of the explicit solvent model. Except for AMBER96/GBSA, all of the implicit solvent models identify a lowest free energy state other than the native state. All implicit solvent models show erroneous salt-bridge effects between charged residues, particularly the OPLSAA/SGB model, where the overly strong salt-bridge effect results in an overweighting of a non-native structure in which the hydrophobic residue F52 is expelled from the hydrophobic core in order to make better salt bridges. On the other hand, both the AMBER94/GBSA and AMBER99/GBSA models turn the beta-hairpin into an alpha-helix, with an alpha-helical content much higher than that previously reported for an explicit solvent simulation with AMBER94 (AMBER94/TIP3P). Only AMBER96/GBSA shows a reasonable free energy landscape, with the native state as the lowest free energy structure, despite an erroneous salt bridge between D47 and K50. Detailed results on free energy contour maps, lowest free energy structures, distribution of native contacts, alpha-helical content during the folding process, NOE comparison with NMR, and temperature dependences are reported and discussed for all five models. Copyright 2003 Wiley-Liss, Inc.
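
The highly parallel replica exchange method used for sampling can be illustrated on a toy double-well potential. This is a minimal serial sketch showing the Metropolis moves and the swap criterion between neighbouring temperatures, with invented parameters, not the production protein simulation:

```python
import math, random

# Toy replica exchange on a double-well potential (invented parameters).
def energy(x):
    return (x * x - 1.0) ** 2  # double well with minima at x = +/- 1

def replica_exchange(temps, steps=2000, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * len(temps)  # one walker per temperature
    for _ in range(steps):
        for i, T in enumerate(temps):      # local Metropolis updates
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(trial) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = trial
        for i in range(len(temps) - 1):    # attempt neighbour swaps
            d_beta = 1.0 / temps[i] - 1.0 / temps[i + 1]
            d_en = energy(xs[i + 1]) - energy(xs[i])
            if d_beta * d_en <= 0 or rng.random() < math.exp(-d_beta * d_en):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

Swaps let a configuration trapped in one well at low temperature escape via the hotter replicas, which is the point of the method.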

  13. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-01

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  14. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation.

    PubMed

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-07

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  15. Rethinking the solar flare paradigm

    NASA Astrophysics Data System (ADS)

    Melrose, D. B.

    2018-07-01

    It is widely accepted that solar flares involve release of magnetic energy stored in the solar corona above an active region, but existing models do not include the explicitly time-dependent electrodynamics needed to describe such energy release. A flare paradigm is discussed that includes the electromotive force (EMF) as the driver of the flare, and the flare-associated current that links different regions where magnetic reconnection, electron acceleration, the acceleration of mass motions and current closure occur. The EMF becomes localized across regions where energy conversion occurs, and is involved in energy propagation between these regions.

  16. Energy consumption for shortcuts to adiabaticity

    NASA Astrophysics Data System (ADS)

    Torrontegui, E.; Lizuain, I.; González-Resines, S.; Tobalina, A.; Ruschhaupt, A.; Kosloff, R.; Muga, J. G.

    2017-08-01

    Shortcuts to adiabaticity let a system reach the results of a slow adiabatic process in a shorter time. We propose to quantify the "energy cost" of the shortcut by the energy consumption of the system enlarged by including the control device. A mechanical model where the dynamics of the system and control device can be explicitly described illustrates that a broad range of consumption values is possible, including zero (above the adiabatic energy increment) when friction is negligible and the energy given away as negative power is stored and reused by perfect regenerative braking.

  17. Precision tools and models to narrow in on the 750 GeV diphoton resonance

    NASA Astrophysics Data System (ADS)

    Staub, Florian; Athron, Peter; Basso, Lorenzo; Goodsell, Mark D.; Harries, Dylan; Krauss, Manuel E.; Nickel, Kilian; Opferkuch, Toby; Ubaldi, Lorenzo; Vicente, Avelino; Voigt, Alexander

    2016-09-01

    The hints for a new resonance at 750 GeV from ATLAS and CMS have triggered a significant amount of attention. Since the simplest extensions of the standard model cannot accommodate the observation, many alternatives have been considered to explain the excess. Here we focus on several proposed renormalisable weakly-coupled models and revisit results given in the literature. We point out that physically important subtleties are often missed or neglected. To facilitate the study of the excess we have created a collection of 40 model files, selected from recent literature, for the Mathematica package SARAH. With SARAH one can generate files to perform numerical studies using the tailor-made spectrum generators FlexibleSUSY and SPheno. These have been extended to automatically include crucial higher order corrections to the diphoton and digluon decay rates for both CP-even and CP-odd scalars. Additionally, we have extended the UFO and CalcHep interfaces of SARAH, to pass the precise information about the effective vertices from the spectrum generator to a Monte-Carlo tool. Finally, as an example to demonstrate the power of the entire setup, we present a new supersymmetric model that accommodates the diphoton excess, explicitly demonstrating how a large width can be obtained. We explicitly show several steps in detail to elucidate the use of these public tools in the precision study of this model.

  18. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.

  19. Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium

    NASA Astrophysics Data System (ADS)

    González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César

    2018-01-01

    This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
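
As a generic illustration of the θ-method discussed above, the following sketch advances the simple diffusion equation u_t = D u_xx one step on a uniform grid with zero Dirichlet boundaries; θ = 0, 1, and 0.5 recover the explicit, fully implicit, and Crank-Nicolson schemes, respectively. It is a toy problem, not the dual-porosity model itself:

```python
# Generic theta-method for u_t = D u_xx with u = 0 at both boundaries.
def theta_step(u, D, dx, dt, theta):
    n = len(u)
    r = D * dt / dx ** 2
    # Explicit part of the scheme goes to the right-hand side.
    rhs = [0.0] * n
    for i in range(1, n - 1):
        rhs[i] = u[i] + (1.0 - theta) * r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    # Implicit part: solve the constant-coefficient tridiagonal system
    # (-theta r) u[i-1] + (1 + 2 theta r) u[i] + (-theta r) u[i+1] = rhs[i]
    # with the Thomas algorithm; boundary values stay zero.
    a, b, c = -theta * r, 1.0 + 2.0 * theta * r, -theta * r
    cp = [0.0] * n
    dp = [0.0] * n
    for i in range(1, n - 1):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    new = [0.0] * n
    for i in range(n - 2, 0, -1):
        new[i] = dp[i] - cp[i] * new[i + 1]
    return new
```

For θ = 0 the tridiagonal solve degenerates to a copy of the right-hand side, matching the usual explicit update.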

  20. The Computerized Anatomical Man (CAM) model

    NASA Technical Reports Server (NTRS)

    Billings, M. P.; Yucker, W. R.

    1973-01-01

    A computerized anatomical man (CAM) model, representing the most detailed and anatomically correct geometrical model of the human body yet prepared, has been developed for use in analyzing radiation dose distribution in man. This model of a 50-percentile standing USAF man comprises some 1100 unique geometric surfaces and some 2450 solid regions. Internal body geometry such as organs, voids, bones, and bone marrow are explicitly modeled. A computer program called CAMERA has also been developed for performing analyses with the model. Such analyses include tracing rays through the CAM geometry, placing results on magnetic tape in various forms, collapsing areal density data from ray tracing information to areal density distributions, preparing cross section views, etc. Numerous computer drawn cross sections through the CAM model are presented.

  1. CP violation in heavy MSSM Higgs scenarios

    DOE PAGES

    Carena, M.; Ellis, J.; Lee, J. S.; ...

    2016-02-18

    We introduce and explore new heavy Higgs scenarios in the Minimal Supersymmetric Standard Model (MSSM) with explicit CP violation, which have important phenomenological implications that may be testable at the LHC. For soft supersymmetry-breaking scales M S above a few TeV and a charged Higgs boson mass M H+ above a few hundred GeV, new physics effects including those from explicit CP violation decouple from the light Higgs boson sector. However, such effects can significantly alter the phenomenology of the heavy Higgs bosons while still being consistent with constraints from low-energy observables, for instance electric dipole moments. To consider scenarios with a charged Higgs boson much heavier than the Standard Model (SM) particles but much lighter than the supersymmetric particles, we revisit previous calculations of the MSSM Higgs sector. We compute the Higgs boson masses in the presence of CP violating phases, implementing improved matching and renormalization-group (RG) effects, as well as two-loop RG effects from the effective two-Higgs Doublet Model (2HDM) scale M H± to the scale M S. Here, we illustrate the possibility of non-decoupling CP-violating effects in the heavy Higgs sector using new benchmark scenarios named.

  2. Heteroskedasticity as a leading indicator of desertification in spatially explicit data.

    PubMed

    Seekell, David A; Dakos, Vasilis

    2015-06-01

    Regime shifts are abrupt transitions between alternate ecosystem states, including desertification in arid regions due to drought or overgrazing. Regime shifts may be preceded by statistical anomalies such as increased autocorrelation, indicating declining resilience and warning of an impending shift. Tests for conditional heteroskedasticity, a type of clustered variance, have proven powerful leading indicators for regime shifts in time series data, but an analogous indicator for spatial data has not been evaluated. A spatial analog for conditional heteroskedasticity might be especially useful in arid environments where spatial interactions are critical in structuring ecosystem pattern and process. We tested the efficacy of a test for spatial heteroskedasticity as a leading indicator of regime shifts with simulated data from spatially extended vegetation models with regular and scale-free patterning. These models simulate shifts from extensive vegetative cover to bare, desert-like conditions. The magnitude of spatial heteroskedasticity increased consistently as the modeled systems approached a regime shift from a vegetated to a desert state. Relative to spatial autocorrelation, spatial heteroskedasticity increased earlier and more consistently. We conclude that tests for spatial heteroskedasticity can contribute to the growing toolbox of early warning indicators for regime shifts analyzed with spatially explicit data.
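
The idea behind a spatial heteroskedasticity indicator can be sketched as follows: remove the mean, then ask whether the squared deviations (the local variance) cluster in space. The statistic below, a row-wise lag-1 autocorrelation of squared residuals on a lattice, is an invented illustration of the concept, not the authors' exact test:

```python
# Invented illustration of spatial heteroskedasticity: does the local
# variance (squared residual) cluster along rows of a lattice?
def lag1_autocorr(values):
    n = len(values)
    mean = sum(values) / n
    den = sum((v - mean) ** 2 for v in values)
    if den == 0.0:
        return 0.0  # constant series: no clustering signal
    num = sum((values[i] - mean) * (values[i + 1] - mean)
              for i in range(n - 1))
    return num / den

def spatial_hsk_indicator(grid):
    flat = [v for row in grid for v in row]
    mean = sum(flat) / len(flat)
    sq_resid = [[(v - mean) ** 2 for v in row] for row in grid]
    # Average row-wise lag-1 autocorrelation of squared residuals;
    # values above zero suggest spatially clustered variance.
    return sum(lag1_autocorr(row) for row in sq_resid) / len(sq_resid)
```

A grid whose variance is concentrated on one side scores higher than a grid with the same total variance spread evenly.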

  3. Bee++: An Object-Oriented, Agent-Based Simulator for Honey Bee Colonies

    PubMed Central

    Betti, Matthew; LeClair, Josh; Wahl, Lindi M.; Zamir, Mair

    2017-01-01

    We present a model and associated simulation package (www.beeplusplus.ca) to capture the natural dynamics of a honey bee colony in a spatially-explicit landscape, with temporally-variable, weather-dependent parameters. The simulation tracks bees of different ages and castes, food stores within the colony, pollen and nectar sources and the spatial position of individual foragers outside the hive. We track explicitly the intake of pesticides in individual bees and their ability to metabolize these toxins, such that the impact of sub-lethal doses of pesticides can be explored. Moreover, pathogen populations (in particular, Nosema apis, Nosema ceranae and Varroa mites) have been included in the model and may be introduced at any time or location. The ability to study interactions among pesticides, climate, biodiversity and pathogens in this predictive framework should prove useful to a wide range of researchers studying honey bee populations. To this end, the simulation package is written in open source, object-oriented code (C++) and can be easily modified by the user. Here, we demonstrate the use of the model by exploring the effects of sub-lethal pesticide exposure on the flight behaviour of foragers. PMID:28287445

  4. CP violation in heavy MSSM Higgs scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, M.; Ellis, J.; Lee, J. S.

    We introduce and explore new heavy Higgs scenarios in the Minimal Supersymmetric Standard Model (MSSM) with explicit CP violation, which have important phenomenological implications that may be testable at the LHC. For soft supersymmetry-breaking scales M S above a few TeV and a charged Higgs boson mass M H+ above a few hundred GeV, new physics effects including those from explicit CP violation decouple from the light Higgs boson sector. However, such effects can significantly alter the phenomenology of the heavy Higgs bosons while still being consistent with constraints from low-energy observables, for instance electric dipole moments. To consider scenarios with a charged Higgs boson much heavier than the Standard Model (SM) particles but much lighter than the supersymmetric particles, we revisit previous calculations of the MSSM Higgs sector. We compute the Higgs boson masses in the presence of CP violating phases, implementing improved matching and renormalization-group (RG) effects, as well as two-loop RG effects from the effective two-Higgs Doublet Model (2HDM) scale M H± to the scale M S. Here, we illustrate the possibility of non-decoupling CP-violating effects in the heavy Higgs sector using new benchmark scenarios named.

  5. Spatially Explicit Simulation of Mesotopographic Controls on Peatland Hydrology and Carbon Fluxes

    NASA Astrophysics Data System (ADS)

    Sonnentag, O.; Chen, J. M.; Roulet, N. T.

    2006-12-01

    A number of field carbon flux measurements, paleoecological records, and model simulations have acknowledged the importance of northern peatlands in terrestrial carbon cycling and methane emissions. An important parameter in peatlands that influences both net primary productivity, the net gain of carbon through photosynthesis, and decomposition under aerobic and anaerobic conditions is the position of the water table. Biological and physical processes involved in peatland carbon dynamics and their hydrological controls operate at different spatial scales. The highly variable hydraulic characteristics of the peat profile and the overall shape of the peat body, as defined by its surface topography at the mesoscale (10^4 m^2), are of major importance for peatland water table dynamics. Common types of peatlands include bogs with a slightly domed centre. As a result of this convex profile, their water supply is restricted to atmospheric inputs, and water is mainly shed by shallow subsurface flow. From a modelling perspective, the influence of mesotopographic controls on peatland hydrology, and thus on the carbon balance, requires that process-oriented models examining the links between peatland hydrology, ecosystem functioning, and climate incorporate some form of lateral subsurface flow. Most hydrological and ecological modelling studies in complex terrain explicitly account for the topographic controls on lateral subsurface flow through digital elevation models. However, modelling studies in peatlands often employ simple empirical parameterizations of lateral subsurface flow, neglecting the influence of peatlands' low-relief mesoscale topography. Our objective is to explicitly simulate the mesotopographic controls on peatland hydrology and carbon fluxes using the Boreal Ecosystem Productivity Simulator (BEPS) adapted to northern peatlands. BEPS is a process-oriented ecosystem model in a remote sensing framework that accounts for the peatlands' multi-layer canopy through vertically stratified mapped leaf area index. Model outputs are validated against multi-year measurements taken at an eddy-covariance flux tower located within Mer Bleue bog, a typical raised bog near Ottawa, Ontario, Canada. Model results for seasonal water table dynamics and evapotranspiration at daily time steps in 2003 are in good agreement with measurements, with R2 = 0.74 and R2 = 0.79, respectively, and indicate the suitability of our approach.

  6. A specific implicit sequence learning deficit as an underlying cause of dyslexia? Investigating the role of attention in implicit learning tasks.

    PubMed

    Staels, Eva; Van den Broeck, Wim

    2017-05-01

    Recently, a general implicit sequence learning deficit was proposed as an underlying cause of dyslexia. This new hypothesis was investigated in the present study by including a number of methodological improvements, for example, the inclusion of appropriate control conditions. The second goal of the study was to explore the role of attentional functioning in implicit and explicit learning tasks. In a 2 × 2 within-subjects design, four tasks were administered to 30 dyslexic and 38 control children: an implicit and explicit serial reaction time (RT) task and an implicit and explicit contextual cueing task. Attentional functioning was also assessed. The entire learning curves of all tasks were analyzed using latent growth curve modeling in order to compare performances between groups and to examine the role of attentional functioning on the learning curves. The amount of implicit learning was similar for both groups. However, the dyslexic group showed slower RTs throughout the entire task. This group difference reduced and became nonsignificant after controlling for attentional functioning. Both implicit learning tasks, but none of the explicit learning tasks, were significantly affected by attentional functioning. Dyslexic children do not suffer from a specific implicit sequence learning deficit. The slower RTs of the dyslexic children throughout the entire implicit sequence learning process are caused by their comorbid attention problems and overall slowness. A key finding of the present study is that, in contrast to what was assumed for a long time, implicit learning relies on attentional resources, perhaps even more than explicit learning does. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Explicit and implicit springback simulation in sheet metal forming using fully coupled ductile damage and distortional hardening model

    NASA Astrophysics Data System (ADS)

    Yetna n'jock, M.; Houssem, B.; Labergere, C.; Saanouni, K.; Zhenming, Y.

    2018-05-01

    Springback is an important phenomenon which accompanies the forming of metallic sheets, especially for high strength materials. A quantitative prediction of springback becomes very important for newly developed materials with high mechanical characteristics. In this work, a numerical methodology is developed to quantify this undesirable phenomenon. This methodology is based on the use of both the explicit and implicit finite element solvers of Abaqus®. The most important ingredient of this methodology is the use of a highly predictive mechanical model. A thermodynamically-consistent, non-associative and fully anisotropic elastoplastic constitutive model strongly coupled with isotropic ductile damage and accounting for distortional hardening is then used. An algorithm for local integration of the complete set of constitutive equations is developed. This algorithm uses the rotated frame formulation (RFF) to ensure the incremental objectivity of the model in the framework of finite strains, and is implemented in both the explicit (Abaqus/Explicit®) and implicit (Abaqus/Standard®) solvers of Abaqus® through the user routines VUMAT and UMAT, respectively. The implicit solver of Abaqus® has been used to study springback, as it is generally a quasi-static unloading. In order to compare the methods' efficiency, the explicit dynamic relaxation method proposed by Rayleigh has also been used for springback prediction. The results obtained within the U draw/bending benchmark are studied, discussed and compared with experimental results as the reference. Finally, the purpose of this work is to evaluate the reliability of the different methods in efficiently predicting springback in sheet metal forming.
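
The dynamic relaxation idea mentioned above, reaching a static equilibrium by damped explicit time stepping, can be sketched on a single spring-mass system. All parameter values are invented; this is not the Abaqus® implementation:

```python
# Toy dynamic relaxation: damped explicit stepping of one spring-mass
# system until it settles at static equilibrium x = f_ext / k.
# All parameter values are invented.
def dynamic_relaxation(k=10.0, m=1.0, f_ext=5.0, c=2.0, dt=0.01,
                       tol=1e-8, max_steps=100000):
    x, v = 0.0, 0.0
    for _ in range(max_steps):
        f = f_ext - k * x - c * v     # static residual plus damping
        v += dt * f / m               # semi-implicit Euler update
        x += dt * v
        if abs(f_ext - k * x) < tol and abs(v) < tol:
            break
    return x
```

The damping term dissipates the kinetic energy of the transient, so the explicit march converges to the same static solution an implicit solver would compute directly.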

  8. Free energy landscapes of small peptides in an implicit solvent model determined by force-biased multicanonical molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Watanabe, Yukihisa S.; Kim, Jae Gil; Fukunishi, Yoshifumi; Nakamura, Haruki

    2004-12-01

    In order to investigate whether the implicit solvent (GB/SA) model can reproduce the free energy landscapes of peptides, the potentials of mean force (PMFs) of eight tripeptides were examined and compared with the PMFs of the explicit water model. The force-biased multicanonical molecular dynamics method was used for enhanced conformational sampling. Consequently, the GB/SA model reproduced almost all the global and local minima in the PMFs observed with the explicit water model. However, the GB/SA model overestimated the frequencies of structures that are stabilized by intra-peptide hydrogen bonds.

  9. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients in pressure and activity; and the ability to describe both high and low concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  10. THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchall, A.; Vostrotin, V.; Puncher, M.

    The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consists mainly of activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models, and unlike its predecessors, takes explicit account of uncertainties in both the measurement data and model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties), the methodology used (including all the relevant equations), and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.

  11. A watershed-based spatially-explicit demonstration of an integrated environmental modeling framework for ecosystem services in the Coal River Basin (WV, USA)

    Treesearch

    John M. Johnston; Mahion C. Barber; Kurt Wolfe; Mike Galvin; Mike Cyterski; Rajbir Parmar; Luis Suarez

    2016-01-01

    We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quantity, habitat suitability for aquatic biota, fish biomasses, population densities, ...

  12. Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method

    NASA Astrophysics Data System (ADS)

    Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.

    2017-10-01

    The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
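
    The scale-bridging step described above (estimate Weibull statistics from an ensemble of samples at one level, then reuse them as effective properties at the next) can be sketched as follows. This is a minimal illustration assuming a standard median-rank regression fit and synthetic strength data; it is not the authors' actual procedure or values:

```python
import numpy as np

def fit_weibull(strengths):
    """Estimate the Weibull modulus m and scale s0 from sample strengths
    using the standard median-rank linear regression."""
    x = np.sort(np.asarray(strengths, dtype=float))
    n = len(x)
    # Median-rank estimate of the failure probability of each sample
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    # Linearised Weibull CDF: ln(-ln(1 - p)) = m*ln(x) - m*ln(s0)
    y = np.log(-np.log(1.0 - p))
    m, c = np.polyfit(np.log(x), y, 1)
    return m, np.exp(-c / m)

# Synthetic strengths standing in for one scale level's simulated samples
# (scale 400 MPa and modulus 8 are assumed, illustrative values)
rng = np.random.default_rng(0)
samples = 400.0 * rng.weibull(8.0, size=200)
m, s0 = fit_weibull(samples)
print(m, s0)  # recovered modulus and scale, close to 8 and 400
```

    The fitted (m, s0) pair would then parameterise the strength scatter assigned to automata at the next, coarser scale level.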

  13. A Mechanism for Reducing Delay Discounting by Altering Temporal Attention

    PubMed Central

    Radu, Peter T; Yi, Richard; Bickel, Warren K; Gross, James J; McClure, Samuel M

    2011-01-01

    Rewards that are not immediately available are discounted compared to rewards that are immediately available. The more a person discounts a delayed reward, the more likely that person is to have a range of behavioral problems, including clinical disorders. This latter observation has motivated the search for interventions that reduce discounting. One surprisingly simple method to reduce discounting is an “explicit-zero” reframing that states default or null outcomes. Reframing a classical discounting choice as “something now but nothing later” versus “nothing now but more later” decreases discount rates. However, it is not clear how this “explicit-zero” framing intervention works. The present studies delineate and test two possible mechanisms to explain the phenomenon. One mechanism proposes that the explicit-zero framing creates the impression of an improving sequence, thereby enhancing the present value of the delayed reward. A second possible mechanism posits an increase in attention allocation to temporally distant reward representations. In four experiments, we distinguish between these two hypothesized mechanisms and conclude that the temporal attention hypothesis is superior for explaining our results. We propose a model of temporal attention whereby framing affects intertemporal preferences by modifying present bias. PMID:22084496
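
    The degree of discounting in studies of this kind is commonly summarised by a hyperbolic model, V = A/(1 + kD), where A is the reward amount, D the delay, and k the individual's discount rate. A minimal sketch (the k values and amounts are illustrative, not taken from the study):

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k*D).
    Higher k means steeper discounting of delayed rewards."""
    return amount / (1.0 + k * delay)

# Classic choice: $50 now vs $100 in 180 days, for two discount rates
for k in (0.005, 0.05):  # illustrative k values
    v_now = discounted_value(50, 0, k)
    v_later = discounted_value(100, 180, k)
    choice = "later" if v_later > v_now else "now"
    print(k, round(v_later, 2), choice)
```

    A shallow discounter (k = 0.005) values the delayed $100 above $50 and waits; a steep discounter (k = 0.05) values it at only $10 and takes the immediate reward.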

  14. A Comparison of the neural correlates that underlie rule-based and information-integration category learning.

    PubMed

    Carpenter, Kathryn L; Wills, Andy J; Benattayallah, Abdelmalek; Milton, Fraser

    2016-10-01

    The influential competition between verbal and implicit systems (COVIS) model proposes that category learning is driven by two competing neural systems-an explicit, verbal, system, and a procedural-based, implicit, system. In the current fMRI study, participants learned either a conjunctive, rule-based (RB), category structure that is believed to engage the explicit system, or an information-integration category structure that is thought to preferentially recruit the implicit system. The RB and information-integration category structures were matched for participant error rate, the number of relevant stimulus dimensions, and category separation. Under these conditions, considerable overlap in brain activation, including the prefrontal cortex, basal ganglia, and the hippocampus, was found between the RB and information-integration category structures. Contrary to the predictions of COVIS, the medial temporal lobes and in particular the hippocampus, key regions for explicit memory, were found to be more active in the information-integration condition than in the RB condition. No regions were more activated in RB than information-integration category learning. The implications of these results for theories of category learning are discussed. Hum Brain Mapp 37:3557-3574, 2016. © 2016 Wiley Periodicals, Inc.

  15. QSAR modeling based on structure-information for properties of interest in human health.

    PubMed

    Hall, L H; Hall, L M

    2005-01-01

    The development of QSAR models based on topological structure description is presented for problems in human health. These models are based on the structure-information approach to quantitative biological modeling and prediction, in contrast to the mechanism-based approach. The structure-information approach is outlined, starting with basic structure information developed from the chemical graph (connection table). Information explicit in the connection table (element identity and skeletal connections) leads to significant (implicit) structure information that is useful for establishing sound models of a wide range of properties of interest in drug design. Valence state definition leads to relationships for valence state electronegativity and atom/group molar volume. Based on these important aspects of molecules, together with skeletal branching patterns, both the electrotopological state (E-state) and molecular connectivity (chi indices) structure descriptors are developed and described. A summary of four QSAR models indicates the wide range of applicability of these structure descriptors and the predictive quality of QSAR models based on them: aqueous solubility (5535 chemically diverse compounds, 938 in external validation), percent oral absorption (%OA, 417 therapeutic drugs, 195 drugs in external validation testing), AMES mutagenicity (2963 compounds including 290 therapeutic drugs, 400 in external validation), fish toxicity (92 substituted phenols, anilines and substituted aromatics). These models are established independent of explicit three-dimensional (3-D) structure information and are directly interpretable in terms of the implicit structure information useful to the drug design process.
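
    The molecular connectivity (chi) indices mentioned above are computed directly from the hydrogen-suppressed chemical graph. A minimal sketch of the first-order index 1-chi, using its standard definition (not code from the paper):

```python
import math

def chi1(edges):
    """First-order molecular connectivity index (1-chi):
    the sum over skeletal bonds of 1/sqrt(deg(i)*deg(j))."""
    deg = {}
    for i, j in edges:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    return sum(1.0 / math.sqrt(deg[i] * deg[j]) for i, j in edges)

# n-butane skeleton: C1-C2-C3-C4 (hydrogen-suppressed graph)
butane = [(1, 2), (2, 3), (3, 4)]
# isobutane skeleton: a central carbon bonded to three methyl carbons
isobutane = [(1, 2), (1, 3), (1, 4)]
print(round(chi1(butane), 3))     # 1.914
print(round(chi1(isobutane), 3))  # 1.732: branching lowers 1-chi
```

    The index distinguishes the two isomers purely from skeletal branching, which is the kind of implicit structure information the abstract describes.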

  16. Sending Nudes: Sex, Self-Rated Mate Value, and Trait Machiavellianism Predict Sending Unsolicited Explicit Images

    PubMed Central

    March, Evita; Wagstaff, Danielle L.

    2017-01-01

    Modern dating platforms have given rise to new dating and sexual behaviors. In the current study, we examine predictors of sending unsolicited explicit images, a particularly underexplored online sexual behavior. The aim of the current study was to explore the utility of dark personality traits (i.e., narcissism, Machiavellianism, psychopathy, and sadism) and self-rated mate value in predicting attitudes toward and behavior of sending unsolicited explicit images. Two hundred and forty participants (72% female; Mage = 25.96, SD = 9.79) completed an online questionnaire which included a measure of self-rated mate value, a measure of dark personality traits, and questions regarding sending unsolicited explicit images (operationalized as the explicit image scale). Men, compared to women, were found to have higher explicit image scale scores, and both self-rated mate value and trait Machiavellianism were positive predictors of explicit image scale scores. Interestingly, there were no significant interactions between sex and these variables. Further, Machiavellianism mediated all relationships between other dark traits and explicit image scale scores, indicating this behavior is best explained by the personality trait associated with behavioral strategies. In sum, these results provide support for the premise that sending unsolicited explicit images may be a tactic of a short-term mating strategy; however, future research should further explore this claim. PMID:29326632

  17. Towards more accurate isoscapes: encouraging results from wine, water and marijuana data/model and model/model comparisons.

    NASA Astrophysics Data System (ADS)

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across landscapes.

  18. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.

  19. Assessing conditions influencing the longitudinal distribution of exotic brown trout (Salmo trutta) in a mountain stream: a spatially-explicit modeling approach

    USGS Publications Warehouse

    Meredith, Christy S.; Budy, Phaedra; Hooten, Mevin B.; Oliveira Prates, Marcos

    2017-01-01

    Trout species often segregate along elevational gradients, yet the mechanisms driving this pattern are not fully understood. On the Logan River, Utah, USA, exotic brown trout (Salmo trutta) dominate at low elevations but are near-absent from high elevations with native Bonneville cutthroat trout (Oncorhynchus clarkii utah). We used a spatially-explicit Bayesian modeling approach to evaluate how abiotic conditions (describing mechanisms related to temperature and physical habitat) as well as propagule pressure explained the distribution of brown trout in this system. Many covariates strongly explained redd abundance based on model performance and coefficient strength, including average annual temperature, average summer temperature, gravel availability, distance from a concentrated stocking area, and anchor ice-impeded distance from a concentrated stocking area. In contrast, covariates that exhibited low performance in models and/or a weak relationship to redd abundance included reach-average water depth, stocking intensity to the reach, average winter temperature, and number of days with anchor ice. Even if climate change creates more suitable summer temperature conditions for brown trout at high elevations, our findings suggest their success may be limited by other conditions. The potential role of anchor ice in limiting movement upstream is compelling considering evidence suggesting anchor ice prevalence on the Logan River has decreased significantly over the last several decades, likely in response to climatic changes. Further experimental and field research is needed to explore the role of anchor ice, spawning gravel availability, and locations of historical stocking in structuring brown trout distributions on the Logan River and elsewhere.

  20. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abouelnasr, MKF; Smit, B

    2012-01-01

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.

  1. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases.

    PubMed

    Abouelnasr, Mahmoud K F; Smit, Berend

    2012-09-07

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.
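
    The cage-to-cage hopping picture underlying the kMC simulations can be illustrated with a deliberately minimal one-dimensional model: a single non-interacting adsorbate hopping between identical cages at a uniform rate, assumptions far simpler than the zeolite systems studied above:

```python
import random

def kmc_self_diffusion(rate=1.0, a=1.0, n_steps=2000, n_walkers=400, seed=1):
    """Minimal kinetic Monte Carlo sketch: an isolated adsorbate hops
    left/right between identical cages at a fixed rate.  In 1D the
    self-diffusion coefficient follows from D = <x^2> / (2 <t>)."""
    rng = random.Random(seed)
    msd, total_t = 0.0, 0.0
    for _ in range(n_walkers):
        x, t = 0.0, 0.0
        for _ in range(n_steps):
            # Total escape rate is 2*rate (left hop + right hop);
            # advance the clock by an exponential waiting time
            t += rng.expovariate(2.0 * rate)
            x += a if rng.random() < 0.5 else -a  # pick one of the two hops
        msd += x * x
        total_t += t
    return (msd / n_walkers) / (2.0 * total_t / n_walkers)

D = kmc_self_diffusion()
print(D)  # analytic value for this toy model is rate * a**2 = 1.0
```

    Loading dependence and unequal-cage mixing rules, central to the paper, would enter through hop rates that depend on the occupancies of the two cages involved.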

  2. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  3. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

    In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques, such as discrete event simulation, are used to evaluate systems that include queuing or waiting. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities and to incorporate waiting lines and queues in the decision-analytic modeling example.
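
    The queuing concepts the tutorial introduces can be illustrated with a minimal simulation of the simplest queuing system, the M/M/1 queue (Poisson arrivals, one server, exponential service). This is a generic sketch, not the stent-placement model from the paper:

```python
import random

def mm1_mean_time_in_system(arrival_rate, service_rate,
                            n_customers=20000, seed=7):
    """Simulate an M/M/1 queue customer by customer and return the
    mean time in system (waiting time plus service time)."""
    rng = random.Random(seed)
    t_arrival, t_server_free = 0.0, 0.0
    total = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)   # next Poisson arrival
        start = max(t_arrival, t_server_free)        # wait if server is busy
        t_server_free = start + rng.expovariate(service_rate)
        total += t_server_free - t_arrival           # this customer's sojourn
    return total / n_customers

w = mm1_mean_time_in_system(arrival_rate=0.8, service_rate=1.0)
print(w)  # queuing theory predicts 1/(mu - lambda) = 5.0 for these rates
```

    The agreement between the simulated mean and the closed-form result 1/(mu - lambda) is exactly the kind of cross-check the tutorial recommends before moving to systems with no analytical solution.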

  4. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  5. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

    We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
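
    The CJS setting described in the two records above can be illustrated with a deliberately simplified sketch: constant survival and recapture probabilities, all animals marked on the first occasion, and a coarse grid search standing in for the authors' Markov chain Monte Carlo:

```python
import math, random

def simulate_histories(n=300, occasions=5, phi=0.8, p=0.6, seed=3):
    """Simulate capture histories for animals first caught on occasion 1,
    with constant survival phi and recapture probability p."""
    rng = random.Random(seed)
    hists = []
    for _ in range(n):
        h, alive = [1], True
        for _ in range(occasions - 1):
            alive = alive and rng.random() < phi
            h.append(1 if alive and rng.random() < p else 0)
        hists.append(h)
    return hists

def cjs_loglik(hists, phi, p):
    """CJS log-likelihood conditional on first capture (constant phi, p)."""
    ll = 0.0
    for h in hists:
        last = max(i for i, v in enumerate(h) if v)   # last capture occasion
        # Between first and last capture: survived each interval, seen or not
        for v in h[1:last + 1]:
            ll += math.log(phi) + math.log(p if v else 1 - p)
        # chi recursion: probability of never being seen again afterwards
        chi = 1.0
        for _ in range(len(h) - 1 - last):
            chi = 1 - phi + phi * (1 - p) * chi
        ll += math.log(chi)
    return ll

hists = simulate_histories()
# Coarse grid search for the MLE (a real analysis would use MCMC or an optimizer)
best = max(((phi, p) for phi in [x / 20 for x in range(1, 20)]
                     for p in [x / 20 for x in range(1, 20)]),
           key=lambda th: cjs_loglik(hists, *th))
print(best)  # should land near the true (0.8, 0.6)
```

    The chi recursion is what makes losses after the last capture identifiable only as a product of survival and detection, the confounding the hierarchical model above handles explicitly.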

  6. Leveraging human decision making through the optimal management of centralized resources

    NASA Astrophysics Data System (ADS)

    Hyden, Paul; McGrath, Richard G.

    2016-05-01

    Combining results from mixed integer optimization, stochastic modeling and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision-maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum marginal utility gaps in capabilities.

  7. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.

  8. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
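
    The Lorenz (1984) system used as the test problem, together with a classical explicit time-stepper of the kind the GWRM is compared against, can be sketched as follows (a = 0.25, b = 4, F = 8, G = 1 are the standard parameter choices; the step size is an illustrative assumption):

```python
def lorenz84(state, a=0.25, b=4.0, F=8.0, G=1.0):
    """Right-hand side of the Lorenz (1984) low-order circulation model."""
    x, y, z = state
    return (-y * y - z * z - a * x + a * F,
            x * y - b * x * z - y + G,
            b * x * y + x * z - z)

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step, an explicit
    finite-difference baseline of the kind compared against the GWRM."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (q1 + 2 * q2 + 2 * q3 + q4)
                 for s, q1, q2, q3, q4 in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(1000):          # integrate 5 time units with dt = 0.005
    state = rk4_step(lorenz84, state, 0.005)
print(state)                   # trajectory stays on the bounded attractor
```

    The contrast drawn in the abstract is that the GWRM represents the solution over whole time intervals as Chebyshev series, so its "time steps" can be orders of magnitude larger than the dt used here.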

  9. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    PubMed Central

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of the differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  10. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning.

    PubMed

    McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A

    2015-07-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. Copyright © 2015 the authors.
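
    The two-state model referred to above (Smith et al., 2006) consists of two coupled linear learning processes whose outputs sum; a sketch with approximate published retention/learning rates (illustrative values, not fitted to this study's data):

```python
def two_state_learning(n_trials=200, perturbation=1.0,
                       A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Simulate the two-state model of motor adaptation: a fast process
    (low retention A_f, high learning B_f; cf. explicit strategy) and a
    slow process (high retention A_s, low learning B_s; cf. implicit
    learning) sum to produce the adaptation curve."""
    x_f = x_s = 0.0
    output = []
    for _ in range(n_trials):
        x = x_f + x_s                  # net adaptation on this trial
        e = perturbation - x           # the same error drives both processes
        x_f = A_f * x_f + B_f * e      # fast: forgets quickly, learns quickly
        x_s = A_s * x_s + B_s * e      # slow: forgets slowly, learns slowly
        output.append(x)
    return output

curve = two_state_learning()
print(round(curve[-1], 3))  # partial asymptotic adaptation, about 0.75 here
```

    The signature behaviors the model explains, such as savings and spontaneous recovery, arise from the different retention rates of the two states, which is the property the study maps onto explicit and implicit processes.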

  11. Investigating the predictive validity of implicit and explicit measures of motivation in problem-solving behavioural tasks.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2013-09-01

    Research into the effects of individuals' autonomous motivation on behaviour has traditionally adopted explicit measures and self-reported outcome assessment. Recently, there has been increased interest in the effects of implicit motivational processes underlying behaviour from a self-determination theory (SDT) perspective. The aim of the present research was to provide support for the predictive validity of an implicit measure of autonomous motivation on behavioural persistence on two objectively measurable tasks. SDT and a dual-systems model were adopted as frameworks to explain the unique effects offered by explicit and implicit autonomous motivational constructs on behavioural persistence. In both studies, implicit autonomous motivation significantly predicted unique variance in time spent on each task. Several explicit measures of autonomous motivation also significantly predicted persistence. Results provide support for the proposed model and the inclusion of implicit measures in research on motivated behaviour. In addition, implicit measures of autonomous motivation appear to be better suited to explaining variance in behaviours that are more spontaneous or unplanned. Future implications for research examining implicit motivation from dual-systems models and SDT approaches are outlined. © 2012 The British Psychological Society.

  12. Modeling Active Aging and Explicit Memory: An Empirical Study.

    PubMed

    Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad

    2015-08-01

    The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.

  13. Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning

    PubMed Central

    Bond, Krista M.; Taylor, Jordan A.

    2015-01-01

    A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. PMID:26134640

  14. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials of the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
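    The independence polynomial mentioned above satisfies the classical deletion recursion I(G; x) = I(G − v; x) + x·I(G − N[v]; x), which is easy to state in code. A minimal Python sketch (exponential time in general, consistent with the #P-completeness noted above, but fine for tiny graphs):

```python
def independence_poly(vertices, edges):
    """Coefficient list c with c[k] = number of independent sets of size k,
    computed by the deletion recursion I(G) = I(G - v) + x * I(G - N[v])."""
    vertices = frozenset(vertices)
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def rec(vs):
        if not vs:
            return [1]                       # only the empty independent set
        v = next(iter(vs))
        without = rec(vs - {v})              # independent sets avoiding v
        with_v = rec(vs - ({v} | adj[v]))    # ...and those containing v
        out = list(without)
        for k, c in enumerate(with_v):       # multiply with_v by x, then add
            while len(out) <= k + 1:
                out.append(0)
            out[k + 1] += c
        return out

    return rec(vertices)

# Path a-b-c: independent sets {}, {a}, {b}, {c}, {a,c}  ->  1 + 3x + x^2
print(independence_poly("abc", [("a", "b"), ("b", "c")]))  # -> [1, 3, 1]
```

    The same deletion scheme, with edges in place of vertices, yields the matching polynomial; the Koch-network recurrences in the paper exploit the network's self-similar construction to avoid this exponential blow-up.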

  15. Towards Better Simulation of US Maize Yield Responses to Climate in the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Peng, B.; Guan, K.; Chen, M.; Lawrence, D. M.; Jin, Z.; Bernacchi, C.; Ainsworth, E. A.; DeLucia, E. H.; Lombardozzi, D. L.; Lu, Y.

    2017-12-01

    Global food security is under continuing pressure from increased population and climate change despite potential advancements in breeding and management technologies. Earth system models (ESMs) are essential tools to study the impacts of historical and future climate on regional and global food production, as well as to assess the effectiveness of possible adaptations and their potential feedback to climate. Here we developed an improved maize representation within the Community Earth System Model (CESM) by combining the strengths of both the Community Land Model version 4.5 (CLM4.5) and the Agricultural Production Systems sIMulator (APSIM) models. Specifically, we modified the maize planting scheme, incorporated the phenology scheme adopted from the APSIM model, added a new carbon allocation scheme into CLM4.5, and improved the estimation of canopy structure parameters including leaf area index (LAI) and canopy height. Unique features of the new model (CLM-APSIM) include more detailed phenology stages, an explicit implementation of the impacts of various abiotic environmental stresses (including nitrogen, water, temperature, and heat stresses) on maize phenology and carbon allocation, as well as an explicit simulation of grain number and grain size. We conducted a regional simulation of this new model over the US Corn Belt from 1990 to 2010. The simulated maize yield and its responses to climate (growing season mean temperature and precipitation) are benchmarked with data from USDA NASS statistics. Our results show that the CLM-APSIM model outperforms CLM4.5 in simulating county-level maize yield and reproduces more realistic yield responses to climate variations. However, some critical processes (such as crop failure due to frost and inundation, and suboptimal growth conditions due to biotic stresses) are still missing in both CLM-APSIM and CLM4.5, making the simulated yield responses to climate deviate slightly from reality. Our results demonstrate that, with improved parameterization of crop growth, ESMs can be powerful tools for realistically simulating agricultural production, which is attracting increasing interest and is critical to the study of global food security and the food-energy-water nexus.

  16. Modeling Quantum Dynamics in Multidimensional Systems

    NASA Astrophysics Data System (ADS)

    Liss, Kyle; Weinacht, Thomas; Pearson, Brett

    2017-04-01

    Coupling between different degrees-of-freedom is an inherent aspect of dynamics in multidimensional quantum systems. As experiments and theory begin to tackle larger molecular structures and environments, models that account for vibrational and/or electronic couplings are essential for interpretation. Relevant processes include intramolecular vibrational relaxation, conical intersections, and system-bath coupling. We describe a set of simulations designed to model coupling processes in multidimensional molecular systems, focusing on models that provide insight and allow visualization of the dynamics. Undergraduates carried out much of the work as part of a senior research project. In addition to the pedagogical value, the simulations allow for comparison between both explicit and implicit treatments of a system's many degrees-of-freedom.

  17. Numerical analysis of behaviour of cross laminated timber (CLT) in blast loading

    NASA Astrophysics Data System (ADS)

    Šliseris, J.; Gaile, L.; Pakrastiņš, L.

    2017-10-01

    A non-linear computational model for a CLT wall element that includes explicit dynamics and a composite damage constitutive model was developed. The numerical model was compared with classical beam theory, and it turned out that the shear wood layer has significant shear deformations that must be taken into account when designing CLT. Impulse duration was also found to have a major effect on the strength of CLT. Special attention must be paid when designing the CLT wall, window, and door architectural system in order to guarantee the robustness of the structure. The proposed numerical modelling framework can be used when designing CLT buildings that can be affected by blast loading, where structural robustness must be guaranteed.

  18. A scattering function of star polymers including excluded volume effects

    DOE PAGES

    Li, Xin; Do, Changwoo; Liu, Yun; ...

    2014-11-04

    In this work we present a new model for the form factor of a star polymer consisting of self-avoiding branches. This new model incorporates excluded volume effects and is derived from the two-point correlation function for a star polymer. We compare this model to small angle neutron scattering (SANS) measurements from polystyrene (PS) stars immersed in a good solvent, tetrahydrofuran (THF). It is shown that this model provides a good description of the scattering signature originating from the excluded volume effect and it explicitly elucidates the connection between the global conformation of a star polymer and the local stiffness of its constituent branch.

  19. The Evolution of Data-Information-Knowledge-Wisdom in Nursing Informatics.

    PubMed

    Ronquillo, Charlene; Currie, Leanne M; Rodney, Paddy

    2016-01-01

    The data-information-knowledge-wisdom (DIKW) model has been widely adopted in nursing informatics. In this article, we examine the evolution of DIKW in nursing informatics while incorporating critiques from other disciplines. This includes examination of assumptions of linearity and hierarchy and an exploration of the implicit philosophical grounding of the model. Two guiding questions are considered: (1) Does DIKW serve clinical information systems, nurses, or both? and (2) What level of theory does DIKW occupy? The DIKW model has been valuable in advancing the independent field of nursing informatics. We offer that if the model is to continue to move forward, its role and functions must be explicitly addressed.

  20. Not the Same Old Thing: Establishing the Unique Contribution of Drinking Identity as a Predictor of Alcohol Consumption and Problems Over Time

    PubMed Central

    Lindgren, Kristen P.; Ramirez, Jason J.; Olin, Cecilia C.; Neighbors, Clayton

    2016-01-01

    Drinking identity – how much individuals view themselves as drinkers – is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity’s utility and uniqueness as a predictor relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every three months over two academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. PMID:27428756

  1. Towards an explicit account of implicit learning.

    PubMed

    Forkstam, Christian; Petersson, Karl Magnus

    2005-08-01

    The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, and the role of sleep and consolidation. The attempts to characterize the interaction between implicit and explicit learning are promising although not well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.

  2. Not the same old thing: Establishing the unique contribution of drinking identity as a predictor of alcohol consumption and problems over time.

    PubMed

    Lindgren, Kristen P; Ramirez, Jason J; Olin, Cecilia C; Neighbors, Clayton

    2016-09-01

    Drinking identity-how much individuals view themselves as drinkers-is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as predictors relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every 3 months over 2 academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Assessment of an Explicit Algebraic Reynolds Stress Model

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2005-01-01

    This study assesses an explicit algebraic Reynolds stress turbulence model in the three-dimensional Reynolds-averaged Navier-Stokes (RANS) solver ISAAC (Integrated Solution Algorithm for Arbitrary Configurations). Additionally, it compares solutions for two select configurations between ISAAC and the RANS solver PAB3D. Solutions are compared with direct numerical simulation data, experimental data, or empirical models for several different geometries with compressible, separated, high-Reynolds-number flows. In general, the turbulence model matched data or followed experimental trends well, and for the selected configurations, the computational results of ISAAC closely matched those of PAB3D using the same turbulence model.

  4. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including:
    * occurrence or occupancy models for estimating species distribution
    * abundance models based on many sampling protocols, including distance sampling
    * capture-recapture models with individual effects
    * spatial capture-recapture models based on camera trapping and related methods
    * population and metapopulation dynamic models
    * models of biodiversity, community structure and dynamics
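    As a concrete illustration of the hierarchical idea — an explicit model of system state coupled to a model of how that state is observed — here is a minimal site-occupancy simulation in Python. The parameter values are invented for illustration and are not from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# State model:        z_i  ~ Bernoulli(psi)        true site occupancy
# Observation model:  y_ij ~ Bernoulli(z_i * p)    detection on visit j
psi, p = 0.6, 0.4                  # occupancy and per-visit detection prob.
n_sites, n_visits = 2000, 5

z = rng.binomial(1, psi, size=n_sites)                         # latent states
y = rng.binomial(1, p * z[:, None], size=(n_sites, n_visits))  # detections

naive = (y.sum(axis=1) > 0).mean()          # share of sites ever detected
target = psi * (1 - (1 - p) ** n_visits)    # what the naive estimator tracks
```

    The naive fraction of sites with at least one detection estimates psi·(1 − (1 − p)^J), not psi itself; correcting exactly this observation-process bias is what hierarchical occupancy models are built for.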

  5. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.
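    A generative sketch of the kind of logistic mixed model described — a task-level fixed effect plus subject-level random intercepts on the log-odds scale — may clarify the structure. All effect sizes below are invented for illustration and are not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_trials = 45, 24
# Task fixed effects on the log-odds of a correct prediction (invented values)
beta = {"true_belief": 1.5, "false_belief": 0.5}
# Subject-specific random intercepts, b_s ~ N(0, sigma^2)
b = rng.normal(0.0, 0.8, size=n_subjects)

def simulate(task):
    """Draw binary 'correct prediction' outcomes, one row per subject."""
    logit = beta[task] + b[:, None]          # subjects x trials log-odds
    prob = 1.0 / (1.0 + np.exp(-logit))      # inverse-logit link
    return rng.binomial(1, prob, size=(n_subjects, n_trials))

tb = simulate("true_belief")
fb = simulate("false_belief")
```

    Fitting such a model in reverse — recovering the fixed effects and the random-intercept variance from the binary outcomes — is what the logistic mixed (and Bayesian latent-parameter) analyses in the study do.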

  6. Characterizing Aeroelastic Systems Using Eigenanalysis, Explicitly Retaining The Aerodynamic Degrees of Freedom

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Dowell, Earl H.

    2001-01-01

    Discrete time aeroelastic models with explicitly retained aerodynamic modes have been generated employing a time marching vortex lattice aerodynamic model. This paper presents analytical results from eigenanalysis of these models. The potential of these models to calculate the behavior of modes that represent damped system motion (noncritical modes) in addition to the simple harmonic modes is explored. A typical section with only structural freedom in pitch is examined. The eigenvalues are examined and compared to experimental data. Issues regarding the convergence of the solution with regard to refining the aerodynamic discretization are investigated. Eigenvector behavior is examined; the eigenvector associated with a particular eigenvalue can be viewed as the set of modal participation factors for that particular mode. For the present formulation of the equations of motion, the vorticity for each aerodynamic element appears explicitly as an element of each eigenvector in addition to the structural dynamic generalized coordinates. Thus, modal participation of the aerodynamic degrees of freedom can be assessed in addition to participation of structural degrees of freedom.

  7. Affect, Risk and Uncertainty in Decision-Marking an Integrated Computational-Empirical Approach

    DTIC Science & Technology

    2009-07-26

    ...developed by Hudlicka (2002; 2003). MAMID was designed with the explicit purpose to model the effects of affective states and personality traits on... influenced by risk and uncertainty? • How do personality traits and affective states facilitate or prevent the expression of particular types of...

  8. Plasmonic Metallurgy Enabled by DNA.

    PubMed

    Ross, Michael B; Ku, Jessie C; Lee, Byeongdu; Mirkin, Chad A; Schatz, George C

    2016-04-13

    Mixed silver and gold plasmonic nanoparticle architectures are synthesized using DNA-programmable assembly, unveiling exquisitely tunable optical properties that are predicted and explained both by effective thin-film models and explicit electrodynamic simulations. These data demonstrate that the manner and ratio with which multiple metallic components are arranged can greatly alter optical properties, including tunable color and asymmetric reflectivity behavior of relevance for thin-film applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Blast and the Consequences on Traumatic Brain Injury-Multiscale Mechanical Modeling of Brain

    DTIC Science & Technology

    2011-02-17

    ...formulation is implemented to model the air-blast simulation. LS-DYNA as an explicit FE code has been employed to simulate this multi-material fluid-structure interaction problem. The 3-D head...

  10. Traveling waves in a spring-block chain sliding down a slope

    NASA Astrophysics Data System (ADS)

    Morales, J. E.; James, G.; Tonnelier, A.

    2017-07-01

    Traveling waves are studied in a spring slider-block model. We explicitly construct front waves (kinks) for a piecewise-linear spinodal friction force. Pulse waves are obtained as the matching of two traveling fronts with identical speeds. Explicit formulas are obtained for the wavespeed and the wave form in the anticontinuum limit. The link with localized waves in a Burridge-Knopoff model of an earthquake fault is briefly discussed.
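    A rough numerical sketch of a dimensionless spring-block chain with a non-monotone ("spinodal") piecewise-linear friction law is given below. The parameters and the exact friction law are illustrative guesses, not those of Morales, James and Tonnelier:

```python
import numpy as np

def phi(v, v1=0.5, v2=1.5, f0=1.0):
    """Piecewise-linear 'spinodal' friction: rises to f0 at v1, falls to 0
    at v2 (the unstable branch), then rises again. Illustrative shape only."""
    return np.where(v < v1, f0 * v / v1,
           np.where(v < v2, f0 * (v2 - v) / (v2 - v1), v - v2))

def simulate(n=100, F=0.6, dt=0.01, steps=5000):
    """Dimensionless chain: u_n'' = u_{n+1} - 2 u_n + u_{n-1} + F - phi(u_n'),
    integrated with explicit Euler on a periodic chain."""
    u = np.zeros(n)                  # block displacements
    v = np.zeros(n)                  # block velocities
    v[:10] = 2.0                     # perturb one end of the chain
    for _ in range(steps):
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # spring coupling
        v += dt * (lap + F - phi(v))
        u += dt * v
    return u, v

u, v = simulate()
```

    With these (assumed) values the friction law admits two stable sliding speeds (roughly 0.3 and 2.1 for F = 0.6), so an interface between slow and fast sliding regions can form, loosely analogous to the front waves the paper constructs analytically.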

  11. Traveling waves in a spring-block chain sliding down a slope.

    PubMed

    Morales, J E; James, G; Tonnelier, A

    2017-07-01

    Traveling waves are studied in a spring slider-block model. We explicitly construct front waves (kinks) for a piecewise-linear spinodal friction force. Pulse waves are obtained as the matching of two traveling fronts with identical speeds. Explicit formulas are obtained for the wavespeed and the wave form in the anticontinuum limit. The link with localized waves in a Burridge-Knopoff model of an earthquake fault is briefly discussed.

  12. Water solvent effects using continuum and discrete models: The nitromethane molecule, CH3NO2.

    PubMed

    Modesto-Costa, Lucas; Uhl, Elmar; Borges, Itamar

    2015-11-15

    The first three valence transitions of the two nitromethane conformers (CH3NO2) are two dark n → π* transitions and a very intense π → π* transition. In this work, these transitions in gas-phase and solvated in water of both conformers were investigated theoretically. The polarizable continuum model (PCM), two conductor-like screening (COSMO) models, and the discrete sequential quantum mechanics/molecular mechanics (S-QM/MM) method were used to describe the solvation effect on the electronic spectra. Time-dependent density functional theory (TDDFT), configuration interaction including all single substitutions and perturbed double excitations (CIS(D)), the symmetry-adapted-cluster CI (SAC-CI), the multistate complete active space second order perturbation theory (CASPT2), and the algebraic-diagrammatic construction (ADC(2)) electronic structure methods were used. Gas-phase CASPT2, SAC-CI, and ADC(2) results are in very good agreement with published experimental and theoretical spectra. Among the continuum models, PCM combined either with CASPT2, SAC-CI, or B3LYP provided good agreement with available experimental data. COSMO combined with ADC(2) described the overall trends of the transition energy shifts. The effect of increasing the number of explicit water molecules in the S-QM/MM approach was discussed, and the formation of hydrogen bonds was clearly established. By including explicitly 24 water molecules corresponding to the complete first solvation shell in the S-QM/MM approach, the ADC(2) method gives more accurate results as compared to the TDDFT approach and with similar computational demands. The ADC(2) with S-QM/MM model is, therefore, the best compromise for accurate solvent calculations in a polar environment. © 2015 Wiley Periodicals, Inc.

  13. The conscious, the unconscious, and familiarity.

    PubMed

    Scott, Ryan B; Dienes, Zoltán

    2008-09-01

    This article examines the role of subjective familiarity in the implicit and explicit learning of artificial grammars. Experiment 1 found that objective measures of similarity (including fragment frequency and repetition structure) predicted ratings of familiarity, that familiarity ratings predicted grammaticality judgments, and that the extremity of familiarity ratings predicted confidence. Familiarity was further shown to predict judgments in the absence of confidence, hence contributing to above-chance guessing. Experiment 2 found that confidence developed as participants refined their knowledge of the distribution of familiarity and that differences in familiarity could be exploited prior to confidence developing. Experiment 3 found that familiarity was consciously exploited to make grammaticality judgments including those made without confidence and that familiarity could in some instances influence participants' grammaticality judgments apparently without their awareness. All 3 experiments found that knowledge distinct from familiarity was derived only under deliberate learning conditions. The results provide decisive evidence that familiarity is the essential source of knowledge in artificial grammar learning while also supporting a dual-process model of implicit and explicit learning. (c) 2008 APA, all rights reserved.

  14. Spin-orbit splitted excited states using explicitly-correlated equation-of-motion coupled-cluster singles and doubles eigenvectors

    NASA Astrophysics Data System (ADS)

    Bokhan, Denis; Trubnikov, Dmitrii N.; Perera, Ajith; Bartlett, Rodney J.

    2018-04-01

    An explicitly correlated method for calculating excited states with spin-orbit couplings has been formulated and implemented. The developed approach utilizes left and right eigenvectors of the equation-of-motion coupled-cluster model, which is based on the linearly approximated explicitly correlated coupled-cluster singles and doubles [CCSD(F12)] method. The spin-orbit interactions are introduced by using the spin-orbit mean field (SOMF) approximation of the Breit-Pauli Hamiltonian. Numerical tests for several atoms and molecules show good agreement between the explicitly correlated results and the corresponding values calculated in the complete basis set (CBS) limit; highly accurate excitation energies can be obtained already at the triple-ζ level.

  15. Heavy-light mesons in chiral AdS/QCD

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Zahed, Ismail

    2017-06-01

    We discuss a minimal holographic model for the description of heavy-light and light mesons with chiral symmetry, defined in a slab of AdS space. The model consists of a pair of chiral Yang-Mills and tachyon fields with specific boundary conditions that break spontaneously chiral symmetry in the infrared. The heavy-light spectrum and decay constants are evaluated explicitly. In the heavy mass limit the model exhibits both heavy-quark and chiral symmetry and allows for the explicit derivation of the one-pion axial couplings to the heavy-light mesons.

  16. Nonminimally coupled massive scalar field in a 2D black hole: Exactly solvable model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, V.; Zelnikov, A.

    2001-06-15

    We study a nonminimal massive scalar field in the background of a two-dimensional black hole spacetime. We consider the black hole which is the solution of the 2D dilaton gravity derived from string-theoretical models. We find an explicit solution in a closed form for all modes and the Green function of the scalar field with an arbitrary mass and a nonminimal coupling to the curvature. Greybody factors, the Hawking radiation, and ⟨φ²⟩^ren are calculated explicitly for this exactly solvable model.

  17. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  18. Thickness-shear mode quartz crystal resonators in viscoelastic fluid media

    NASA Astrophysics Data System (ADS)

    Arnau, A.; Jiménez, Y.; Sogorb, T.

    2000-10-01

    An extended Butterworth-Van Dyke (EBVD) model to characterize a thickness-shear mode quartz crystal resonator in a semi-infinite viscoelastic medium is derived by means of analysis of the lumped elements model described by Cernosek et al. [R. W. Cernosek, S. J. Martin, A. R. Hillman, and H. L. Bandey, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 45, 1399 (1998)]. The EBVD model parameters are related to the viscoelastic properties of the medium. A capacitance added to the motional branch of the EBVD model has to be included when the elastic properties of the fluid are considered. From this model, an explicit expression for the frequency shift of a quartz crystal sensor in viscoelastic media is obtained. By combining the expressions for shifts in the motional series resonant frequency and in the motional resistance, a simple equation relating a single unknown (the loss factor of the fluid) to those measurable quantities has been derived, along with two simple explicit expressions for determining the viscoelastic properties of semi-infinite fluid media. The proposed expression for the parameter Δf/ΔR is compared with the corresponding ratio obtained with data computed from the complete admittance model. Relative errors below 4.5%, 3%, and 1.2% (for ratios of the load surface mechanical impedance to the quartz shear characteristic impedance of 0.3, 0.25, and 0.1, respectively) are obtained over the range of cases analyzed. Experimental data from the literature are used to validate the model.

  19. Different Mechanisms of Soil Microbial Response to Global Change Result in Different Outcomes in the MIMICS-CN Model

    NASA Astrophysics Data System (ADS)

    Kyker-Snowman, E.; Wieder, W. R.; Grandy, S.

    2017-12-01

    Microbial-explicit models of soil carbon (C) and nitrogen (N) cycling have improved upon simulations of C and N stocks and flows at site-to-global scales relative to traditional first-order linear models. However, the response of microbial-explicit soil models to global change factors depends upon which parameters and processes in a model are altered by those factors. We used the MIcrobial-MIneral Carbon Stabilization Model with coupled N cycling (MIMICS-CN) to compare modeled responses to changes in temperature and plant inputs at two previously-modeled sites (Harvard Forest and Kellogg Biological Station). We spun the model up to equilibrium, applied each perturbation, and evaluated 15 years of post-perturbation C and N pools and fluxes. To model the effect of increasing temperatures, we independently examined the impact of decreasing microbial C use efficiency (CUE), increasing the rate of microbial turnover, and increasing Michaelis-Menten kinetic rates of litter decomposition, plus several combinations of the three. For plant inputs, we ran simulations with stepwise increases in metabolic litter, structural litter, whole litter (structural and metabolic), or labile soil C. The cumulative change in soil C or N varied in both sign and magnitude across simulations. For example, increasing kinetic rates of litter decomposition resulted in net releases of both C and N from soil pools, while decreasing CUE produced short-term increases in respiration but long-term accumulation of C in litter pools and shifts in soil C:N as microbial demand for C increased and biomass declined. Given that soil N cycling constrains the response of plant productivity to global change and that soils generate a large amount of uncertainty in current earth system models, microbial-explicit models are a critical opportunity to advance the modeled representation of soils. However, microbial-explicit models must be informed by experiments that isolate the physiological and stoichiometric parameters of soil microbes that shift under global change.

  20. Aerosol-cloud interactions in a multi-scale modeling framework

    NASA Astrophysics Data System (ADS)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves the cloud/precipitation in the cloud resolved model (CRM) embedded in the GCM grid column. In the MMF version of community atmospheric model version 5 (CAM5), aerosol processes are treated with a parameterization, called the Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this treatment handles clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, we propose a new aerosol treatment in the MMF: Explicit Clouds Explicit Aerosols (ECEP), in which both clouds and aerosols are resolved explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. We also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than ECPP simulations because of more efficient vertical transport from the surface to the upper atmosphere and less efficient wet removal. We also found that cloud droplet number concentrations differ between the two simulations due to differences in cloud droplet lifetime. Next, we will explore how the ECEP treatment affects the anthropogenic aerosol forcing, particularly the aerosol indirect forcing, by comparing present-day and pre-industrial simulations.

  1. Flexible language constructs for large parallel programs

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Schnabel, Robert

    1993-01-01

    The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. A discussion of some of the critical implementation details is also given.

  2. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
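
    The functionals discussed here (local time, occupation time, displacement) can also be sampled numerically. Below is a minimal Euler-Maruyama sketch of Brownian motion with dry friction, dv = -μ sign(v) dt + √(2D) dW; all parameter names and values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dry_friction_path(mu=1.0, D=1.0, dt=1e-3, n_steps=100_000):
    """Euler-Maruyama integration of dv = -mu*sign(v) dt + sqrt(2D) dW."""
    v = np.empty(n_steps)
    v[0] = 0.0
    kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps - 1)
    for i in range(n_steps - 1):
        v[i + 1] = v[i] - mu * np.sign(v[i]) * dt + kicks[i]
    return v

dt = 1e-3
v = dry_friction_path(dt=dt)
# occupation time of the positive half-line, T+ = integral of 1[v(s) > 0] ds
t_plus = np.sum(v > 0) * dt
# displacement functional x(t) = integral of v(s) ds
x = np.sum(v) * dt
print(t_plus, x)
```

    Averaging such functionals over many independent paths gives Monte Carlo estimates of the moments that the backward Fokker-Planck technique yields in closed form.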

  3. Computational assignment of redox states to Coulomb blockade diamonds.

    PubMed

    Olsen, Stine T; Arcisauskaite, Vaida; Hansen, Thorsten; Kongsted, Jacob; Mikkelsen, Kurt V

    2014-09-07

    With the advent of molecular transistors, electrochemistry can now be studied at the single-molecule level. Experimentally, the redox chemistry of the molecule manifests itself as features in the observed Coulomb blockade diamonds. We present a simple theoretical method for explicit construction of the Coulomb blockade diamonds of a molecule. A combined quantum mechanical/molecular mechanical method is invoked to calculate redox energies and polarizabilities of the molecules, including the screening effect of the metal leads. This direct approach circumvents the need for explicit modelling of the gate electrode. From the calculated parameters the Coulomb blockade diamonds are constructed using simple theory. We offer a theoretical tool for assignment of Coulomb blockade diamonds to specific redox states in particular, and a study of chemical details in the diamonds in general. With the ongoing experimental developments in molecular transistor experiments, our tool could find use in molecular electronics, electrochemistry, and electrocatalysis.
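
    Once addition energies and the gate lever arm are in hand, the blockade diamonds of the constant-interaction picture can be drawn directly. A hypothetical sketch (the function name and the symmetric source/drain-coupling assumption are ours, not the paper's):

```python
def diamond_vertices(e_add, alpha, vg0=0.0):
    """Vertices (Vg, Vsd) of one Coulomb diamond in the constant-interaction
    picture, assuming symmetric source/drain coupling.
    e_add: addition energy in eV; alpha: gate lever arm (0 < alpha < 1)."""
    width = e_add / alpha        # gate-voltage extent of the blockaded region
    height = e_add               # maximum bias (in V) inside the diamond
    return [(vg0, 0.0),
            (vg0 + width / 2.0, height),
            (vg0 + width, 0.0),
            (vg0 + width / 2.0, -height)]

print(diamond_vertices(0.2, 0.5))
```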

  4. Intervention mapping: a process for developing theory- and evidence-based health education programs.

    PubMed

    Bartholomew, L K; Parcel, G S; Kok, G

    1998-10-01

    The practice of health education involves three major program-planning activities: needs assessment, program development, and evaluation. Over the past 20 years, significant enhancements have been made to the conceptual base and practice of health education. Models that outline explicit procedures and detailed conceptualization of community assessment and evaluation have been developed. Other advancements include the application of theory to health education and promotion program development and implementation. However, there remains a need for more explicit specification of the processes by which one uses theory and empirical findings to develop interventions. This article presents the origins, purpose, and description of Intervention Mapping, a framework for health education intervention development. Intervention Mapping is composed of five steps: (1) creating a matrix of proximal program objectives, (2) selecting theory-based intervention methods and practical strategies, (3) designing and organizing a program, (4) specifying adoption and implementation plans, and (5) generating program evaluation plans.

  5. ORILAM, a three-moment lognormal aerosol scheme for mesoscale atmospheric model: Online coupling into the Meso-NH-C model and validation on the Escompte campaign

    NASA Astrophysics Data System (ADS)

    Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert

    2005-09-01

    Classical aerosol schemes use either a sectional (bin) or a lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe every kind of distribution, whereas the lognormal one assumes the form of the distribution and requires fewer explicit variables. For this latter reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled into three-dimensional mesoscale or CTM models. This paper presents the concepts and hypotheses behind a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is that it keeps the aerosol composition and distribution explicit (the mass of each constituent, the mean radius, and the standard deviation of the distribution are explicit) through the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the Escompte campaign and to a previously published aerosol model. The numerical cost of one lognormal mode is lower than that of two bins of the sectional approach.
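
    For a single lognormal mode the standard moment relation is m_k = N · r_g^k · exp(k² ln²σ_g / 2), so the number concentration, median radius, and geometric standard deviation can be recovered from the three carried moments. A sketch under that standard assumption (function and variable names are illustrative, not ORILAM's):

```python
import math

def lognormal_params(m0, m3, m6):
    """Recover (N, r_g, sigma_g) from the moments of one lognormal mode,
    using m_k = N * r_g**k * exp(k**2 * ln(sigma_g)**2 / 2)."""
    # m3**2 / (m0*m6) = exp(-9 ln^2 sigma), hence:
    ln2_sigma = math.log(m0 * m6 / m3**2) / 9.0
    sigma_g = math.exp(math.sqrt(ln2_sigma))
    r_g = (m3 / (m0 * math.exp(4.5 * ln2_sigma))) ** (1.0 / 3.0)
    return m0, r_g, sigma_g

# round-trip check with assumed values N = 1e3, r_g = 0.05, sigma_g = 1.8
ln2 = math.log(1.8) ** 2
m0 = 1e3
m3 = m0 * 0.05**3 * math.exp(4.5 * ln2)
m6 = m0 * 0.05**6 * math.exp(18.0 * ln2)
N, r_g, sigma_g = lognormal_params(m0, m3, m6)
print(N, r_g, sigma_g)  # ≈ (1000.0, 0.05, 1.8)
```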

  6. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Huys, Otti, E-mail: otti.dhuys@phy.duke.edu; Haynes, Nicholas D.; Lohmann, Johannes

    Autonomous Boolean networks are commonly used to model the dynamics of gene regulatory networks and allow for the prediction of stable dynamical attractors. However, most models do not account for time delays along the network links and noise, which are crucial features of real biological systems. Concentrating on two paradigmatic motifs, the toggle switch and the repressilator, we develop an experimental testbed that explicitly includes both inter-node time delays and noise using digital logic elements on field-programmable gate arrays. We observe transients that last millions to billions of characteristic time scales and scale exponentially with the time delays between nodes, a phenomenon known as super-transient scaling. We develop a hybrid model that includes time delays along network links and allows for stochastic variation in the delays. Using this model, we explain the observed super-transient scaling of both motifs and recreate the experimentally measured transient distributions.

  8. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  9. Time series models of environmental exposures: Good predictions or good understanding.

    PubMed

    Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin

    2017-04-01

    Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.
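
    The confounding argument above can be illustrated with a toy simulation: making season an explicit covariate recovers the true exposure effect, while the naive model absorbs the seasonal signal into it. All data and names below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365 * 3)
season = np.sin(2.0 * np.pi * t / 365.25)

# exposure (e.g. temperature) and disease share a seasonal driver
exposure = 10.0 + 3.0 * season + rng.normal(0.0, 1.0, t.size)
true_beta = 0.5
disease = 50.0 + 2.0 * season + true_beta * exposure + rng.normal(0.0, 2.0, t.size)

def ols(X, y):
    """Least-squares coefficients for the design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# naive model: disease ~ exposure (season confounds the estimate)
b_naive = ols(np.column_stack([np.ones(t.size), exposure]), disease)[1]
# adjusted model: season made an explicit confounder
b_adj = ols(np.column_stack([np.ones(t.size), exposure, season]), disease)[1]
print(b_naive, b_adj)  # the adjusted estimate is close to the true 0.5
```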

  10. Moving forward socio-economically focused models of deforestation.

    PubMed

    Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno

    2017-09-01

    Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevance of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study aims to propose a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted based on socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, and gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in areas of high deforestation, suggesting a better integration of socio-economic processes in the models. Whilst considering deforestation as a purely geographical process leads to conservative models unable to effectively assess changes in the socio-economic and political contexts influencing deforestation trends, this explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.

  11. Improved pKa Prediction of Substituted Alcohols, Phenols, and Hydroperoxides in Aqueous Medium Using Density Functional Theory and a Cluster-Continuum Solvation Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2017-06-22

    Acid dissociation constants (pKa's) are key physicochemical properties that are needed to understand the structure and reactivity of molecules in solution. Theoretical pKa's have been calculated for a set of 72 organic compounds with -OH and -OOH groups (48 with known experimental pKa's). This test set includes 17 aliphatic alcohols, 25 substituted phenols, and 30 hydroperoxides. Calculations in aqueous medium have been carried out with SMD implicit solvation and three hybrid DFT functionals (B3LYP, ωB97XD, and M06-2X) with two basis sets (6-31+G(d,p) and 6-311++G(d,p)). The effect of explicit water molecules on calculated pKa's was assessed by including up to three water molecules. pKa's calculated with only SMD implicit solvation are found to have average errors greater than 6 pKa units. Including one explicit water reduces the error by about 3 pKa units, but the error is still far from chemical accuracy. With B3LYP/6-311++G(d,p) and three explicit water molecules in SMD solvation, the mean signed error and standard deviation are only -0.02 ± 0.55; a linear fit with zero intercept has a slope of 1.005 and R² = 0.97. Thus, this level of theory can be used to calculate pKa's directly without the need for linear correlations or thermodynamic cycles. Estimated pKa values are reported for 24 hydroperoxides that have not yet been determined experimentally.
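
    When pKa's are computed directly, the working relation is pKa = ΔG_aq / (RT ln 10) for the deprotonation HA(aq) → A⁻(aq) + H⁺(aq). A sketch of that conversion; the value used for the aqueous free energy of the proton is a commonly cited literature figure and should be treated as an assumption, not a value from this paper:

```python
import math

R_KCAL = 1.987204e-3   # gas constant in kcal/(mol*K)
T = 298.15             # temperature in K

def pka_from_free_energies(g_ha, g_a_minus, g_proton_aq=-270.28):
    """pKa from aqueous standard free energies in kcal/mol.
    g_proton_aq is an assumed literature value for G(H+, aq)."""
    dg = g_a_minus + g_proton_aq - g_ha    # HA -> A- + H+ in solution
    return dg / (math.log(10.0) * R_KCAL * T)

# hypothetical free energies, for illustration only
print(pka_from_free_energies(0.0, 280.0))
```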

  12. Assessing chemistry schemes and constraints in air quality models used to predict ozone in London against the detailed Master Chemical Mechanism.

    PubMed

    Malkin, Tamsin L; Heard, Dwayne E; Hood, Christina; Stocker, Jenny; Carruthers, David; MacKenzie, Ian A; Doherty, Ruth M; Vieno, Massimo; Lee, James; Kleffmann, Jörg; Laufs, Sebastian; Whalley, Lisa K

    2016-07-18

    Air pollution is the environmental factor with the greatest impact on human health in Europe. Understanding the key processes driving air quality across the relevant spatial scales, especially during pollution exceedances and episodes, is essential to provide effective predictions for both policymakers and the public. It is particularly important for policy regulators to understand the drivers of local air quality that can be regulated by national policies versus the contribution from regional pollution transported from mainland Europe or elsewhere. One of the main objectives of the Coupled Urban and Regional processes: Effects on AIR quality (CUREAIR) project is to determine local and regional contributions to ozone events. A detailed zero-dimensional (0-D) box model run with the Master Chemical Mechanism (MCMv3.2) is used as the benchmark model against which the less explicit chemistry mechanisms of the Generic Reaction Set (GRS) and the Common Representative Intermediates (CRIv2-R5) schemes are evaluated. GRS and CRI are used by the Atmospheric Dispersion Modelling System (ADMS-Urban) and the regional chemistry transport model EMEP4UK, respectively. The MCM model uses a near-explicit chemical scheme for the oxidation of volatile organic compounds (VOCs) and is constrained to observations of VOCs, NOx, CO, HONO (nitrous acid), photolysis frequencies and meteorological parameters measured during the ClearfLo (Clean Air for London) campaign. The sensitivity of the less explicit chemistry schemes to different model inputs has been investigated: Constraining GRS to the total VOC observed during ClearfLo as opposed to VOC derived from ADMS-Urban dispersion calculations, including emissions and background concentrations, led to a significant increase (674% during winter) in modelled ozone. 
The inclusion of HONO chemistry in this mechanism, particularly during wintertime when other radical sources are limited, led to substantial increases in the ozone levels predicted (223%). When the GRS and CRIv2-R5 schemes are run with the equivalent model constraints to the MCM, they are able to reproduce the level of ozone predicted by the near-explicit MCM to within 40% and 20% respectively for the majority of the time. An exception to this trend was observed during pollution episodes experienced in the summer, when anticyclonic conditions favoured increased temperatures and elevated O3. The in situ O3 predicted by the MCM was heavily influenced by biogenic VOCs during these conditions and the low GRS [O3] : MCM [O3] ratio (and low CRIv2-R5 [O3] : MCM [O3] ratio) demonstrates that these less explicit schemes under-represent the full O3 creation potential of these VOCs. To fully assess the influence of the in situ O3 generated from local emissions versus O3 generated upwind of London and advected in, the time since emission (and, hence, how far the real atmosphere is from steady state) must be determined. From estimates of the mean transport time determined from the NOx : NOy ratio observed at North Kensington during the summer and comparison of the O3 predicted by the MCM model after this time, ∼60% of the median observed [O3] could be generated from local emissions. During the warmer conditions experienced during the easterly flows, however, the observed [O3] may be even more heavily influenced by London's emissions.

  13. A MULTIPLE GRID APPROACH FOR OPEN CHANNEL FLOWS WITH STRONG SHOCKS. (R825200)

    EPA Science Inventory

    Abstract

    Explicit finite difference schemes are being widely used for modeling open channel flows accompanied with shocks. A characteristic feature of explicit schemes is the small time step, which is limited by the CFL stability condition. To overcome this limitation,...
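
    For shallow-water (open channel) flow, the CFL condition ties the explicit time step to the grid spacing and the fastest wave speed |u| + √(gh). A minimal sketch of that limit (function name and numbers are illustrative):

```python
import numpy as np

def cfl_dt(h, u, dx, g=9.81, cfl=0.9):
    """Largest stable explicit time step for 1-D shallow-water flow:
    dt <= CFL * dx / max(|u| + sqrt(g*h))."""
    c = np.abs(np.asarray(u)) + np.sqrt(g * np.asarray(h))
    return cfl * dx / c.max()

# depths h (m), velocities u (m/s), grid spacing dx (m)
dt = cfl_dt(h=[2.0, 1.5, 1.0], u=[1.0, 0.5, 0.2], dx=10.0)
print(dt)
```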

  14. New explicit global asymptotic stability criteria for higher order difference equations

    NASA Astrophysics Data System (ADS)

    El-Morshedy, Hassan A.

    2007-12-01

    New explicit sufficient conditions for the asymptotic stability of the zero solution of higher order difference equations are obtained. These criteria can be applied to autonomous and nonautonomous equations. The celebrated Clark asymptotic stability criterion is improved. Also, applications to models from mathematical biology and macroeconomics are given.

  15. Explicit Processing Demands Reveal Language Modality-Specific Organization of Working Memory

    ERIC Educational Resources Information Center

    Rudner, Mary; Ronnberg, Jerker

    2008-01-01

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable…

  16. Feasibility of Explicit Instruction in Adult Basic Education: Instructor-Learner Interaction Patterns

    ERIC Educational Resources Information Center

    Mellard, Daryl; Scanlon, David

    2006-01-01

    A strategic instruction model introduced into adult basic education classrooms yields insight into the feasibility of using direct and explicit instruction with adults with learning disabilities or other cognitive barriers to learning. Ecobehavioral assessment was used to describe and compare instructor-learner interaction patterns during learning…

  17. State-space based analysis and forecasting of macroscopic road safety trends in Greece.

    PubMed

    Antoniou, Constantinos; Yannis, George

    2013-11-01

    In this paper, macroscopic road safety trends in Greece are analyzed using state-space models and data for 52 years (1960-2011). Seemingly unrelated time series equations (SUTSE) models are developed first, followed by richer latent risk time-series (LRT) models. As reliable estimates of vehicle-kilometers are not available for Greece, the number of vehicles in circulation is used as a proxy for exposure. The alternative models considered are presented and discussed, including diagnostics for assessing model quality and recommendations for further enrichment of the models. Important interventions were incorporated in the models developed (the 1986 financial crisis, the 1991 old-car exchange scheme, and the 1996 new road-fatality definition) and found to be statistically significant. Furthermore, the forecasting results using data up to 2008 were compared with final actual data (2009-2011), indicating that the models perform properly even in unusual situations, like the current strong financial crisis in Greece. Forecasting results up to 2020 are also presented and compared with the forecasts of a model that explicitly considers the currently ongoing recession. Modeling the recession, and assuming that it will end by 2013, results in more reasonable estimates of risk and vehicle-kilometers for the 2020 horizon. This research demonstrates the benefits of using advanced state-space modeling techniques for modeling macroscopic road safety trends, such as allowing the explicit modeling of interventions. The challenges associated with the application of such state-of-the-art models to macroscopic phenomena, such as traffic fatalities in a region or country, are also highlighted. Furthermore, it is demonstrated that it is possible to apply such complex models using the relatively short time series that are available in macroscopic road safety analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
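
    Latent-risk models of this kind build on simple state-space components such as the local-level (random walk plus noise) model, which can be filtered with a scalar Kalman recursion. A generic sketch of that building block, not the authors' implementation:

```python
import numpy as np

def local_level_filter(y, q, r, m0=0.0, p0=1e6):
    """Scalar Kalman filter for the local-level model
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,
    with Var(eta) = q, Var(eps) = r; p0 is a diffuse initial variance."""
    m, p = m0, p0
    filtered = []
    for yt in y:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        m = m + k * (yt - m)      # update with the new observation
        p = (1.0 - k) * p
        filtered.append(m)
    return np.array(filtered)

filtered = local_level_filter(5.0 * np.ones(50), q=0.01, r=1.0)
print(filtered[-1])  # converges to the constant signal level 5
```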

  18. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of this model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter, and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers the parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimation of model predictions.
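
    The core UNEEC-P step of pooling residuals across Monte Carlo parameter draws before taking empirical quantiles can be sketched in a few lines; the toy model and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def uneec_p_quantiles(model, theta_samples, x, y_obs, q=(0.05, 0.95)):
    """Pool residuals over Monte Carlo parameter draws, then take the
    empirical prediction quantiles (a sketch of the UNEEC-P idea)."""
    residuals = np.concatenate([y_obs - model(x, th) for th in theta_samples])
    return np.quantile(residuals, q)

# toy model y = a*x with an uncertain slope a
model = lambda x, a: a * x
x = np.linspace(0.0, 1.0, 200)
y_obs = 1.0 * x + rng.normal(0.0, 0.1, x.size)
theta_samples = rng.normal(1.0, 0.05, 50)   # MC draws of the parameter
lo, hi = uneec_p_quantiles(model, theta_samples, x, y_obs)
print(lo, hi)
```

    The resulting bounds are wider than those from the single best-fit model's residuals alone, mirroring the paper's finding.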

  19. Pilot-optimal augmentation synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1978-01-01

    An augmentation synthesis method usable in the absence of quantitative handling qualities specifications, and yet explicitly including design objectives based on pilot-rating concepts, is presented. The algorithm involves the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization and pilot rating prediction via optimal control techniques. Simultaneous solution is required in this case since the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is obviously not a priori known. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.

  20. Effects of reducing attentional resources on implicit and explicit memory after severe traumatic brain injury.

    PubMed

    Watt, S; Shores, E A; Kinoshita, S

    1999-07-01

    Implicit and explicit memory were examined in individuals with severe traumatic brain injury (TBI) under conditions of full and divided attention. Participants included 12 individuals with severe TBI and 12 matched controls. In Experiment 1, participants carried out an implicit test of word-stem completion and an explicit test of cued recall. Results demonstrated that TBI participants exhibited impaired explicit memory but preserved implicit memory. In Experiment 2, a significant reduction in the explicit memory performance of both TBI and control participants, as well as a significant decrease in the implicit memory performance of TBI participants, was achieved by reducing attentional resources at encoding. These results indicated that performance on an implicit task of word-stem completion may require the availability of additional attentional resources that are not preserved after severe TBI.
