Combining Model-driven and Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Denney, Ewen; Whittle, John
2004-01-01
We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. Combining schemas with explicit models offers a number of advantages, namely that building synthesis systems becomes much easier, since the models can be used both in verifying and in adapting those systems. We illustrate our approach using an example from signal processing.
Characterizing the tradeoffs and costs associated with transportation congestion in supply chains.
DOT National Transportation Integrated Search
2010-01-21
We consider distribution and location-planning models for supply chains that explicitly account for traffic congestion effects. The majority of facility location and transportation planning models in the operations research literature consider fa...
Continuum Fatigue Damage Modeling for Use in Life Extending Control
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.
1994-01-01
This paper develops a simplified continuum (continuous with respect to time, stress, etc.) fatigue damage model for use in Life Extending Control (LEC) studies. The work is based on zero-mean-stress local strain cyclic damage modeling. New nonlinear explicit equation forms of cyclic damage in terms of stress amplitude are derived to facilitate the continuum modeling. Stress-based continuum models are derived, and an extension to plastic strain-strain rate models is also presented. Application of these models to LEC problems is considered. Progress toward a nonzero-mean-stress continuum model is presented, and new nonlinear explicit equation forms in terms of stress amplitude are derived for this case as well.
Classification of NLO operators for composite Higgs models
NASA Astrophysics Data System (ADS)
Alanne, Tommi; Bizot, Nicolas; Cacciapaglia, Giacomo; Sannino, Francesco
2018-04-01
We provide a general classification of template operators, up to next-to-leading order, that appear in chiral perturbation theories based on the two flavor patterns of spontaneous symmetry breaking SU(N_F)/Sp(N_F) and SU(N_F)/SO(N_F). All possible explicit-breaking sources parametrized by spurions transforming in the fundamental and in the two-index representations of the flavor symmetry are included. While our general framework can be applied to any model of strong dynamics, we specialize to composite-Higgs models, where the main explicit breaking sources are a current mass, the gauging of flavor symmetries, and the Yukawa couplings (for the top). For the top, we consider both bilinear couplings and linear ones à la partial compositeness. Our templates provide a basis for lattice calculations in specific models. As a special example, we consider the SU(4)/Sp(4) ≅ SO(6)/SO(5) pattern, which corresponds to the minimal fundamental composite-Higgs model. We further revisit issues related to the misalignment of the vacuum. In particular, we shed light on the physical properties of the singlet η, showing that it cannot develop a vacuum expectation value without explicit CP violation in the underlying theory.
Nonminimally coupled massive scalar field in a 2D black hole: Exactly solvable model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, V.; Zelnikov, A.
2001-06-15
We study a nonminimal massive scalar field in the background of a two-dimensional black hole spacetime. We consider the black hole which is the solution of the 2D dilaton gravity derived from string-theoretical models. We find an explicit solution in a closed form for all modes and the Green function of the scalar field with an arbitrary mass and a nonminimal coupling to the curvature. Greybody factors, the Hawking radiation, and the renormalized ⟨φ²⟩ are calculated explicitly for this exactly solvable model.
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
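The first two model levels can be sketched in a few lines (a hedged illustration of the maximum-likelihood fits; the function names and the Wald-type interval are ours, not the paper's profile-likelihood limits):

```python
import math

def binomial_arrival_mle(arrived, total):
    """Event-based model: MLE of the arrival probability is the observed
    fraction, with an approximate 95% Wald confidence interval."""
    p = arrived / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - 1.96 * se, p + 1.96 * se)

def exponential_rate_mle(durations):
    """Temporally explicit model: MLE of a constant event rate from
    observed dispersal durations (rate = n / total observed time)."""
    return len(durations) / sum(durations)
```

The grid-walk level has no closed-form likelihood and would instead be fitted by simulating movement paths and maximizing the simulated likelihood numerically.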
Independence polynomial and matching polynomial of the Koch network
NASA Astrophysics Data System (ADS)
Liao, Yunhua; Xie, Xiaoliang
2015-11-01
The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
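As a generic sketch of what an independence polynomial computation looks like (the standard vertex recurrence I(G; x) = I(G − v; x) + x·I(G − N[v]; x), not the paper's Koch-specific recurrences; it is exponential-time in general, consistent with #P-completeness):

```python
def independence_poly(vertices, edges):
    """Coefficient list c, where c[k] = number of independent sets of
    size k, via the recurrence I(G; x) = I(G - v; x) + x * I(G - N[v]; x)."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def rec(vs):
        if not vs:
            return [1]                          # only the empty set
        v = next(iter(vs))
        without_v = rec(vs - {v})               # independent sets avoiding v
        with_v = rec(vs - ({v} | adj[v]))       # sets containing v exclude N[v]
        out = without_v + [0] * max(0, len(with_v) + 1 - len(without_v))
        for k, c in enumerate(with_v):          # add x * I(G - N[v]; x)
            out[k + 1] += c
        return out

    return rec(frozenset(vertices))
```

For the 3-vertex path this returns [1, 3, 1], i.e. I(P3; x) = 1 + 3x + x², matching a direct enumeration of its independent sets.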
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, F.
We review the assumptions and domain of applicability of Landau's Hydrodynamical Model. By considering two models of particle production, pair production from strong electric fields and particle production in the linear σ model, we demonstrate that many of Landau's ideas are verified in explicit field theory calculations.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximations to the GA model for a variety of hydrological problems.
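The implicit GA relation that these nine formulas approximate is F = K·t + ψΔθ·ln(1 + F/ψΔθ) for cumulative infiltration F. A minimal sketch of solving it by fixed-point iteration (our own illustration, not one of the nine published approximations; the function name is ours):

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
        F = K*t + psi_dtheta * ln(1 + F / psi_dtheta),
    solved by fixed-point iteration.  K is the hydraulic conductivity and
    psi_dtheta the suction-head times moisture-deficit product; the
    explicit models ranked in the paper replace this loop with
    closed-form approximations."""
    F = max(K * t, 1e-9)        # positive starting guess
    while True:
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
```

The iteration map has derivative 1/(1 + F/ψΔθ) < 1, so it is a contraction and converges from any positive starting guess.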
NASA Technical Reports Server (NTRS)
Gould, Kevin E.; Satyanarayana, Arunkumar; Bogert, Philip B.
2016-01-01
Analysis performed in this study substantiates the need for high fidelity vehicle level progressive damage analyses (PDA) structural models for use in the verification and validation of proposed sub-scale structural models and to support required full-scale vehicle level testing. PDA results are presented that capture and correlate the responses of sub-scale 3-stringer and 7-stringer panel models and an idealized 8-ft diameter fuselage model, which provides a vehicle level environment for the 7-stringer sub-scale panel model. Two unique skin-stringer attachment assumptions are considered and correlated in the models analyzed: the TIE constraint interface versus the cohesive element (COH3D8) interface. Evaluating different interfaces allows for assessing a range of predicted damage modes, including delamination and crack propagation responses. Damage models considered in this study are the ABAQUS built-in Hashin procedure and the COmplete STress Reduction (COSTR) damage procedure implemented through a VUMAT user subroutine using the ABAQUS/Explicit code.
Explicit criteria for prioritization of cataract surgery
Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia
2006-01-01
Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893
All possible electroweak models from Z orbifold
NASA Astrophysics Data System (ADS)
Sato, Hikaru; Kataoka, H.; Munakata, H.; Tanaka, S.
1992-02-01
Considering all possible combinations of two Wilson lines, it is shown that only three independent electroweak models with three generations are obtained from Z orbifold compactification. We obtain this result by analyzing particle spectra of both untwisted and twisted sectors explicitly.
DISCRETE VOLUME-ELEMENT METHOD FOR NETWORK WATER-QUALITY MODELS
An explicit dynamic water-quality modeling algorithm is developed for tracking dissolved substances in water-distribution networks. The algorithm is based on a mass-balance relation within pipes that considers both advective transport and reaction kinetics. Complete mixing of m...
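A minimal sketch of such a mass-balance step for a single pipe (our own illustration, not the published algorithm; it assumes the time step equals one volume element's travel time, so advection is an exact one-cell shift):

```python
import math

def step_pipe(concs, inflow_conc, k, dt):
    """One explicit time step for a pipe split into equal volume elements:
    advect every element one cell downstream (plug flow), then apply
    first-order reaction kinetics C <- C * exp(-k * dt)."""
    shifted = [inflow_conc] + list(concs[:-1])   # advective transport
    decay = math.exp(-k * dt)                    # reaction kinetics
    return [c * decay for c in shifted]
```

A network model applies this per pipe and, at each node, computes the flow-weighted (completely mixed) concentration that feeds the downstream pipes.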
On the application of multilevel modeling in environmental and ecological studies
Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.
2010-01-01
This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
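The partial-pooling idea can be made concrete with the classical precision-weighted shrinkage estimator (a textbook sketch under a normal-normal model, not the paper's specific models; `sigma2` and `tau2` are the within- and between-group variances):

```python
def partial_pool(group_means, group_sizes, sigma2, tau2, grand_mean):
    """Multilevel partial pooling: each group mean is pulled toward the
    grand mean.  Large groups (or large between-group variance tau2) are
    pooled less; small groups (or small tau2) are pooled more."""
    estimates = []
    for ybar, n in zip(group_means, group_sizes):
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the group's own mean
        estimates.append(w * ybar + (1.0 - w) * grand_mean)
    return estimates
```

The two limits recover the classical extremes: tau2 → ∞ gives the unpooled (no-pooling) group means, tau2 → 0 gives complete pooling to the grand mean.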
A review on symmetries for certain Aedes aegypti models
NASA Astrophysics Data System (ADS)
Freire, Igor Leite; Torrisi, Mariano
2015-04-01
We summarize our results related with mathematical modeling of Aedes aegypti and its Lie symmetries. Moreover, some explicit, group-invariant solutions are also shown. Weak equivalence transformations of more general reaction diffusion systems are also considered. New classes of solutions are obtained.
Effects of electrostatic interactions on ligand dissociation kinetics
NASA Astrophysics Data System (ADS)
Erbaş, Aykut; de la Cruz, Monica Olvera; Marko, John F.
2018-02-01
We study unbinding of multivalent cationic ligands from oppositely charged polymeric binding sites sparsely grafted on a flat neutral substrate. Our molecular dynamics simulations are suggested by single-molecule studies of protein-DNA interactions. We consider univalent salt concentrations spanning roughly a 1000-fold range, together with various concentrations of excess ligands in solution. To reveal the ionic effects on unbinding kinetics of spontaneous and facilitated dissociation mechanisms, we treat electrostatic interactions both at a Debye-Hückel (DH) (or implicit ions, i.e., use of an electrostatic potential with a prescribed decay length) level and by the more precise approach of considering all ionic species explicitly in the simulations. We find that the DH approach systematically overestimates unbinding rates, relative to the calculations where all ion pairs are present explicitly in solution, although many aspects of the two types of calculation are qualitatively similar. For facilitated dissociation (FD) (acceleration of unbinding by free ligands in solution) explicit-ion simulations lead to unbinding at lower free-ligand concentrations. Our simulations predict a variety of FD regimes as a function of free-ligand and ion concentrations; a particularly interesting regime is at intermediate concentrations of ligands where nonelectrostatic binding strength controls FD. We conclude that explicit-ion electrostatic modeling is an essential component to quantitatively tackle problems in molecular ligand dissociation, including nucleic-acid-binding proteins.
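The implicit-ion (DH) treatment replaces explicit ions with a screened pair potential; a sketch in reduced units (the default parameter values are illustrative assumptions, not the paper's):

```python
import math

def debye_huckel_energy(q1, q2, r, bjerrum=0.7, debye_len=1.0):
    """Screened-Coulomb pair energy in units of kT:
        U(r) = l_B * q1 * q2 * exp(-r / lambda_D) / r
    with the Bjerrum length l_B (~0.7 nm in water at room temperature)
    and the Debye screening length lambda_D set by the salt
    concentration.  Distances in nm, charges in elementary units."""
    return bjerrum * q1 * q2 * math.exp(-r / debye_len) / r
```

Raising the univalent salt concentration shrinks `debye_len` and weakens the ligand-site attraction, which is the knob varied over the roughly 1000-fold range in the study.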
Boulangeat, Isabelle; Georges, Damien; Thuiller, Wilfried
2014-01-01
During the last decade, despite strenuous efforts to develop new models and compare different approaches, few conclusions have been drawn on their ability to provide robust biodiversity projections in an environmental change context. The recurring suggestions are that models should explicitly (i) include spatiotemporal dynamics; (ii) consider multiple species in interactions; and (iii) account for the processes shaping biodiversity distribution. This paper presents a biodiversity model (FATE-HD) that meets this challenge at the regional scale by combining phenomenological and process-based approaches and using well-defined plant functional groups. FATE-HD has been tested and validated in a French National Park, demonstrating its ability to simulate vegetation dynamics, structure and diversity in response to disturbances and climate change. The analysis demonstrated the importance of considering biotic interactions, spatiotemporal dynamics, and disturbances in addition to abiotic drivers to simulate vegetation dynamics. The distribution of pioneer trees was particularly improved, as were all undergrowth functional groups. PMID:24214499
Default contagion risks in Russian interbank market
NASA Astrophysics Data System (ADS)
Leonidov, A. V.; Rumyantsev, E. L.
2016-06-01
Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on the bow-tie structure of the weighted, oriented graph describing the structure of interbank loans. We develop a probabilistic model of interbank contagion that explicitly takes into account the empirical bow-tie structure (reflecting the functionality of nodes as borrowers, lenders, or both simultaneously), the degree distributions, and the disassortativity of the interbank network under consideration. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.
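A minimal threshold-cascade sketch of default contagion on a directed exposure network (our own illustration; the paper's probabilistic model additionally encodes the bow-tie structure, degree distributions, and disassortativity):

```python
def default_cascade(exposures, capital, initially_defaulted):
    """Iterate losses through a directed exposure network until no new
    defaults occur.  exposures[lender][borrower] is the amount lent; a
    bank defaults once its losses on defaulted counterparties reach its
    capital buffer."""
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for bank, buffer in capital.items():
            if bank in defaulted:
                continue
            loss = sum(amount for borrower, amount in
                       exposures.get(bank, {}).items()
                       if borrower in defaulted)
            if loss >= buffer:
                defaulted.add(bank)
                changed = True
    return defaulted
```

With exposures A→B→C, an initial default of C wipes out B only if B's buffer is below its exposure to C, and the shock then propagates upstream to A in the same way.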
Effect of explicit dimension instruction on speech category learning
Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd
2015-01-01
Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension or the pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, and instructions diverting attention away from pitch height, enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously under-weighted pitch direction dimension. PMID:26542400
Two-dimensional habitat modeling in the Yellowstone/Upper Missouri River system
Waddle, T. J.; Bovee, K.D.; Bowen, Z.H.
1997-01-01
This study is being conducted to provide the aquatic biology component of a decision support system being developed by the U.S. Bureau of Reclamation. In an attempt to capture the habitat needs of Great Plains fish communities we are looking beyond previous habitat modeling methods. Traditional habitat modeling approaches have relied on one-dimensional hydraulic models and lumped compositional habitat metrics to describe aquatic habitat. A broader range of habitat descriptors is available when both composition and configuration of habitats is considered. Habitat metrics that consider both composition and configuration can be adapted from terrestrial biology. These metrics are most conveniently accessed with spatially explicit descriptors of the physical variables driving habitat composition. Two-dimensional hydrodynamic models have advanced to the point that they may provide the spatially explicit description of physical parameters needed to address this problem. This paper reports progress to date on applying two-dimensional hydraulic and habitat models on the Yellowstone and Missouri Rivers and uses examples from the Yellowstone River to illustrate the configurational metrics as a new tool for assessing riverine habitats.
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described via concrete algorithms.
USDA-ARS?s Scientific Manuscript database
Assessing the performance of Low Impact Development (LID) practices at a catchment scale is important in managing urban watersheds. Few modeling tools exist that are capable of explicitly representing the hydrological mechanisms of LIDs while considering the diverse land uses of urban watersheds. ...
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter, and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structural uncertainty, which will provide a more realistic estimation of model predictions.
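The Monte Carlo step described above, pooling residuals across parameter draws and reading off empirical quantiles, can be sketched as follows (function names are ours; the machine-learning encapsulation step of UNEEC is omitted):

```python
def prediction_interval(model, param_sampler, x_obs, y_obs,
                        n=500, q=(0.05, 0.95)):
    """Pool model residuals over n Monte Carlo parameter draws and return
    empirical quantiles of the pooled residual distribution as an
    uncertainty band (the UNEEC-P idea before ML encapsulation).
    model(x, theta) is the prediction; param_sampler() returns one
    parameter draw."""
    residuals = []
    for _ in range(n):
        theta = param_sampler()
        residuals.extend(y - model(x, theta) for x, y in zip(x_obs, y_obs))
    residuals.sort()
    lo = residuals[int(q[0] * (len(residuals) - 1))]
    hi = residuals[int(q[1] * (len(residuals) - 1))]
    return lo, hi
```

With a degenerate sampler (a single fixed parameter) the band collapses to the quantiles of the ordinary residuals, i.e., the original UNEEC assumption.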
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skrypnyk, T., E-mail: taras.skrypnyk@unimib.it, E-mail: tskrypnyk@imath.kiev.ua
Using the technique of classical r-matrices and quantum Lax operators, we construct the most general form of the quantum integrable “n-level, many-mode” spin-boson Jaynes-Cummings-Dicke-type Hamiltonians describing an interaction of a molecule of N n-level atoms with many modes of the electromagnetic field and containing, in general, additional non-linear interaction terms. We explicitly obtain the corresponding quantum Lax operators and spin-boson analogs of the generalized Gaudin Hamiltonians and prove their quantum commutativity. We investigate symmetries of the obtained models that are associated with the geometric symmetries of the classical r-matrices and construct the corresponding algebra of quantum integrals. We consider in detail three classes of non-skew-symmetric classical r-matrices with spectral parameters and explicitly obtain the corresponding quantum Lax operators and Jaynes-Cummings-Dicke-type Hamiltonians depending on the considered r-matrix.
Blackwood, Julie C; Hastings, Alan; Mumby, Peter J
2011-10-01
The interaction between multiple stressors on Caribbean coral reefs, namely, fishing effort and hurricane impacts, is a key element in the future sustainability of reefs. We develop an analytic model of coral-algal interactions and explicitly consider grazing by herbivorous reef fish. Further, we consider changes in structural complexity, or rugosity, in addition to the direct impacts of hurricanes, which are implemented as stochastic jump processes. The model simulations consider various levels of fishing effort corresponding to several hurricane frequencies and impact levels dependent on geographic location. We focus on relatively short time scales, so we do not explicitly include changes in ocean temperature, chemistry, or sea level rise. The general features of our approach would, however, apply to these other stressors and to the management of other systems in the face of multiple stressors. It is determined that the appropriate management policy, either local reef restoration or fisheries management, greatly depends on hurricane frequency and impact level. For sufficiently low hurricane impact and macroalgal growth rate, our results indicate that regions with lower-frequency hurricanes require stricter fishing regulations, whereas management in regions with higher-frequency hurricanes might be less concerned with enhancing grazing and instead consider whether local-scale restorative activities to increase vertical structure are cost-effective.
Modeling Active Aging and Explicit Memory: An Empirical Study.
Ponce de León, Laura Ponce; Lévy, Jean Pierre; Fernández, Tomás; Ballesteros, Soledad
2015-08-01
The rapid growth of the population of older adults and their concomitant psychological status and health needs have captured the attention of researchers and health professionals. To help fill the void of literature available to social workers interested in mental health promotion and aging, the authors provide a model for active aging that uses psychosocial variables. Structural equation modeling was used to examine the relationships among the latent variables of the state of explicit memory, the perception of social resources, depression, and the perception of quality of life in a sample of 184 older adults. The results suggest that explicit memory is not a direct indicator of the perception of quality of life, but it could be considered an indirect indicator as it is positively correlated with perception of social resources and negatively correlated with depression. These last two variables influenced the perception of quality of life directly, the former positively and the latter negatively. The main outcome suggests that the perception of social support improves explicit memory and quality of life and reduces depression in active older adults. The findings also suggest that gerontological professionals should design memory training programs, improve available social resources, and offer environments with opportunities to exercise memory.
NASA Astrophysics Data System (ADS)
He, Hongxing; Meyer, Astrid; Jansson, Per-Erik; Svensson, Magnus; Rütting, Tobias; Klemedtsson, Leif
2018-02-01
The symbiosis between plants and Ectomycorrhizal fungi (ECM) is shown to considerably influence the carbon (C) and nitrogen (N) fluxes between the soil, rhizosphere, and plants in boreal forest ecosystems. However, ECM are either neglected or presented as an implicit, undynamic term in most ecosystem models, which can potentially reduce the predictive power of models.
In order to investigate the necessity of an explicit consideration of ECM in ecosystem models, we implement the previously developed MYCOFON model into a detailed process-based, soil-plant-atmosphere model, Coup-MYCOFON, which explicitly describes the C and N fluxes between ECM and roots. This new Coup-MYCOFON model approach (ECM explicit) is compared with two simpler model approaches: one containing ECM implicitly as a dynamic uptake of organic N considering the plant roots to represent the ECM (ECM implicit), and the other a static N approach in which plant growth is limited to a fixed N level (nonlim). Parameter uncertainties are quantified using Bayesian calibration in which the model outputs are constrained to current forest growth and soil C / N ratio for four forest sites along a climate and N deposition gradient in Sweden and simulated over a 100-year period.
The nonlim approach could not describe the soil C / N ratio, due to a large overestimation of soil N sequestration, but simulated the forest growth reasonably well. The ECM implicit and explicit approaches both describe the soil C / N ratio well but slightly underestimate the forest growth. The implicit approach simulated lower litter production and soil respiration than the explicit approach. The ECM explicit Coup-MYCOFON model provides a more detailed description of internal ecosystem fluxes and feedbacks of C and N between plants, soil, and ECM. Our modeling highlights the need to incorporate ECM and organic N uptake into ecosystem models; the nonlim approach is not recommended for future long-term soil C and N predictions. We also provide a key set of posterior fungal parameters that can be further investigated and evaluated in future ECM studies.
Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls
NASA Technical Reports Server (NTRS)
Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk
1993-01-01
Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections have to be explicitly included. The closure solutions express the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, where no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.
NASA Technical Reports Server (NTRS)
Rosenchein, Stanley J.; Burns, J. Brian; Chapman, David; Kaelbling, Leslie P.; Kahn, Philip; Nishihara, H. Keith; Turk, Matthew
1993-01-01
This report is concerned with agents that act to gain information. In previous work, we developed agent models combining qualitative modeling with real-time control. That work, however, focused primarily on actions that affect physical states of the environment. The current study extends that work by explicitly considering problems of active information-gathering and by exploring specialized aspects of information-gathering in computational perception, learning, and language. In our theoretical investigations, we analyzed agents into their perceptual and action components and identified these with elements of a state-machine model of control. The mathematical properties of each were developed in isolation, and their interactions were then studied. We considered the complexity dimension and the uncertainty dimension and related these to intelligent-agent design issues. We also explored active information gathering in visual processing. Working within the active vision paradigm, we developed a concept of 'minimal meaningful measurements' suitable for demand-driven vision. We then developed and tested an architecture for ongoing recognition and interpretation of visual information. In the area of information gathering through learning, we explored techniques for coping with combinatorial complexity. We also explored information gathering through explicit linguistic action by considering the nature of conversational rules, coordination, and situated communication behavior.
Integrated earth system dynamic modeling for life cycle impact assessment of ecosystem services.
Arbault, Damien; Rivière, Mylène; Rugani, Benedetto; Benetto, Enrico; Tiruta-Barna, Ligia
2014-02-15
Despite the increasing awareness of our dependence on Ecosystem Services (ES), Life Cycle Impact Assessment (LCIA) does not explicitly and fully assess the damages caused by human activities on ES generation. Recent improvements in LCIA focus on specific cause-effect chains, mainly related to land use changes, leading to Characterization Factors (CFs) at the midpoint assessment level. However, despite the complexity and temporal dynamics of ES, current LCIA approaches consider the environmental mechanisms underneath ES to be independent from each other and devoid of dynamic character, leading to constant CFs whose representativeness is debatable. This paper takes a step forward and is aimed at demonstrating the feasibility of using an integrated earth system dynamic modeling perspective to retrieve time- and scenario-dependent CFs that consider the complex interlinkages between natural processes delivering ES. The GUMBO (Global Unified Metamodel of the Biosphere) model is used to quantify changes in ES production in physical terms - leading to midpoint CFs - and changes in human welfare indicators, which are considered here as endpoint CFs. The interpretation of the obtained results highlights the key methodological challenges to be solved to consider this approach as a robust alternative to the mainstream rationale currently adopted in LCIA. Further research should focus on increasing the granularity of environmental interventions in the modeling tools to match current standards in LCA and on adapting the conceptual approach to a spatially-explicit integrated model. Copyright © 2013 Elsevier B.V. All rights reserved.
Five challenges for spatial epidemic models
Riley, Steven; Eames, Ken; Isham, Valerie; Mollison, Denis; Trapman, Pieter
2015-01-01
Infectious disease incidence data are increasingly available at the level of the individual and include high-resolution spatial components. Therefore, we are now better able to challenge models that explicitly represent space. Here, we consider five topics within spatial disease dynamics: the construction of network models; characterising threshold behaviour; modelling long-distance interactions; the appropriate scale for interventions; and the representation of population heterogeneity. PMID:25843387
Fluid and Electrolyte Balance model (FEB)
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.
1973-01-01
The effects of various oral input water loads on solute and water distribution throughout the body are presented in the form of a model. The model was a three compartment model; the three compartments being plasma, interstitial fluid and cellular fluid. Sodium, potassium, chloride and urea were the only major solutes considered explicitly. The control of body water and electrolyte distribution was effected via drinking and hormone levels.
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Lohmann, Ulrike
2003-08-01
The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than what was observed.
NASA Astrophysics Data System (ADS)
Govind, Ajit; Chen, Jing Ming; Margolis, Hank; Ju, Weimin; Sonnentag, Oliver; Giasson, Marc-André
2009-04-01
A spatially explicit, process-based hydro-ecological model, BEPS-TerrainLab V2.0, was developed to improve the representation of ecophysiological, hydro-ecological and biogeochemical processes of boreal ecosystems in a tightly coupled manner. Several processes unique to boreal ecosystems were implemented, including sub-surface lateral water fluxes, stratification of vegetation into distinct layers for explicit ecophysiological representation, novel spatial upscaling strategies and biogeochemical processes. To account for preferential water fluxes common in humid boreal ecosystems, a novel scheme was introduced based on laboratory analyses. Leaf-scale ecophysiological processes were upscaled to canopy-scale by explicitly considering leaf physiological conditions as affected by light and water stress. The modified model was tested with 2 years of continuous measurements taken at the Eastern Old Black Spruce Site of the Fluxnet-Canada Research Network located in a humid boreal watershed in eastern Canada. Comparison of the simulated and measured ET, water-table depth (WTD), volumetric soil water content (VSWC) and gross primary productivity (GPP) revealed that BEPS-TerrainLab V2.0 simulates hydro-ecological processes with reasonable accuracy. The model was able to explain 83% of the ET variability, 92% of the GPP variability and 72% of the WTD dynamics. The model suggests that in humid ecosystems such as eastern North American boreal watersheds, topographically driven sub-surface baseflow is the main mechanism of soil water partitioning, which significantly affects the local-scale hydrological conditions.
Role of seasonality on predator-prey-subsidy population dynamics.
Levy, Dorian; Harrington, Heather A; Van Gorder, Robert A
2016-05-07
The role of seasonality on predator-prey interactions in the presence of a resource subsidy is examined using a system of non-autonomous ordinary differential equations (ODEs). The problem is motivated by the Arctic, inhabited by the ecological system of arctic foxes (predator), lemmings (prey), and seal carrion (subsidy). We construct two nonlinear, nonautonomous systems of ODEs named the Primary Model and the n-Patch Model. The Primary Model considers spatial factors implicitly, and the n-Patch Model considers space explicitly as a "Stepping Stone" system. We establish the boundedness of the dynamics, as well as the necessity of sufficiently nutritional food for the survival of the predator. We investigate the importance of including the resource subsidy explicitly in the model, and the importance of accounting for predator mortality during migration. We find a variety of non-equilibrium dynamics for both systems, obtaining both limit cycles and chaotic oscillations. We then discuss relevant implications for biologically interesting predator-prey systems that include a subsidy under seasonal effects. Notably, we can observe the extinction or persistence of a species when the corresponding autonomous system might predict the opposite. Copyright © 2016 Elsevier Ltd. All rights reserved.
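The Primary Model is a nonautonomous ODE system with seasonal forcing. A minimal sketch of that structure (the equations and every parameter value below are hypothetical illustrations, not the authors' calibrated system) can be integrated with a simple forward-Euler loop:

```python
import math

def step(x, y, s, t, dt,
         r=1.0, K=10.0, a=0.4, b=0.2, c=0.3, m=0.25,
         i0=0.5, eps=0.5, period=1.0):
    """One forward-Euler step for prey x, predator y, subsidy s (toy model)."""
    i_t = i0 * (1.0 + eps * math.sin(2.0 * math.pi * t / period))  # seasonal subsidy input
    dx = r * x * (1.0 - x / K) - a * x * y          # logistic prey minus predation
    dy = b * a * x * y + c * s * y - m * y          # predator growth from prey and subsidy
    ds = i_t - s - c * s * y                        # subsidy input, decay, consumption
    return x + dt * dx, y + dt * dy, s + dt * ds

def simulate(T=50.0, dt=0.001, x=2.0, y=0.5, s=0.5):
    t = 0.0
    while t < T:
        x, y, s = step(x, y, s, t, dt)
        t += dt
    return x, y, s

x, y, s = simulate()
```

With these illustrative parameters the three state variables stay positive and bounded, which is consistent in spirit with the boundedness result described in the abstract.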
FUEL3-D: A Spatially Explicit Fractal Fuel Distribution Model
Russell A. Parsons
2006-01-01
Efforts to quantitatively evaluate the effectiveness of fuels treatments are hampered by inconsistencies between the spatial scale at which fuel treatments are implemented and the spatial scale, and detail, with which we model fire and fuel interactions. Central to this scale inconsistency is the resolution at which variability within the fuel bed is considered. Crown...
ERIC Educational Resources Information Center
Demuth, Carolin; Keller, Heidi; Yovsi, Relindis D.
2012-01-01
Child rearing is a universal task, yet there are differing solutions according to the dynamics of socio-cultural milieu in which children are raised. Cultural models of what is considered good or bad parenting become explicit in everyday routine practices. Focusing on early mother-infant interactions in this article we examine the discursive…
A simple model of fluid flow and electrolyte balance in the body
NASA Technical Reports Server (NTRS)
White, R. J.; Neal, L.
1973-01-01
The model is basically a three-compartment model, the three compartments being the plasma, interstitial fluid and cellular fluid. Sodium, potassium, chloride and urea are the only major solutes considered explicitly. The control of body water and electrolyte distribution is effected via drinking and hormone levels. Basically, the model follows the effect of various oral input water loads on solute and water distribution throughout the body.
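A toy version of the three-compartment water shift can be sketched as follows. The relaxation rate and the textbook compartment volumes are illustrative assumptions; the actual model's solute and hormone dynamics are far richer.

```python
def redistribute(volumes, load, k=0.1, steps=200):
    """Add an oral water load to plasma, then let water redistribute by
    first-order relaxation toward fixed target fractions of total body water.
    volumes: dict of litres for plasma, interstitial, cellular (toy values)."""
    target = {"plasma": 3.0 / 42.0, "interstitial": 11.0 / 42.0, "cellular": 28.0 / 42.0}
    v = dict(volumes)
    v["plasma"] += load                                  # oral load absorbed into plasma
    for _ in range(steps):
        total = sum(v.values())                          # total water is conserved
        for c in v:
            v[c] += k * (target[c] * total - v[c])       # relax toward osmotic targets
    return v

v = redistribute({"plasma": 3.0, "interstitial": 11.0, "cellular": 28.0}, load=1.0)
```

Each relaxation sweep conserves total water exactly, so the added litre ends up shared across compartments in proportion to the target fractions.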
Modeling delamination of FRP laminates under low velocity impact
NASA Astrophysics Data System (ADS)
Jiang, Z.; Wen, H. M.; Ren, S. L.
2017-09-01
Fiber reinforced plastic (FRP) laminates have been increasingly used in various engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model considers FRP laminates as constituted of unidirectional laminated plates with adhesive layers. A modified adhesive layer damage model that considers strain rate effects is incorporated into the ABAQUS/Explicit finite element program by the user-defined material subroutine VUMAT. It transpires that the delamination predicted by the present model is in good agreement with the experimental results for low velocity impact.
Organic aerosol sources and partitioning in CMAQv5.2
We describe a major CMAQ update, available in version 5.2, which explicitly treats the semivolatile mass transfer of primary organic aerosol compounds, in agreement with available field and laboratory observations. Until this model release, CMAQ has considered these compounds to ...
SPATIALLY EXPLICIT POPULATION MODELS FOR RISK ASSESSMENT: COMMON LOONS AND MERCURY AS A CASE STUDY
Factors that significantly impact population dynamics, such as resource availability and exposure to stressors, frequently vary over space and thereby determine the heterogeneous spatial distributions of organisms. Considering this fact, the US Environmental Protection Agency's ...
NASA Astrophysics Data System (ADS)
Yetna n'jock, M.; Houssem, B.; Labergere, C.; Saanouni, K.; Zhenming, Y.
2018-05-01
Springback is an important phenomenon which accompanies the forming of metallic sheets, especially for high strength materials. A quantitative prediction of springback becomes very important for newly developed materials with high mechanical characteristics. In this work, a numerical methodology is developed to quantify this undesirable phenomenon. This methodology is based on the use of both the explicit and implicit finite element solvers of Abaqus®. The most important ingredient of this methodology is the use of a highly predictive mechanical model. A thermodynamically-consistent, non-associative and fully anisotropic elastoplastic constitutive model, strongly coupled with isotropic ductile damage and accounting for distortional hardening, is then used. An algorithm for local integration of the complete set of constitutive equations is developed. This algorithm uses the rotated frame formulation (RFF) to ensure the incremental objectivity of the model in the framework of finite strains. It is implemented in both the explicit (Abaqus/Explicit®) and implicit (Abaqus/Standard®) solvers of Abaqus® through the user routines VUMAT and UMAT, respectively. The implicit solver of Abaqus® has been used to study springback, as it is generally a quasi-static unloading. In order to compare the methods' efficiency, the explicit dynamic relaxation method proposed by Rayleigh has also been used for springback prediction. The results obtained for the U draw/bending benchmark are studied, discussed and compared with experimental results as reference. Finally, the purpose of this work is to evaluate the reliability of the different methods to efficiently predict springback in sheet metal forming.
On integrability of the Yang-Baxter {sigma}-model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimcik, Ctirad
2009-04-15
We prove that the recently introduced Yang-Baxter {sigma}-model can be considered as an integrable deformation of the principal chiral model. We find also an explicit one-to-one map transforming every solution of the principal chiral model into a solution of the deformed model. With the help of this map, the standard procedure of the dressing of the principal chiral solutions can be directly transferred into the deformed Yang-Baxter context.
Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Harding, D. D.
1977-01-01
The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions; those assumptions are isolated and examined. A unique approach, termed the explicit model, is presented in which all energy sources and sinks of a droplet may be considered. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to studies of pollution scavenging and acid rain.
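The classical simplified growth law that the thesis's explicit model refines is dr/dt = G·S/r, whose closed-form solution is easily checked numerically. The growth coefficient G and supersaturation S below are hypothetical round numbers for illustration only:

```python
import math

def droplet_radius(r0, S, t, G=1e-10):
    """Radius [m] after time t [s] of diffusional growth dr/dt = G*S/r at
    constant supersaturation S, i.e. r(t) = sqrt(r0^2 + 2*G*S*t)."""
    return math.sqrt(r0**2 + 2.0 * G * S * t)

# a 1-micron droplet after 100 s at 1% supersaturation (toy values)
r = droplet_radius(r0=1e-6, S=0.01, t=100.0)
```

Because dr/dt falls off as 1/r, small droplets grow faster in radius than large ones, narrowing the size distribution — one of the simplifications the explicit energy-budget model is meant to re-examine.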
Vibration signal models for fault diagnosis of planet bearings
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.
2016-05-01
Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing spinning and revolution. Therefore, planet bearing vibrations are highly intricate and their fault characteristics are completely different from those of the fixed-axis case, making planet bearing fault diagnosis a difficult topic. In order to address this issue, we derive the explicit equations for calculating the characteristic frequencies of outer race, rolling element and inner race faults, considering the complex motion of planet bearings. We also develop the planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, the time-varying angle between the gear pair mesh and the fault induced impact force, as well as the time-varying vibration transfer path. Based on the developed signal models, we derive the explicit equations of the Fourier spectrum in each fault case, and summarize the vibration spectral characteristics of each. The theoretical derivations are illustrated by numerical simulation and further validated experimentally, and all three fault cases (i.e. outer race, rolling element and inner race localized faults) are diagnosed.
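For orientation, the classical fixed-axis characteristic fault frequencies are given by standard textbook formulas; the paper's contribution is modified expressions that account for the planet bearing's combined spin and revolution, which differ from these. The geometry numbers below are illustrative:

```python
import math

def bearing_fault_freqs(fr, n, d, D, phi=0.0):
    """Classical fixed-axis bearing fault frequencies.
    fr: shaft rate [Hz]; n: number of rollers; d: roller diameter;
    D: pitch diameter; phi: contact angle [rad]."""
    ratio = (d / D) * math.cos(phi)
    return {
        "BPFO": 0.5 * n * fr * (1.0 - ratio),           # outer-race pass frequency
        "BPFI": 0.5 * n * fr * (1.0 + ratio),           # inner-race pass frequency
        "BSF": 0.5 * (D / d) * fr * (1.0 - ratio**2),   # roller spin frequency
        "FTF": 0.5 * fr * (1.0 - ratio),                # cage (train) frequency
    }

f = bearing_fault_freqs(fr=10.0, n=9, d=7.0, D=34.0, phi=0.0)
```

A useful sanity check is that BPFO + BPFI = n·fr, which also holds in the fixed-axis case sketched here.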
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
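The QMX-R bookkeeping can be sketched as follows. The connectivity matrix, emissions, loss rate and residence times are all hypothetical, and the single lumped loss rate stands in for the combined volatilization, sedimentation and degradation processes described above:

```python
import math

def steady_state_loads(connect, emissions, k, tau):
    """Steady-state chemical load leaving each river reach.
    connect[i][j] = 1 if reach j discharges into reach i; reaches are indexed
    so that every upstream reach precedes its downstream reaches.
    emissions: direct load to each reach [kg/d]; k: lumped overall loss rate
    [1/d]; tau: residence time of each reach [d]."""
    n = len(emissions)
    out = [0.0] * n
    for i in range(n):
        inflow = sum(connect[i][j] * out[j] for j in range(i))
        out[i] = (emissions[i] + inflow) * math.exp(-k * tau[i])  # surviving load
    return out

# three reaches in series: 0 -> 1 -> 2, with emissions into reaches 0 and 2
loads = steady_state_loads(
    connect=[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    emissions=[10.0, 0.0, 5.0], k=0.2, tau=[1.0, 1.0, 1.0])
```

Changing the segmentation then only means editing the matrix entries, not revising the model — the flexibility the abstract emphasizes.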
Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery
NASA Astrophysics Data System (ADS)
Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.
2017-12-01
In this paper we consider the problem of planning and optimization of cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. Skin tissue is considered as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For the case of cutaneous cryosurgery, mounting an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.
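A one-dimensional explicit sketch of the Pennes bioheat equation illustrates the kind of scheme involved. This toy version assumes uniform tissue with hypothetical round-number properties and omits the phase averaging, freezing front and flux relaxation of the actual model:

```python
def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_cb=2000.0, Ta=37.0, qm=400.0):
    """One explicit finite-volume step of rho*c dT/dt = k d2T/dx2
    + w_cb*(Ta - T) + qm; T is a list of node temperatures [deg C]."""
    n = len(T)
    Tn = T[:]
    for i in range(1, n - 1):
        cond = k * (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx**2   # conduction balance
        perf = w_cb * (Ta - T[i])                               # blood perfusion term
        Tn[i] = T[i] + dt * (cond + perf + qm) / rho_c
    return Tn  # boundary nodes held fixed (Dirichlet)

# cryoprobe at -40 C on the left boundary, body core at 37 C elsewhere
T = [-40.0] + [37.0] * 9
for _ in range(1000):
    T = pennes_step(T, dx=0.001, dt=0.05)
```

The time step respects the explicit stability limit dt < dx²/(2·k/ρc); the flux relaxation method mentioned in the abstract is one way to loosen that restriction.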
Latash, M L; Gottlieb, G L
1991-09-01
We describe a model for the regulation of fast, single-joint movements, based on the equilibrium-point hypothesis. Limb movement follows constant rate shifts of independently regulated neuromuscular variables. The independently regulated variables are tentatively identified as thresholds of a length sensitive reflex for each of the participating muscles. We use the model to predict EMG patterns associated with changes in the conditions of movement execution, specifically, changes in movement times, velocities, amplitudes, and moments of limb inertia. The approach provides a theoretical neural framework for the dual-strategy hypothesis, which considers certain movements to be results of one of two basic, speed-sensitive or speed-insensitive strategies. This model is advanced as an alternative to pattern-imposing models based on explicit regulation of timing and amplitudes of signals that are explicitly manifest in the EMG patterns.
Learning with Hypertext Learning Environments: Theory, Design, and Research.
ERIC Educational Resources Information Center
Jacobson, Michael J.; And Others
1996-01-01
Studied 69 undergraduates who used conceptually-indexed hypertext learning environments with differently structured thematic criss-crossing (TCC) treatments: guided and learner selected. Found that students need explicit modeling and scaffolding support to learn complex knowledge from these learning environments, and considers implications for…
Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...
Exploratory Bi-factor Analysis: The Oblique Case.
Jennrich, Robert I; Bentler, Peter M
2012-07-01
Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading matrix that has an approximate bi-factor structure. Among other things this can be used as an aid in finding an explicit bi-factor structure for use in a confirmatory bi-factor analysis. They considered only orthogonal rotation. The purpose of this paper is to consider oblique rotation and to compare it to orthogonal rotation. Because there are many more oblique rotations of an initial loading matrix than orthogonal rotations, one expects the oblique results to approximate a bi-factor structure better than orthogonal rotations, and this is indeed the case. A surprising result arises when oblique bi-factor rotation methods are applied to ideal data.
Five challenges for spatial epidemic models.
Riley, Steven; Eames, Ken; Isham, Valerie; Mollison, Denis; Trapman, Pieter
2015-03-01
Infectious disease incidence data are increasingly available at the level of the individual and include high-resolution spatial components. Therefore, we are now better able to challenge models that explicitly represent space. Here, we consider five topics within spatial disease dynamics: the construction of network models; characterising threshold behaviour; modelling long-distance interactions; the appropriate scale for interventions; and the representation of population heterogeneity. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Verification of Methods for Assessing the Sustainability of Monitored Natural Attenuation (MNA)
2013-01-01
surface CVOC chlorinated volatile organic compound DCE cis-1,2-Dichloroethylene DNAPL dense non-aqueous phase liquid DO dissolved oxygen DOC...considered detailed representations of aquifer heterogeneity, DNAPL distributions, and interfacial surface area. Thus, the upscaled SZD function considers...the effects of decreases in interfacial surface area with time as NAPL mass depletes, but not in an explicit manner. Likewise, the upscaled model is
Dissipation models for central difference schemes
NASA Astrophysics Data System (ADS)
Eliasson, Peter
1992-12-01
In this paper different flux limiters are used to construct dissipation models. The flux limiters are usually of Total Variation Diminishing (TVD) type and are applied to the characteristic variables of the hyperbolic Euler equations in one, two or three dimensions. A number of simplified dissipation models with a reduced number of limiters are considered to reduce the computational effort. The most simplified methods use only one limiter; the dissipation model by Jameson belongs to this class, since the Jameson pressure switch is considered as a limiter, though not a TVD one. Other one-limiter models with TVD limiters are also investigated. Models in between the most simplified one-limiter models and the full model with limiters on all the different characteristics are considered, where different dissipation models are applied to the linear and non-linear characteristics. In this paper the theory by Yee is extended to a general explicit Runge-Kutta type of scheme.
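Two classical TVD limiters and a Jameson-style pressure switch can be written down compactly. These are the standard textbook forms, shown for orientation rather than as the paper's exact dissipation models:

```python
def minmod(a, b):
    """Minmod limiter on two slopes: zero at an extremum, else the smaller slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def van_leer(r):
    """Van Leer limiter as a function of the slope ratio r (zero for r <= 0)."""
    return (r + abs(r)) / (1.0 + abs(r))

def jameson_switch(p_prev, p, p_next):
    """Jameson pressure-based shock sensor: acts like a limiter but is not TVD."""
    return abs(p_next - 2.0 * p + p_prev) / (p_next + 2.0 * p + p_prev)
```

Both TVD limiters vanish at local extrema, which is what prevents the limited reconstruction from creating new oscillations; the pressure switch instead measures second differences of pressure and is zero only in smooth regions.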
Performance of the reverse Helmbold universal portfolio
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Kuang, Kee Seng; Lee, Yap Jia
2017-04-01
The universal portfolio is an important investment strategy in a stock market where no stochastic model is assumed for the stock prices. The zero-gradient set of the objective function estimating the next-day portfolio, which contains the reverse Kullback-Leibler order-alpha divergence, is considered. From the zero-gradient set, the explicit, reverse Helmbold universal portfolio is obtained. The performance of the explicit, reverse Helmbold universal portfolio is studied by running it on some stock-price data sets from the local stock exchange. It is possible to increase the wealth of the investor by using these portfolios in investment.
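For context, the standard Helmbold et al. exponentiated-gradient portfolio update is shown below; the paper's "reverse" variant, derived from the reverse Kullback-Leibler divergence, leads to a different explicit formula. The learning rate and price relatives here are illustrative:

```python
import math

def helmbold_update(b, x, eta=0.05):
    """One exponentiated-gradient step: b_i <- b_i * exp(eta * x_i / (b . x)),
    renormalized. b: current weights; x: day's price relatives."""
    dot = sum(bi * xi for bi, xi in zip(b, x))
    w = [bi * math.exp(eta * xi / dot) for bi, xi in zip(b, x)]
    s = sum(w)
    return [wi / s for wi in w]

b = [0.5, 0.5]
for x in [(1.02, 0.99), (1.01, 1.00), (0.98, 1.03)]:  # toy price relatives
    b = helmbold_update(b, x)
```

The multiplicative form keeps every weight strictly positive and the portfolio fully invested after each trading day.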
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; García-Ravelo, Jesús; Pacheco-García, Christian
We consider the Schrödinger equation in the Thomas–Fermi field, a model that has been used for describing electron systems in δ-doped semiconductors. It is shown that the problem becomes exactly-solvable if a particular effective (position-dependent) mass distribution is incorporated. Orthogonal sets of normalizable bound state solutions are constructed in explicit form, and the associated energies are determined. We compare our results with the corresponding findings on the constant-mass problem discussed by Ioriatti (1990) [13]. -- Highlights: ► We introduce an exactly solvable, position-dependent mass model for the Thomas–Fermi potential. ► Orthogonal sets of solutions to our model are constructed in closed form. ► Relation to delta-doped semiconductors is discussed. ► Explicit subband bottom energies are calculated and compared to results obtained in a previous study.
NASA Astrophysics Data System (ADS)
Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime
2017-11-01
Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.
The kinetics of heterogeneous nucleation and growth: an approach based on a grain explicit model
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Maillet, J.-B.; Denoual, C.
2014-04-01
A model for phase transitions initiated on grain boundaries is proposed and tested against numerical simulations: this approach, based on a grain explicit model, allows us to consider the granular structure, resulting in accurate predictions for a wide span of nucleation processes. Comparisons are made with classical models of homogeneous (JMAK: Johnson and Mehl 1939 Trans. Am. Inst. Min. Eng. 135 416; Avrami 1939 J. Chem. Phys. 7 1103; Kolmogorov 1937 Bull. Acad. Sci. USSR, Mat. Ser. 1 335) as well as heterogeneous (Cahn 1996 Thermodynamics and Kinetics of Phase Transformations Im et al (Pittsburgh: Materials Research Society)) nucleation. A transition scale based on material properties is proposed, allowing us to discriminate between random and site-saturated regimes. Finally, we discuss the relationship between an Avrami-type exponent and the transition regime, establishing conditions for its extraction from experiments.
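The homogeneous JMAK (Avrami) kinetics against which the grain-explicit model is compared take a one-line closed form; the rate constant and exponent below are illustrative values, not fitted ones:

```python
import math

def avrami(t, k=0.1, n=3.0):
    """JMAK transformed volume fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t**n)

X = [avrami(t) for t in (0.0, 1.0, 2.0, 4.0)]
```

The Avrami exponent n encodes the nucleation and growth regime, which is why the abstract relates an Avrami-type exponent to the transition between random and site-saturated nucleation.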
Moving forward socio-economically focused models of deforestation.
Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno
2017-09-01
Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevancy of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study aims to propose a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted based on socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, and gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in high deforestation areas, suggesting a better integration of socio-economic processes in the models. Whilst considering deforestation as a purely geographical process contributes to the creation of conservative models unable to effectively assess changes in the socio-economic and political contexts influencing deforestation trends, this explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.
Object links in the repository
NASA Technical Reports Server (NTRS)
Beck, Jon; Eichmann, David
1991-01-01
Some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life-cycle of software development are explored. In particular, we wish to consider a model which provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The model we consider uses object-oriented terminology. Thus, the lattice is viewed as a data structure which contains class objects which exhibit inheritance. A description of the types of objects in the repository is presented, followed by a discussion of how they interrelate. We discuss features of the object-oriented model which support these objects and their links, and consider behavior which an implementation of the model should exhibit. Finally, we indicate some thoughts on implementing a prototype of this repository architecture.
Multilingual Thesauri for the Modern World - No Ideal Solution?
ERIC Educational Resources Information Center
Jorna, Kerstin; Davies, Sylvie
2001-01-01
Discusses thesauri as tools for multilingual information retrieval and cross-cultural communication. Considers the need for multilingual thesauri and the importance of explicit conceptual structures, and introduces a pilot thesaurus, InfoDEFT (Information Deutsch-English-Francais Thesaurus), as a possible model for new online thesauri which are…
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
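The binned distance-sampling likelihood at the core of these HDS models can be sketched minimally. The following Python fragment is illustrative only (the chapter's own implementations use the R package unmarked); the distance breaks and the `sigma` scale of the half-normal detection function are arbitrary assumptions. It computes the multinomial cell probabilities for line-transect distance bands: the average detection probability within each band times the (uniform) probability of an individual occurring in that band.

```python
import math

def halfnormal_p(d, sigma):
    """Half-normal detection probability at perpendicular distance d."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def cell_probs(breaks, sigma, n=1000):
    """Multinomial cell probabilities for binned line-transect data:
    mean detection probability within each distance band (midpoint rule)
    times the probability of occurring in that band."""
    B = breaks[-1]
    probs = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        w = (hi - lo) / n
        pbar = sum(halfnormal_p(lo + (i + 0.5) * w, sigma) for i in range(n)) / n
        probs.append(pbar * (hi - lo) / B)
    return probs

# Four 25 m bands out to a 100 m truncation distance (assumed values)
probs = cell_probs([0.0, 25.0, 50.0, 75.0, 100.0], sigma=40.0)
```

The probabilities decline with distance, and their sum is the overall detection probability within the truncation distance, which is what links the observed counts back to abundance.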
NASA Astrophysics Data System (ADS)
Cocco, Alex P.; Nakajo, Arata; Chiu, Wilson K. S.
2017-12-01
We present a fully analytical, heuristic model - the "Analytical Transport Network Model" - for steady-state, diffusive, potential flow through a 3-D network. Employing a combination of graph theory, linear algebra, and geometry, the model explicitly relates a microstructural network's topology and the morphology of its channels to an effective material transport coefficient (a general term meant to encompass, e.g., conductivity or diffusion coefficient). The model's transport coefficient predictions agree well with those from electrochemical fin (ECF) theory and finite element analysis (FEA), but are computed 0.5-1.5 and 5-6 orders of magnitude faster, respectively. In addition, the theory explicitly relates a number of morphological and topological parameters directly to the transport coefficient, whereby the distributions that characterize the structure are readily available for further analysis. Furthermore, ATN's explicit development provides insight into the nature of the tortuosity factor and offers the potential to apply theory from network science and to consider the optimization of a network's effective resistance in a mathematically rigorous manner. The ATN model's speed and relative ease-of-use offer the potential to aid in accelerating the design (with respect to transport), and thus reducing the cost, of energy materials.
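The graph-theoretic core of such a network model can be illustrated with a small sketch. The Python fragment below is an assumption-laden toy, not the authors' ATN code: it computes the effective conductance between two terminal nodes of a channel network from its weighted graph Laplacian, the same linear potential-flow calculation that this class of models builds on.

```python
import numpy as np

def effective_conductance(edges, n, source, sink):
    """Effective conductance between two terminals of a resistor-like
    network under steady potential flow. edges: list of (i, j, g) with
    channel conductance g; nodes are numbered 0..n-1."""
    L = np.zeros((n, n))
    for i, j, g in edges:
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    # Ground the sink node and inject unit current at the source
    keep = [k for k in range(n) if k != sink]
    b = np.zeros(n); b[source] = 1.0
    v = np.zeros(n)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    # unit current / potential drop = effective conductance
    return 1.0 / (v[source] - v[sink])

# Two parallel two-channel paths between node 0 and node 3
g = effective_conductance([(0, 1, 2.0), (1, 3, 2.0),
                           (0, 2, 2.0), (2, 3, 2.0)], 4, 0, 3)
```

Each path of two series conductances 2.0 has conductance 1.0, and the two paths in parallel give 2.0, so the sketch can be checked by hand.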
Moderators of Implicit-Explicit Exercise Cognition Concordance.
Berry, Tanya R; Rodgers, Wendy M; Markland, David; Hall, Craig R
2016-12-01
Investigating implicit-explicit concordance can aid in understanding underlying mechanisms and possible intervention effects. This research examined the concordance between implicit associations of exercise with health or appearance and related explicit motives. Variables considered as possible moderators were behavioral regulations, explicit attitudes, and social desirability. Participants (N = 454) completed measures of implicit associations of exercise with health and appearance and questionnaire measures of health and appearance motives, attitudes, social desirability, and behavioral regulations. Attitudes significantly moderated the relationship between implicit associations of exercise with health and health motives. Identified regulations significantly moderated implicit-explicit concordance with respect to associations with appearance. These results suggest that implicit and explicit exercise-related cognitions are not necessarily independent and their relationship to each other may be moderated by attitudes or some forms of behavioral regulation. Future research that takes a dual-processing approach to exercise behavior should consider potential theoretical moderators of concordance.
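Moderation analyses of the kind described here are conventionally estimated as interaction terms in a regression. A minimal Python sketch on synthetic data follows; all variable names and effect sizes are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
implicit = rng.normal(size=n)    # implicit association score (synthetic)
moderator = rng.normal(size=n)   # e.g. an explicit attitude measure
# Simulate an explicit motive whose link to the implicit score
# strengthens with the moderator (true interaction coefficient = 0.5)
explicit = (0.3 * implicit + 0.2 * moderator
            + 0.5 * implicit * moderator
            + rng.normal(scale=0.5, size=n))

# OLS with an interaction term: explicit ~ implicit * moderator.
# A nonzero coefficient on the product term is the moderation effect.
X = np.column_stack([np.ones(n), implicit, moderator,
                     implicit * moderator])
beta, *_ = np.linalg.lstsq(X, explicit, rcond=None)
```

Here `beta[3]` recovers the simulated interaction, i.e. how much the implicit-explicit slope changes per unit of the moderator.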
Classical integrable defects as quasi Bäcklund transformations
NASA Astrophysics Data System (ADS)
Doikou, Anastasia
2016-10-01
We consider the algebraic setting of classical defects in discrete and continuous integrable theories. We derive the "equations of motion" on the defect point via the space-like and time-like description. We then exploit the structural similarity of these equations with the discrete and continuous Bäcklund transformations. Although these equations are similar, they are not exactly the same as the Bäcklund transformations. We also consider specific examples of integrable models to demonstrate our construction, i.e. the Toda chain and the sine-Gordon model. The equations of the time (space) evolution of the defect (discontinuity) degrees of freedom for these models are explicitly derived.
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
Interfaces Leading Groups of Learners to Make Their Shared Problem-Solving Organization Explicit
ERIC Educational Resources Information Center
Moguel, P.; Tchounikine, P.; Tricot, A.
2012-01-01
In this paper, we consider collective problem-solving challenges and a particular structuring objective: lead groups of learners to make their shared problem-solving organization explicit. Such an objective may be considered as a way to lead learners to consider building and maintaining a shared organization, and/or as a way to provide a basis for…
NASA Astrophysics Data System (ADS)
Gallard Martínez, Alejandro J.
2011-09-01
This forum considers argumentation as a means of science teaching in South African schools through the integration of indigenous knowledge (IK). It addresses issues raised in Mariana G. Hewson and Meshach B. Ogunniyi's paper "Argumentation-teaching as a method to introduce indigenous knowledge into science classrooms: opportunities and challenges"; in Peter Easton's "Hawks and baby chickens: cultivating the sources of indigenous science education"; and in Femi S. Otulaja, Ann Cameron and Audrey Msimanga's "Rethinking argumentation-teaching strategies and indigenous knowledge in South African science classrooms". The first topic addressed is that implementation of argumentation in the science classroom becomes a complex endeavor when the tensions between students' IK, the educational infrastructure (allowance for teacher professional development, etc.) and local belief systems are made explicit. Secondly, western styles of debate become mitigating factors because they do not always translate adequately to South African culture. For example, in many instances it is more culturally acceptable in South Africa to build consensus than to be confrontational. Thirdly, the tension between what is "authentic science" and what is not becomes an influencing factor when a tension is created between IK and western science. Finally, I argue that the thrust of argumentation is to set students up as "scientist-students" who will be considered through a deficit model by judging their habitus and cultural capital. Explicitly, a "scientist-student" is a student who has "learned," modeled and thoroughly assimilated the habits of western scientists, and who will be judged by, and held accountable for, the demonstration of the related behaviors in the science classroom.
I propose that science teaching, to include argumentation, should consist of "listening carefully" (radical listening) to students and valuing their language, culture, and learning as a model for "science for all".
An image-based skeletal dosimetry model for the ICRP reference newborn—internal electron sources
NASA Astrophysics Data System (ADS)
Pafundi, Deanna; Rajon, Didier; Jokisch, Derek; Lee, Choonsik; Bolch, Wesley
2010-04-01
In this study, a comprehensive electron dosimetry model of newborn skeletal tissues is presented. The model is constructed using the University of Florida newborn hybrid phantom of Lee et al (2007 Phys. Med. Biol. 52 3309-33), the newborn skeletal tissue model of Pafundi et al (2009 Phys. Med. Biol. 54 4497-531) and the EGSnrc-based Paired Image Radiation Transport code of Shah et al (2005 J. Nucl. Med. 46 344-53). Target tissues include the active bone marrow (surrogate tissue for hematopoietic stem cells), shallow marrow (surrogate tissue for osteoprogenitor cells) and unossified cartilage (surrogate tissue for chondrocytes). Monoenergetic electron emissions are considered over the energy range 1 keV to 10 MeV for the following source tissues: active marrow, trabecular bone (surfaces and volumes), cortical bone (surfaces and volumes) and cartilage. Transport results are reported as specific absorbed fractions according to the MIRD schema and are given as skeletal-averaged values in the paper with bone-specific values reported in both tabular and graphic format as electronic annexes (supplementary data). The method utilized in this work uniquely includes (1) explicit accounting for the finite size and shape of newborn ossification centers (spongiosa regions), (2) explicit accounting for active and shallow marrow dose from electron emissions in cortical bone as well as sites of unossified cartilage, (3) proper accounting of the distribution of trabecular and cortical volumes and surfaces in the newborn skeleton when considering mineral bone sources and (4) explicit consideration of the marrow cellularity changes for active marrow self-irradiation as applicable to radionuclide therapy of diseased marrow in the newborn child.
Cryptography from noisy storage.
Wehner, Stephanie; Schaffner, Christian; Terhal, Barbara M
2008-06-06
We show how to implement cryptographic primitives based on the realistic assumption that quantum storage of qubits is noisy. We thereby consider individual-storage attacks; i.e., the dishonest party attempts to store each incoming qubit separately. Our model is similar to the model of bounded quantum storage; however, we consider an explicit noise model inspired by present-day technology. To illustrate the power of this new model, we show that a protocol for oblivious transfer is secure for any amount of quantum-storage noise, as long as honest players can perform perfect quantum operations. Our model also allows us to show the security of protocols that cope with noise in the operations of the honest players and achieve more advanced tasks such as secure identification.
NASA Astrophysics Data System (ADS)
Govind, A.; Chen, J. M.; Margolis, H.
2007-12-01
Current estimates of terrestrial carbon overlook the effects of topographically-driven lateral flow of soil water. We hypothesize that this component, which occurs at the landscape or watershed scale, has a significant influence on the spatial distribution of carbon, due to its large contribution to the local water balance. To this end, we further developed a spatially explicit ecohydrological model, BEPS-TerrainLab V2.0. We simulated the coupled hydrological and carbon cycle processes in a black spruce-moss ecosystem in central Quebec, Canada. The carbon stocks were initialized using a long-term carbon cycling model, InTEC, under a climate change and disturbance scenario, the accuracy of which was determined with inventory plot measurements. Further, we simulated and validated several ecosystem indicators such as ET, GPP, NEP, water table, snow depth and soil temperature, using measurements for two years, 2004 and 2005. After gaining confidence in the model's ability to simulate ecohydrological processes, we tested the influence of lateral water flow on the carbon cycle. We ran three hydrological modeling scenarios: 1) Explicit, where realistic lateral water routing was considered; 2) Implicit, where calculations were based on a bucket modeling approach; and 3) NoFlow, where lateral water flow was turned off in the model. The results showed that pronounced anomalies exist among the scenarios for the simulated GPP, ET and NEP. In general, the Implicit calculation overestimated GPP and underestimated NEP, as opposed to the Explicit simulation; NoFlow underestimated GPP and overestimated NEP. The key processes controlling GPP were manifested through stomatal conductance, which decreases under conditions of rapid soil saturation (NoFlow) or increases in the Implicit case, and nitrogen availability, which affects Vcmax, the maximum carboxylation rate.
However, for NEP, the anomalies were attributed to differences in soil carbon pool decomposition, which determines the heterotrophic respiration and the resultant nitrogen mineralization, which in turn affects GPP and several other feedback mechanisms. These results suggest that lateral water flow plays a significant role in the terrestrial carbon distribution. Therefore, regional or global scale terrestrial carbon estimates could have significant errors if proper hydrological constraints are not considered when modeling ecological processes, given the large topographic variations on the Earth's surface. For more info please visit: http://ajit.govind.googlepages.com/agu2007
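The contrast between the routed and bucket scenarios can be illustrated with a toy water balance. The sketch below is a deliberately crude 1-D hillslope, not BEPS-TerrainLab; the rain rate, drainage fraction, and cell count are invented. It shows the qualitative effect the abstract describes: routing drainage downslope saturates lower cells relative to treating each cell as an independent bucket.

```python
def hillslope(steps, n=5, rain=1.0, k=0.5, lateral=True):
    """Toy water balance on a 1-D hillslope of n cells (cell n-1 is the
    outlet). Each step: uniform rain falls, then each cell drains a
    fraction k of its storage. With lateral=True the drainage is routed
    to the downslope neighbour (an 'Explicit' analogue); with
    lateral=False ('bucket'/'NoFlow' analogue) it simply leaves."""
    s = [0.0] * n
    for _ in range(steps):
        s = [si + rain for si in s]
        drain = [k * si for si in s]
        s = [si - d for si, d in zip(s, drain)]
        if lateral:
            for i in range(n - 1):
                s[i + 1] += drain[i]  # route downslope
    return s

routed = hillslope(50, lateral=True)
bucket = hillslope(50, lateral=False)
```

The uppermost cell is identical in both runs (it receives no inflow), while downslope storage grows markedly under routing, the mechanism behind the GPP/NEP anomalies among the scenarios.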
Simulating spatial and temporal context of forest management using hypothetical landscapes
Eric J. Gustafson; Thomas R. Crow
1998-01-01
Spatially explicit models that combine remote sensing with geographic information systems (GIS) offer great promise to land managers because they consider the arrangement of landscape elements in time and space. Their visual and geographic nature facilitate the comparison of alternative landscape designs. Among various activities associated with forest management,...
Amount and type of forest cover and edge are important predictorsof golden-cheeked warbler density
Rebecca G. Peak; Frank R. III. Thompson
2013-01-01
Considered endangered by the U.S. Fish and Wildlife Service, the Golden-cheeked Warbler (Setophaga chrysoparia) breeds exclusively in the juniper--oak (Juniperus ashei--Quercus spp.) woodlands of central Texas. Large-scale, spatially explicit models that predict population density as a function of habitat and landscape variables...
A Qualitative Assessment of the Meaning of Shared Governance at a Parochial University
ERIC Educational Resources Information Center
Glover-Alves, Shaton Monique
2012-01-01
Shared governance is a treasured tradition of academe. Problems of administrative practice arise when meanings and definitions of shared governance are undefined and implicit rather than defined and explicit. What are the meanings and definitions of shared governance when several governance models are considered and how does shared governance…
Does encouraging the use of wetlands in water quality trading programs make economic sense? journal
This paper examines a proposal to incorporate the use of wetlands in water quality trading (WQT) programs in order to meet national wetlands goals and advance WQT. It develops a competitive WQT model wherein wetland services are explicitly considered. To participate in a WQT pro...
Interrelations between different canonical descriptions of dissipative systems
NASA Astrophysics Data System (ADS)
Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.
2015-04-01
There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.
Tuning the cosmological constant, broken scale invariance, unitarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Förste, Stefan; Manz, Paul; Physikalisches Institut der Universität Bonn,Nussallee 12, 53115 Bonn
2016-06-10
We study gravity coupled to a cosmological constant and a scale but not conformally invariant sector. In Minkowski vacuum, scale invariance is spontaneously broken. We consider small fluctuations around the Minkowski vacuum. At the linearised level we find that the trace of metric perturbations receives a positive or negative mass squared contribution. However, only for the Fierz-Pauli combination the theory is free of ghosts. The mass term for the trace of metric perturbations can be cancelled by explicitly breaking scale invariance. This reintroduces fine-tuning. Models based on four form field strength show similarities with explicit scale symmetry breaking due to quantisation conditions.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
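The stability benefit that motivates these IMEX schemes shows up already at first order. The Python fragment below applies IMEX Euler, a toy analogue of the higher-order ARK methods above and not one of the paper's schemes, to a stiff scalar test problem: the fast decay is treated implicitly, so the step size can sit well above the explicit stability limit while the solution still tracks the slow forcing.

```python
import math

def imex_euler(y, t, h, lam=1000.0):
    """One step of first-order IMEX Euler for the stiff test problem
        y' = -lam*(y - sin t) + cos t   (exact solution: y = sin t).
    The fast linear decay -lam*y is treated implicitly; the forcing
    lam*sin(t) + cos(t) is treated explicitly, so each step needs only
    a scalar solve."""
    return (y + h * (lam * math.sin(t) + math.cos(t))) / (1.0 + h * lam)

# h is 5x the explicit Euler stability limit 2/lam, yet the scheme
# remains stable and accurate
h, y, t = 0.01, 0.0, 0.0
for _ in range(100):
    y = imex_euler(y, t, h)
    t += h
```

Fully explicit Euler at this step size would amplify errors by roughly a factor of nine per step; the IMEX step damps the fast mode instead, which is precisely the acoustic-wave argument made in the abstract.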
Taylor, Mark J; Taylor, Natasha
2014-12-01
England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.
Convergence studies of deterministic methods for LWR explicit reflector methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canepa, S.; Hursin, M.; Ferroukhi, H.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N
2017-12-01
Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering the nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable), we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
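The anisotropic constitutive law at issue can be sketched outside any FE code. The Python fragment below evaluates the Gasser-Ogden-Holzapfel strain energy for a given deformation gradient; the material parameters, fiber angles, and the incompressible uniaxial test state are illustrative assumptions, and the volumetric part is omitted for simplicity.

```python
import numpy as np

def goh_energy(F, a1, a2, mu, k1, k2, kappa):
    """Isochoric strain energy of the Gasser-Ogden-Holzapfel (GOH)
    dispersed fiber-reinforced model with two fiber families a1, a2:
        Psi = mu/2*(I1 - 3) + k1/(2*k2) * sum_i (exp(k2*E_i^2) - 1),
        E_i = kappa*(I1 - 3) + (1 - 3*kappa)*(I4i - 1),
    with a fiber term active only in tension (E_i > 0)."""
    C = F.T @ F                       # right Cauchy-Green tensor
    I1 = np.trace(C)
    psi = 0.5 * mu * (I1 - 3.0)       # neo-Hookean ground matrix
    for a in (a1, a2):
        I4 = a @ C @ a                # squared stretch along the fiber
        E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
        if E > 0.0:                   # fibers support tension only
            psi += k1 / (2.0 * k2) * (np.exp(k2 * E * E) - 1.0)
    return psi

# Incompressible uniaxial stretch along x; fibers at +/-30 deg in x-y
lam = 1.2
F = np.diag([lam, 1.0 / np.sqrt(lam), 1.0 / np.sqrt(lam)])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
psi = goh_energy(F, np.array([c, s, 0.0]), np.array([c, -s, 0.0]),
                 mu=0.05, k1=1.0, k2=10.0, kappa=0.2)
```

The exponential fiber term, evaluated at every integration point of every element each explicit step, is the main source of the extra cost relative to the neo-Hookean model that the study quantifies.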
Surface-Potential-Based Metal-Oxide-Silicon-Varactor Model for RF Applications
NASA Astrophysics Data System (ADS)
Miyake, Masataka; Sadachika, Norio; Navarro, Dondee; Mizukane, Yoshio; Matsumoto, Kenji; Ezaki, Tatsuya; Miura-Mattausch, Mitiko; Mattausch, Hans Juergen; Ohguro, Tatsuya; Iizuka, Takahiro; Taguchi, Masahiko; Kumashiro, Shigetaka; Miyamoto, Shunsuke
2007-04-01
We have developed a surface-potential-based metal-oxide-silicon (MOS) varactor model valid for RF applications up to 200 GHz. The model enables the calculation of the MOS-varactor capacitance seamlessly from the depletion region to the accumulation region and explicitly considers the carrier-response delay causing a non-quasi-static (NQS) effect. It has been observed that the capacitance reduction due to this non-quasi-static effect limits the application of MOS varactors in the RF regime.
NASA Astrophysics Data System (ADS)
Werner, Adrian D.; Robinson, Neville I.
2018-06-01
Existing analytical solutions for the distribution of fresh groundwater in subsea aquifers presume that the overlying offshore aquitard, represented implicitly, contains seawater. Here, we consider the case where offshore fresh groundwater is the result of freshwater discharge from onshore aquifers, and neglect paleo-freshwater sources. A recent numerical modeling investigation, involving explicit simulation of the offshore aquitard, demonstrates that offshore aquitards more likely contain freshwater in areas of upward freshwater leakage to the sea. We integrate this finding into the existing analytical solutions by providing an alternative formulation for steady interface flow in subsea aquifers, whereby the salinity in the offshore aquitard can be chosen. The new solution, taking the aquitard salinity as that of freshwater, provides a closer match to numerical modeling results in which the aquitard is represented explicitly.
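A classical building block of such sharp-interface solutions is the Ghyben-Herzberg relation. The one-line Python sketch below is illustrative only (the paper's offshore solution is considerably more involved): it converts a freshwater head above sea level to the depth of the freshwater-seawater interface below sea level, using typical assumed densities.

```python
def interface_depth(head, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg sharp-interface approximation: depth of the
    freshwater-seawater interface below sea level for a freshwater head
    `head` (m) above sea level. With these densities the factor
    rho_f / (rho_s - rho_f) is 40."""
    return rho_f / (rho_s - rho_f) * head

z = interface_depth(0.5)   # 0.5 m of head -> 20 m interface depth
```

The 40:1 leverage of head on interface position is why the assumed salinity (and hence density) of the water in the overlying aquitard matters so much to the offshore extent of fresh groundwater.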
Electromechanical quantum simulators
NASA Astrophysics Data System (ADS)
Tacchino, F.; Chiesa, A.; LaHaye, M. D.; Carretta, S.; Gerace, D.
2018-06-01
Digital quantum simulators are among the most appealing applications of a quantum computer. Here we propose a universal, scalable, and integrated quantum computing platform based on tunable nonlinear electromechanical nano-oscillators. It is shown that very high operational fidelities for single- and two-qubit gates can be achieved in a minimal architecture, where qubits are encoded in the anharmonic vibrational modes of mechanical nanoresonators, whose effective coupling is mediated by virtual fluctuations of an intermediate superconducting artificial atom. An effective scheme to induce large single-phonon nonlinearities in nanoelectromechanical devices is explicitly discussed, thus opening the route to experimental investigation in this direction. Finally, we explicitly show the very high fidelities that can be reached for the digital quantum simulation of model Hamiltonians, by using realistic experimental parameters in state-of-the-art devices, and considering the transverse field Ising model as a paradigmatic example.
Welch, Vivian A; Akl, Elie A; Guyatt, Gordon; Pottie, Kevin; Eslava-Schmalbach, Javier; Ansari, Mohammed T; de Beer, Hans; Briel, Matthias; Dans, Tony; Dans, Inday; Hultcrantz, Monica; Jull, Janet; Katikireddi, Srinivasa Vittal; Meerpohl, Joerg; Morton, Rachael; Mosdol, Annhild; Petkovic, Jennifer; Schünemann, Holger J; Sharaf, Ravi N; Singh, Jasvinder A; Stanev, Roger; Tonia, Thomy; Tristan, Mario; Vitols, Sigurd; Watine, Joseph; Tugwell, Peter
2017-10-01
This article introduces the rationale and methods for explicitly considering health equity in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology for the development of clinical, public health, and health system guidelines. We searched for guideline methodology articles, conceptual articles about health equity, and examples of guidelines that considered health equity explicitly. We held three meetings with GRADE Working Group members and invited comments from the GRADE Working Group listserve. We developed three articles on incorporating equity considerations into the overall approach to guideline development, rating the certainty of evidence, and assembling the evidence base and moving from evidence to decisions and/or recommendations. Clinical and public health guidelines have a role to play in promoting health equity by explicitly considering equity in the process of guideline development. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
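A minimal numerical sketch of the batch least-squares "consider" analysis described above, under a hypothetical linear measurement model. All matrices, dimensions, and parameter values here are illustrative choices of ours, not quantities from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: y = H x + Hc c + noise, where x holds the solve-for
# states and c is a "consider" parameter left unestimated.
m, n, p = 20, 2, 1
H = rng.normal(size=(m, n))     # measurement partials w.r.t. solve-for states
Hc = rng.normal(size=(m, p))    # measurement partials w.r.t. consider params
R = 0.1 * np.eye(m)             # measurement-noise covariance
Pcc = np.array([[0.5]])         # a priori consider-parameter covariance

W = np.linalg.inv(R)
Pn = np.linalg.inv(H.T @ W @ H)     # noise-only (formal) covariance
S = -Pn @ H.T @ W @ Hc              # sensitivity of the estimate to c
P = Pn + S @ Pcc @ S.T              # total "consider" covariance

print("noise-only sigmas:", np.sqrt(np.diag(Pn)))
print("consider sigmas:  ", np.sqrt(np.diag(P)))
```

The key point of a consider analysis is visible in the last line: the unestimated parameter inflates the covariance through the sensitivity matrix S, so the total sigmas are never smaller than the noise-only ones.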
Incorporating pushing in exclusion-process models of cell migration.
Yates, Christian A; Parker, Andrew; Baker, Ruth E
2015-05-01
The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
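A minimal 1D on-lattice sketch in the spirit of the volume-exclusion position-jump processes with pushing described above (not identical to the paper's models; parameters and update rule details are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(lattice, p_push=0.2):
    """One sweep of a 1D volume-exclusion position-jump process in which
    a blocked agent may push its neighbour onward with probability p_push."""
    L = len(lattice)
    for i in rng.permutation(np.flatnonzero(lattice)):
        if lattice[i] == 0:              # agent was displaced earlier this sweep
            continue
        d = int(rng.choice((-1, 1)))     # jump direction
        j = i + d
        if j < 0 or j >= L:
            continue
        if lattice[j] == 0:              # ordinary exclusion move
            lattice[i], lattice[j] = 0, 1
        else:                            # target occupied: attempt a push
            k = j + d
            if 0 <= k < L and lattice[k] == 0 and rng.random() < p_push:
                lattice[k] = 1           # neighbour displaced one site onward
                lattice[i] = 0           # pushing agent takes its old site
    return lattice

lattice = np.zeros(100, dtype=int)
lattice[40:60] = 1                       # initially crowded block of agents
for _ in range(200):
    sweep(lattice)
print("agents after 200 sweeps:", lattice.sum())
```

Averaging many such realizations over initial conditions is what the continuum partial differential equations in the abstract approximate; the pushing move is the ingredient that modifies the effective diffusion coefficient.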
An information propagation model considering incomplete reading behavior in microblog
NASA Astrophysics Data System (ADS)
Su, Qiang; Huang, Jiajia; Zhao, Xiande
2015-02-01
Microblog is one of the most popular communication channels on the Internet, and has already become the third largest source of news and public opinion in China. Although researchers have studied information propagation in microblogs using epidemic models, previous studies have not considered the incomplete reading behavior of microblog users, so these models cannot fit real situations well. In this paper, we proposed an improved model, entitled Microblog-Susceptible-Infected-Removed (Mb-SIR), for information propagation that explicitly considers the user's incomplete reading behavior. We also tested the effectiveness of the model using real data from Sina Microblog. We demonstrate that the newly proposed model is more accurate in describing information propagation in microblogs. In addition, we investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removal rate, through numerical simulations. The simulation results show that, compared with the other parameters, the reading rate plays the most influential role in information propagation performance in microblogs.
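A hedged sketch of how a reading rate can gate an SIR-type propagation model. The mean-field form and the parameter names below are ours for illustration, not necessarily the exact Mb-SIR equations.

```python
import numpy as np

def mb_sir(N=10_000, I0=10, p_read=0.6, beta=0.5, gamma=0.1,
           dt=0.1, t_max=200.0):
    """SIR-type cascade in which only a fraction p_read of exposed
    followers actually read a message before it can spread further."""
    S, I, R = float(N - I0), float(I0), 0.0
    history = []
    for _ in range(int(t_max / dt)):
        new_inf = p_read * beta * S * I / N * dt   # reading gates contagion
        new_rem = gamma * I * dt
        S -= new_inf
        I += new_inf - new_rem
        R += new_rem
        history.append((S, I, R))
    return np.array(history)

traj = mb_sir()
S_end, I_end, R_end = traj[-1]
print(f"final reached fraction: {R_end / 10_000:.2f}")
```

Because p_read multiplies the effective transmission rate directly, lowering it throttles the cascade in the same way as lowering beta, which is consistent with the abstract's finding that the reading rate is the most influential parameter.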
At the Interface: Dynamic Interactions of Explicit and Implicit Language Knowledge
ERIC Educational Resources Information Center
Ellis, Nick C.
2005-01-01
This paper considers how implicit and explicit knowledge are dissociable but cooperative. It reviews various psychological and neurobiological processes by which explicit knowledge of form-meaning associations impacts upon implicit language learning. The interface is dynamic: It happens transiently during conscious processing, but the influence…
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-07-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
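The replica-exchange idea mentioned above can be illustrated on a toy double-well potential. The Metropolis swap criterion between neighbouring temperatures is the standard one; the example is a generic sketch, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    """1D double well: a stand-in for a rugged conformational landscape."""
    return (x**2 - 1.0)**2

def remd(betas=(0.3, 1.0, 3.0), n_steps=5000, step=0.5):
    """Minimal temperature replica-exchange Metropolis sampler."""
    x = rng.normal(size=len(betas))
    swaps = 0
    for t in range(n_steps):
        for i, b in enumerate(betas):          # local move in each replica
            prop = x[i] + step * rng.normal()
            dlog = -b * (energy(prop) - energy(x[i]))
            if dlog >= 0.0 or rng.random() < np.exp(dlog):
                x[i] = prop
        if t % 10 == 0:                        # attempt one neighbour swap
            i = int(rng.integers(len(betas) - 1))
            # standard criterion: accept with prob min(1, exp[(b_i-b_j)(E_i-E_j)])
            dlog = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
            if dlog >= 0.0 or rng.random() < np.exp(dlog):
                x[i], x[i + 1] = x[i + 1], x[i]
                swaps += 1
    return x, swaps

x, swaps = remd()
print("accepted swaps out of 500 attempts:", swaps)
```

The hot replica crosses the barrier freely and, through accepted swaps, feeds decorrelated configurations down to the cold replica, which is exactly the mechanism that lets implicit-solvent models pair so well with replica exchange.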
Zhu, Qing; Iversen, Colleen M.; Riley, William J.; ...
2016-12-23
Ongoing climate warming will likely perturb vertical distributions of nitrogen availability in tundra soils by enhancing nitrogen mineralization and releasing previously inaccessible nitrogen from frozen permafrost soil. However, arctic tundra responses to such changes are uncertain, owing to a lack of vertically explicit nitrogen tracer experiments and to untested hypotheses of root nitrogen uptake under microbial competition implemented in land models. We conducted a vertically explicit 15N tracer experiment for three dominant tundra species to quantify plant N uptake profiles. We then applied a nutrient competition model (N-COM), which is being integrated into the ACME Land Model, to explain the observations. Observations using the 15N tracer showed that plant N uptake profiles were not consistently related to root biomass density profiles, which challenges the prevailing hypothesis that root density always exerts first-order control on N uptake. By considering essential root traits (e.g., biomass distribution and nutrient uptake kinetics) within an appropriate plant-microbe nutrient competition framework, our model reasonably reproduced the observed patterns of plant N uptake. Additionally, we show that nutrient competition hypotheses previously applied in Earth System Land Models fail to explain the diverse plant N uptake profiles we observed. These results cast doubt on current climate-scale model predictions of arctic plant responses to elevated nitrogen supply under a changing climate and highlight the importance of considering essential root traits in large-scale land models. Finally, we provide suggestions and a short synthesis of data availability for future trait-based land model development.
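A deliberately generic sketch of layer-by-layer plant-microbe competition for mineral N using Michaelis-Menten kinetics. This is not the actual N-COM formulation, and all function names and parameters are illustrative.

```python
import numpy as np

def n_uptake_profile(root_biomass, soil_n, vmax=1.0, km=0.5,
                     microbe_biomass=None, vmax_m=1.5, km_m=0.2):
    """Per-layer Michaelis-Menten N uptake where plants and microbes
    draw on the same pool (an illustrative stand-in for N-COM)."""
    if microbe_biomass is None:
        microbe_biomass = np.ones_like(soil_n)
    plant_demand = vmax * root_biomass * soil_n / (km + soil_n)
    microbe_demand = vmax_m * microbe_biomass * soil_n / (km_m + soil_n)
    total = plant_demand + microbe_demand
    # supply-limited: scale demands so the pool is never over-drawn
    scale = np.minimum(1.0, soil_n / np.maximum(total, 1e-12))
    return plant_demand * scale

roots = np.array([5.0, 3.0, 1.0, 0.2])    # root biomass density by depth
soil_n = np.array([0.1, 0.4, 0.8, 1.0])   # mineral N availability by depth
uptake = n_uptake_profile(roots, soil_n)
print(uptake)
```

Even in this toy version, uptake does not simply track root biomass: deep layers with little root mass but abundant N can contribute disproportionately, which is the qualitative point the tracer observations make against root-density-only hypotheses.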
A new solution method for wheel/rail rolling contact.
Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei
2016-01-01
To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed with the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of implicit and explicit algorithms. The method first calculates the pre-loading of wheel/rail rolling contact with the explicit algorithm; the results then serve as the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method, while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.
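The explicit-explicit idea, an explicit quasi-static preload run followed by an explicit transient restart, can be illustrated on a single degree of freedom. This is a schematic sketch under assumed parameters, not the FE model of the paper.

```python
def explicit_dynamics(m, k, f_ext, c, dt, n_steps, u0=0.0, v0=0.0):
    """Semi-implicit (symplectic) Euler integration of m*u'' + c*u' + k*u = f,
    a simple stand-in for the explicit time stepping of codes like LS-DYNA."""
    u, v = u0, v0
    for _ in range(n_steps):
        a = (f_ext - c * v - k * u) / m
        v += a * dt
        u += v * dt
    return u, v

m, k, f = 1.0, 100.0, 10.0
dt = 0.001                 # well below the explicit stability limit
# Stage 1 (preload): heavy damping drives the explicit run to the static
# solution u = f/k, mimicking an explicit quasi-static preload step.
u, v = explicit_dynamics(m, k, f, c=20.0, dt=dt, n_steps=20000)
print("preload displacement:", round(u, 4))   # converges to f/k = 0.1
# Stage 2 (transient): restart a lightly damped explicit run from the
# preloaded state, as in the explicit-explicit solution order.
u2, v2 = explicit_dynamics(m, k, f, c=0.5, dt=dt, n_steps=5000, u0=u, v0=v)
```

The payoff of the explicit-explicit order is that no implicit equilibrium solve is needed between the two stages: the preloaded state from stage 1 restarts stage 2 directly.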
Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence
NASA Astrophysics Data System (ADS)
Laurie, J.; Bouchet, F.; Zaboronski, O.
2012-12-01
We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to obtain large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually severely limited by the complexity of the theoretical problems. As a consequence, it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance, as the dynamics undergo qualitative macroscopic changes during such transitions.
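For orientation, the Freidlin-Wentzell structure invoked above can be written schematically for a generic diffusion (our notation, not the barotropic equations of the abstract):

```latex
% For dX_t = b(X_t)\,dt + \sqrt{2\epsilon}\,dW_t, the probability of a path
% concentrating near \phi obeys, as \epsilon \to 0,
P\bigl[X \approx \phi\bigr] \asymp \exp\!\left(-\frac{S[\phi]}{\epsilon}\right),
\qquad
S[\phi] = \frac{1}{4}\int_0^T \bigl|\dot{\phi}(t) - b(\phi(t))\bigr|^{2}\,dt .
```

The instanton is the path minimizing S subject to the rare-event constraint; the "explicit computations" of the abstract are closed-form minimizers of such an action for the barotropic model.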
State-space based analysis and forecasting of macroscopic road safety trends in Greece.
Antoniou, Constantinos; Yannis, George
2013-11-01
In this paper, macroscopic road safety trends in Greece are analyzed using state-space models and data for 52 years (1960-2011). Seemingly unrelated time series equations (SUTSE) models are developed first, followed by richer latent risk time-series (LRT) models. As reliable estimates of vehicle-kilometers are not available for Greece, the number of vehicles in circulation is used as a proxy for exposure. The alternative models considered are presented and discussed, including diagnostics for the assessment of model quality and recommendations for further enrichment. Important interventions were incorporated in the models developed (the 1986 financial crisis, the 1991 old-car exchange scheme, and the 1996 new road-fatality definition) and found statistically significant. Furthermore, forecasting results using data up to 2008 were compared with final actual data (2009-2011), indicating that the models perform properly even in unusual situations, like the current strong financial crisis in Greece. Forecasting results up to 2020 are also presented and compared with the forecasts of a model that explicitly considers the ongoing recession. Modeling the recession, and assuming that it will end by 2013, results in more reasonable estimates of risk and vehicle-kilometers for the 2020 horizon. This research demonstrates the benefits of using advanced state-space modeling techniques for modeling macroscopic road safety trends, such as allowing the explicit modeling of interventions. The challenges associated with the application of such state-of-the-art models to macroscopic phenomena, such as traffic fatalities in a region or country, are also highlighted. Furthermore, it is demonstrated that it is possible to apply such complex models using the relatively short time series that are available in macroscopic road safety analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
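State-space models of the SUTSE/LRT family are built from components such as the local linear trend. A minimal Kalman filter for that single component, run on synthetic log-fatality-like data, is sketched below; it is a generic building block, not the paper's model specification, and all variances are illustrative.

```python
import numpy as np

def kalman_llt(y, q_level=1e-3, q_slope=1e-4, r=1e-2):
    """Kalman filter for a local linear trend: level_t += slope_t + noise."""
    T = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition: level += slope
    Z = np.array([[1.0, 0.0]])               # we observe the level only
    Q = np.diag([q_level, q_slope])          # state (process) noise
    x = np.array([y[0], 0.0])                # initial state [level, slope]
    P = np.eye(2)
    filtered = []
    for obs in y:
        x, P = T @ x, T @ P @ T.T + Q        # predict
        S = Z @ P @ Z.T + r                  # innovation variance
        K = (P @ Z.T) / S                    # Kalman gain (scalar obs)
        x = x + (K * (obs - Z @ x)).ravel()  # update
        P = P - K @ Z @ P
        filtered.append(x.copy())
    return np.array(filtered)

rng = np.random.default_rng(3)
years = np.arange(52)                        # mimics the 52-year sample
y = np.log(2000) - 0.01 * years + 0.05 * rng.normal(size=52)
states = kalman_llt(y)
print("final level and slope estimates:", states[-1])
```

An LRT model stacks two such equations (exposure and risk) with correlated disturbances; interventions enter as additional regression or break terms in the state equations.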
Universal Low-energy Behavior in a Quantum Lorentz Gas with Gross-Pitaevskii Potentials
NASA Astrophysics Data System (ADS)
Basti, Giulia; Cenatiempo, Serena; Teta, Alessandro
2018-06-01
We consider a quantum particle interacting with N obstacles, whose positions are independently chosen according to a given probability density, through a two-body potential of the form N 2 V ( N x) (Gross-Pitaevskii potential). We show convergence of the N-dependent one-particle Hamiltonian to a limiting Hamiltonian where the quantum particle experiences an effective potential depending only on the scattering length of the unscaled potential and the density of the obstacles. In this sense our Lorentz gas model exhibits universal behavior for large N. Moreover, we explicitly characterize the fluctuations around the limit operator. Our model can be considered as a simplified model for the scattering of slow neutrons from condensed matter.
Hudjetz, Silvana; Lennartz, Gottfried; Krämer, Klara; Roß-Nickoll, Martina; Gergs, André; Preuss, Thomas G.
2014-01-01
The degradation of natural and semi-natural landscapes has become a matter of global concern. In Germany, semi-natural grasslands belong to the most species-rich habitat types but have suffered heavily from changes in land use. After abandonment, the course of succession at a specific site is often difficult to predict because many processes interact. In order to support decision making when managing semi-natural grasslands in the Eifel National Park, we built the WoodS-Model (Woodland Succession Model). A multimodeling approach was used to integrate vegetation dynamics in both the herbaceous and shrub/tree layers. The cover of grasses and herbs was simulated in a compartment model, whereas bushes and trees were modelled in an individual-based manner. Both models worked and interacted in a spatially explicit, raster-based landscape. We present here the model description, parameterization and testing. We show highly detailed projections of the succession of a semi-natural grassland including the influence of initial vegetation composition, neighborhood interactions and ungulate browsing. We carefully weighed the individual processes against each other and their relevance for landscape development under different scenarios, while explicitly considering specific site conditions. Model evaluation revealed that the model is able to emulate successional patterns as observed in the field as well as plausible results for different population densities of red deer. Important neighborhood interactions, such as seed dispersal, the protection of seedlings from browsing ungulates by thorny bushes, and the inhibition of wood encroachment by the herbaceous layer, have been successfully reproduced. Therefore, not only a detailed model but also detailed initialization turned out to be important for spatially explicit projections of a given site. The advantage of the WoodS-Model is that it integrates these many mutually interacting processes of succession. PMID:25494057
Generalized Nonlinear Yule Models
NASA Astrophysics Data System (ADS)
Lansky, Petr; Polito, Federico; Sacerdote, Laura
2016-11-01
With the aim of considering models related to random graph growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results, we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random, identifying both the asymptotic behavior and the finite-time correction. The mean value of this distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them to specific birth rates giving rise to saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model to account for nonlinear growth.
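The non-fractional case is easy to simulate: a Gillespie-style pure birth process with arbitrary state-dependent rates, contrasting the classical linear Yule rate with a saturating one. Rates and parameters below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(4)

def birth_process(rate, t_max, n0=1):
    """Gillespie simulation of a pure birth process: from state n, the
    next birth occurs after an Exponential(rate(n)) waiting time."""
    n, t = n0, 0.0
    while True:
        t += rng.exponential(1.0 / rate(n))
        if t > t_max:
            return n
        n += 1

lam, K = 1.0, 20.0
linear = lambda n: lam * n                       # classical Yule model
saturating = lambda n: lam * n / (1.0 + n / K)   # saturating in-link growth

n_lin = np.mean([birth_process(linear, 5.0) for _ in range(200)])
n_sat = np.mean([birth_process(saturating, 5.0) for _ in range(200)])
print(f"mean size  linear: {n_lin:.1f}   saturating: {n_sat:.1f}")
```

The linear rate yields the exponential mean growth of the Yule model, while the saturating rate caps the birth intensity near lam*K, reproducing qualitatively the saturation discussed in the abstract; the fractional version replaces the exponential waiting times with heavy-tailed ones.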
Equivalence of interest rate models and lattice gases.
Pirjol, Dan
2012-04-01
We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t)=a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2)=-Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y)=-α(e^(-γ|x-y|)-e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
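The stated equivalence is easy to check numerically for the Ornstein-Uhlenbeck case: with x(0)=0, the covariance has exactly the double-exponential form of the lattice-gas potential. The sketch below verifies the sign structure; gamma and sigma are illustrative.

```python
import numpy as np

def ou_cov(t1, t2, gamma=1.0, sigma=1.0):
    """Covariance of an Ornstein-Uhlenbeck process started at x(0)=0:
    Cov[x(t1), x(t2)] = (sigma^2/2g) * (e^{-g|t1-t2|} - e^{-g(t1+t2)})."""
    a = sigma**2 / (2.0 * gamma)
    return a * (np.exp(-gamma * np.abs(t1 - t2)) - np.exp(-gamma * (t1 + t2)))

# The equivalent lattice-gas pair potential is V(t1,t2) = -Cov[x(t1), x(t2)]:
ts = np.linspace(0.1, 5.0, 50)
T1, T2 = np.meshgrid(ts, ts)
V = -ou_cov(T1, T2)
print("potential attractive everywhere:", bool(np.all(V <= 0)))
```

Since |t1-t2| <= t1+t2 for positive times, the covariance is non-negative and the induced pair potential V is attractive throughout, matching the α(e^(-γ|x-y|)-e^(-γ(x+y))) form quoted in the abstract with α = σ²/2γ.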
Strength of the singularities, equation of state and asymptotic expansion in Kaluza-Klein space time
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Goel, Mayank; Myrzakulov, R.
2018-04-01
In this paper an explicit cosmological model which allows cosmological singularities is discussed in Kaluza-Klein space-time. The generalized power-law and asymptotic expansions of the barotropic fluid index ω and, equivalently, the deceleration parameter q in terms of cosmic time t are considered. Finally, the strength of the singularities found is discussed.
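For a power-law expansion the deceleration parameter has a simple closed form, shown schematically below (a standard special case in our notation, not the paper's generalized expansion):

```latex
a(t) \propto t^{\,n}, \qquad
q \;\equiv\; -\frac{\ddot{a}\,a}{\dot{a}^{2}} \;=\; \frac{1-n}{n},
```

so accelerated expansion (q < 0) corresponds to n > 1, and the behavior of q near t -> 0 controls the character of the singularity.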
[Addictions: Motivated or forced care].
Cottencin, Olivier; Bence, Camille
2016-12-01
Patients presenting with addictions are often obliged to consult. This constraint can be explicit (from a partner, children, parents, doctor, police, or the courts) or implicit (for their children, for their families, or for their health). Thus, beyond the fact that the caregiver faces the paradox of caring for subjects who do not ask for treatment, he also faces a double bind: being considered either an enforcer of the social order or a helper of patients. The transtheoretical model of change is complex, showing that change is neither fixed in time nor permanent for a given individual. This model accommodates ambivalence, resistance and even relapse, but it still treats constraint more as a brake than as an effective tool. The therapist must have adequate communication tools to enable everyone (coerced or not) to understand that involvement in care will allow them to regain their free will, even if coercion was the starting point. We propose in this article to detail the first steps with the patient presenting with addiction: looking for the constraint (implicit or explicit), working with the constraint, avoiding creating resistance ourselves, and making the constraint a powerful motivator for change. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Incorporating evolution of transcription factor binding sites into annotated alignments.
Bais, Abha S; Grossmann, Stefen; Vingron, Martin
2007-08-01
Identifying transcription factor binding sites (TFBSs) is essential to elucidate putative regulatory mechanisms. A common strategy is to combine cross-species conservation with single-sequence TFBS annotation to yield "conserved TFBSs". Most current methods in this field adopt a multi-step approach that segregates the two aspects. Moreover, it is widely accepted that the evolutionary dynamics of binding sites differ from those of the surrounding sequence; hence, it is desirable to have an approach that explicitly takes this factor into account. Although a plethora of approaches have been proposed for the prediction of conserved TFBSs, very few explicitly model TFBS evolutionary properties, and those that do remain multi-step. Recently, we introduced a novel approach to simultaneously align and annotate conserved TFBSs in a pair of sequences. Building upon the standard Smith-Waterman algorithm for local alignments, SimAnn introduces additional states for profiles to output extended alignments, or annotated alignments: alignments with parts annotated as gaplessly aligned TFBSs (pair-profile hits). Moreover, the pair-profile-related parameters are derived in a sound statistical framework. In this article, we extend this approach to explicitly incorporate the evolution of binding sites in the SimAnn framework. We demonstrate the extension in the theoretical derivations through two position-specific evolutionary models previously used for modelling TFBS evolution. In a simulated setting, we provide a proof of concept that the approach works given the underlying assumptions, as compared to the original work. Finally, using a real dataset of experimentally verified binding sites in human-mouse sequence pairs, we compare the new approach (eSimAnn) to an existing multi-step tool that also considers TFBS evolution.
Although it is widely accepted that binding sites evolve differently from the surrounding sequences, most comparative TFBS identification methods do not explicitly consider this. Additionally, prediction of conserved binding sites is usually carried out in a multi-step approach that segregates alignment from TFBS annotation. In this paper, we demonstrate how the simultaneous alignment and annotation approach of SimAnn can be extended to incorporate TFBS evolutionary relationships. We study how alignments and binding-site predictions interplay at varying evolutionary distances and for various profile qualities.
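The base recursion that SimAnn extends is the standard Smith-Waterman local-alignment dynamic program; a minimal score-only version is sketched below (scoring parameters are illustrative, and the extra profile states of SimAnn are omitted).

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Standard Smith-Waterman local-alignment DP (best score only)."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                     # local: never go negative
                          H[i - 1, j - 1] + s,   # (mis)match
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
            best = max(best, H[i, j])
    return best

print(smith_waterman("TGTTACGG", "GGTTGACTA"))  # best local score
```

SimAnn's extension adds, alongside the match/gap states of this recursion, states that score a gapless window against a TFBS profile (and, in eSimAnn, against a binding-site evolutionary model), so annotated segments compete with ordinary alignment moves inside the same DP.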
Three Dimensional Explicit Model for Cometary Tail Ions Interactions with Solar Wind
NASA Astrophysics Data System (ADS)
Al Bermani, M. J. F.; Alhamed, S. A.; Khalaf, S. Z.; Ali, H. Sh.; Selman, A. A.
2009-06-01
The different interactions between cometary-tail and solar-wind ions are studied in the present paper based on the three-dimensional Lax explicit method. The model used in this research is based on the continuity equations describing the cometary tail-solar wind interactions. A three-dimensional system was considered. Simulation of the physical system was achieved using computer code written in Matlab 7.0. The parameters studied here assume a Halley-type comet and include the particle density rho, the particle velocity v, the magnetic field strength B, the dynamic pressure p, and the internal energy E. The results of the present research show that the interaction near the cometary nucleus is mainly affected by the new ions added to the solar-wind plasma, which increase the average molecular weight and result in many unique characteristics of the cometary tail. These characteristics were explained in the presence of the IMF.
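The Lax (Lax-Friedrichs) explicit scheme at the core of such models is easy to state in one dimension. The sketch below advects a density pulse and checks the scheme's exact mass conservation under periodic boundaries; it is a 1D illustration with made-up parameters, not the paper's 3D system.

```python
import numpy as np

def lax_step(u, c, dx, dt):
    """One Lax (Lax-Friedrichs) step for the 1D advection equation
    u_t + c u_x = 0: average the neighbours, then correct with the flux."""
    up = np.roll(u, -1)   # u_{i+1} (periodic boundary)
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - c * dt / (2.0 * dx) * (up - um)

nx, c = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.8 * dx / c                     # CFL number 0.8 < 1 for stability
rho = np.exp(-200.0 * (x - 0.3)**2)   # initial density pulse
for _ in range(100):
    rho = lax_step(rho, c, dx, dt)
print("peak after transport:", rho.max())
```

The neighbour-averaging makes the scheme stable but numerically diffusive, which is why the peak height decays even though total mass is conserved exactly; the 3D model applies the same construction to each conserved quantity (rho, v, B, p, E).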
Narcissistic Traits and Explicit Self-Esteem: The Moderating Role of Implicit Self-View
Di Pierro, Rossella; Mattavelli, Simone; Gallucci, Marcello
2016-01-01
Objective: Whilst the relationship between narcissism and self-esteem has been studied for a long time, findings are still controversial. The majority of studies investigated narcissistic grandiosity (NG), neglecting the existence of vulnerable manifestations of narcissism. Moreover, recent studies have shown that grandiosity traits are not always associated with inflated explicit self-esteem. The aim of the present study is to investigate the relationship between narcissistic traits and explicit self-esteem, distinguishing between grandiosity and vulnerability. Moreover, we consider the role of implicit self-esteem in qualifying these associations. Method: Narcissistic traits, explicit and implicit self-esteem measures were assessed among 120 university students (55.8% women, M age = 22.55, SD = 3.03). Results: Results showed different patterns of association between narcissistic traits and explicit self-esteem, depending on phenotypic manifestations of narcissism. Narcissistic vulnerability (NV) was linked to low explicit self-evaluations regardless of one’s levels of implicit self-esteem. On the other hand, the link between NG and explicit self-esteem was qualified by levels of implicit self-views, such that grandiosity was significantly associated with inflated explicit self-evaluations only at either high or medium levels of implicit self-views. Discussion: These findings showed that the relationship between narcissistic traits and explicit self-esteem is not univocal, highlighting the importance of distinguishing between NG and NV. Finally, the study suggested that both researchers and clinicians should consider the relevant role of implicit self-views in conditioning self-esteem levels reported explicitly by individuals with grandiose narcissistic traits. PMID:27920739
A Goal Oriented Approach for Modeling and Analyzing Security Trade-Offs
NASA Astrophysics Data System (ADS)
Elahi, Golnaz; Yu, Eric
In designing software systems, security is typically only one design objective among many. It may compete with other objectives such as functionality, usability, and performance. Too often, security mechanisms such as firewalls, access control, or encryption are adopted without explicit recognition of competing design objectives and their origins in stakeholder interests. Recently, there has been increasing acknowledgement that security is ultimately about trade-offs. One can only aim for "good enough" security, given the competing demands from many parties. In this paper, we examine how conceptual modeling can provide explicit and systematic support for analyzing security trade-offs. After considering the desirable criteria for conceptual modeling methods, we examine several existing approaches for dealing with security trade-offs. From analyzing the limitations of existing methods, we propose an extension to the i* framework for security trade-off analysis, taking advantage of its multi-agent and goal orientation. The method was applied to several case studies used to exemplify existing approaches.
Mrozek, Piotr
2011-08-01
A numerical model explicitly considering the space-charge density evolved both under the mask and in the region of optical structure formation was used to predict the profiles of Ag concentration during field-assisted Ag(+)-Na(+) ion exchange channel waveguide fabrication. The influence of the unequal values of diffusion constants and mobilities of incoming and outgoing ions, the value of a correlation factor (Haven ratio), and particularly space-charge density induced during the ion exchange, on the resulting profiles of Ag concentration was analyzed and discussed. It was shown that the incorporation into the numerical model of a small quantity of highly mobile ions other than exclusively Ag(+) and Na(+) may considerably affect the range and shape of calculated Ag profiles in the multicomponent glass. The Poisson equation was used to predict the electric field spread evolution in the glass substrate. The results of the numerical analysis were verified by the experimental data of Ag concentration in a channel waveguide fabricated using a field-assisted process.
Analytical basis for planetary quarantine.
NASA Technical Reports Server (NTRS)
Schalkowsky, S.; Kline, R. C., Jr.
1971-01-01
The attempt is made to investigate quarantine constraints, and alternatives for meeting them, in sufficient detail for identifying those courses of action which compromise neither the quarantine nor the space mission objectives. Mathematical models pertinent to this goal are formulated at three distinct levels. The first level of mission constraint models pertains to the quarantine goals considered necessary by the international scientific community. The principal emphasis of modeling at this level is to quantify international considerations and to produce well-defined mission constraints. Such constraints must be translated into explicit implementation requirements by the operational agency of the launching nation. This produces the second level of implementation system modeling. However, because of the multitude of factors entering into the implementation models, it is convenient to consider these factors at the third level of implementation parameter models. These models are intentionally limited to the inclusion of only those factors which can be quantified realistically, either now or in the near future.
Transients in the synchronization of asymmetrically coupled oscillator arrays
NASA Astrophysics Data System (ADS)
Cantos, C. E.; Hammond, D. K.; Veerman, J. J. P.
2016-09-01
We consider the transient behavior of a large linear array of coupled linear damped harmonic oscillators following perturbation of a single element. Our work is motivated by modeling the behavior of flocks of autonomous vehicles. We first state a number of conjectures that allow us to derive an explicit characterization of the transients, within a certain parameter regime Ω. As corollaries we show that minimizing the transients requires considering non-symmetric coupling, and that within Ω the computed linear growth in N of the transients is independent of (reasonable) boundary conditions.
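As a rough illustration of the setting described in this abstract, the sketch below simulates a short chain of damped harmonic oscillators with asymmetric nearest-neighbour coupling and records each oscillator's peak transient excursion after perturbing a single element. All parameter values, the boundary treatment, and the integration scheme are illustrative assumptions, not the paper's.

```python
import numpy as np

def simulate_chain(N=50, k_fwd=1.2, k_bwd=0.8, damping=0.5, dt=0.01, steps=20000):
    """Chain of damped harmonic oscillators with asymmetric
    nearest-neighbour coupling; a single element is perturbed and the
    peak transient excursion of every oscillator is recorded."""
    x = np.zeros(N)
    v = np.zeros(N)
    x[0] = 1.0                      # perturb a single element
    peak = np.abs(x)                # running maximum excursion
    for _ in range(steps):
        left = np.roll(x, 1)
        left[0] = x[0]              # crude free boundary on the left
        right = np.roll(x, -1)
        right[-1] = x[-1]           # crude free boundary on the right
        a = k_bwd * (left - x) + k_fwd * (right - x) - damping * v
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
        peak = np.maximum(peak, np.abs(x))
    return peak
```

Comparing the peak profile for symmetric (k_fwd == k_bwd) versus asymmetric coupling gives a feel for how asymmetry shapes transient growth along the chain, which is the trade-off the paper characterizes analytically.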
Optimal harvesting of a stochastic delay logistic model with Lévy jumps
NASA Astrophysics Data System (ADS)
Qiu, Hong; Deng, Wenmin
2016-10-01
The optimal harvesting problem of a stochastic time delay logistic model with Lévy jumps is considered in this article. We first show that the model has a unique global positive solution and discuss the uniform boundedness of its pth moment with harvesting. Then we prove that the system is globally attractive and asymptotically stable in distribution under our assumptions. Furthermore, we obtain the existence of the optimal harvesting effort by the ergodic method, and then we give the explicit expression of the optimal harvesting policy and maximum yield.
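A minimal Euler-Maruyama sketch of a harvested stochastic delay logistic model with compound-Poisson jumps follows; the specific drift/diffusion/jump form and every parameter value are assumptions for illustration and do not reproduce the paper's exact model or its optimal harvesting policy.

```python
import numpy as np

def simulate_harvested_logistic(r=1.0, a=0.5, E=0.2, sigma=0.1, tau=1.0,
                                jump_rate=0.5, jump_scale=-0.05,
                                dt=0.01, T=50.0, x0=1.0, seed=0):
    """Euler-Maruyama path of an (assumed) harvested stochastic delay
    logistic model dX = X[(r - E - a*X(t-tau)) dt + sigma dB] + jumps,
    with compound-Poisson jumps proportional to the current state."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    lag = int(tau / dt)
    x = np.full(n + 1, x0)
    for i in range(n):
        delayed = x[max(i - lag, 0)]          # delayed density X(t - tau)
        drift = x[i] * (r - E - a * delayed)
        diffusion = sigma * x[i] * rng.normal(0.0, np.sqrt(dt))
        jump = x[i] * jump_scale * rng.poisson(jump_rate * dt)
        x[i + 1] = max(x[i] + drift * dt + diffusion + jump, 1e-8)
    return x

path = simulate_harvested_logistic()
# sustainable-yield proxy: harvesting effort times the long-run mean state
yield_estimate = 0.2 * path[len(path) // 2:].mean()
```

Sweeping the effort E and comparing the resulting long-run yields numerically mimics the search for the optimal harvesting effort that the paper derives in closed form via the ergodic method.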
2005-08-12
productivity of the islands in producing copra or fish, was not considered. The assumption is also inconsistent with the capitalization model that the value of...David Barker and Jay Wa-Aadu, “Is Real Estate Becoming Important Again? A Neo Ricardian Model of Land Rent.” Real Estate Economics, Spring, 2004, pp...the model explicit, it avoids shortcomings of the NCT methodology, by using available data from RMI’s national income and product accounts that is
Advancing reservoir operation description in physically based hydrological models
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo
2016-04-01
Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision making processes in response, e.g., to the fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production.
Results show to which extent the hydrological regime in the catchment is affected by different behavioural models and reservoir operating strategies.
NASA Astrophysics Data System (ADS)
Bykov, N. V.
2014-12-01
Numerical modelling of a ballistic setup with a tapered adapter and plastic piston is considered. The processes in the firing chamber are described within the framework of quasi-one-dimensional gas dynamics and a geometrical law of propellant burn by means of Lagrangian mass coordinates. The deformable piston is considered to be an ideal liquid with specific equations of state. The numerical solution is obtained by means of a modified explicit von Neumann scheme. The calculation results show that the ballistic setup with a tapered adapter and plastic piston increases shell muzzle velocities by a factor of 1.5-2.
AST: Activity-Security-Trust driven modeling of time varying networks.
Wang, Jian; Xu, Jiake; Liu, Yanheng; Deng, Weiwen
2016-02-18
Network modeling is a flexible mathematical structure that makes it possible to identify statistical regularities and structural principles hidden in complex systems. The majority of recent driving forces in modeling complex networks originate from activity, in which a time-invariant activity potential is introduced to identify agents' interactions and to construct an activity-driven model. However, newly emerging network evolutions are deeply coupled not only with explicit factors (e.g. activity) but also with implicit considerations (e.g. security and trust), so more intrinsic driving forces should be integrated into the modeling of time varying networks. Agents undoubtedly seek a time-dependent trade-off among activity, security, and trust when generating a new connection to another agent. We therefore propose the Activity-Security-Trust (AST) driven model, which synthetically considers the explicit and implicit driving forces (activity, security, and trust) underlying the decision process. The AST-driven model captures highly dynamical network behaviors more accurately and characterizes the complex evolution process, allowing a profound understanding of the effects of security and trust in driving network evolution, and reducing the biases induced by involving only activity representations in analyzing the dynamical processes.
Combining information from multiple flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure an adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
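The precision-weighted flavour of such multimodel combination can be sketched with a simple random-effects estimator. Here the shared between-model discrepancy variance tau2 is fixed by hand, whereas the hierarchical Bayesian approach described above infers it from the data; the numbers in the usage note are hypothetical.

```python
import numpy as np

def combine_estimates(means, variances, tau2):
    """Precision-weighted combination of multiple model estimates with a
    shared between-model discrepancy variance tau2 (fixed here for
    illustration; a hierarchical Bayesian treatment would infer it)."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / (np.asarray(variances, dtype=float) + tau2)  # precisions
    combined_mean = np.sum(w * means) / np.sum(w)
    combined_var = 1.0 / np.sum(w)
    return combined_mean, combined_var
```

For example, combining three hypothetical 100-year flood estimates of 100, 110 and 120 m3/s, each with sampling variance 25 and tau2 = 16, pools to 110 m3/s with variance 41/3, smaller than any single model's total variance of 41: this is the uncertainty-reduction effect the abstract refers to.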
Exploring the effect of the spatial scale of fishery management.
Takashina, Nao; Baskett, Marissa L
2016-02-07
For any spatially explicit management, determining the appropriate spatial scale of management decisions is critical to success at achieving a given management goal. Specifically, managers must decide how much to subdivide a given managed region: from implementing a uniform approach across the region to considering a unique approach in each of one hundred patches, and everything in between. Spatially explicit approaches, such as the implementation of marine spatial planning and marine reserves, are increasingly used in fishery management. Using a spatially explicit bioeconomic model, we quantify how the management scale affects optimal fishery profit, biomass, fishery effort, and the fraction of habitat in marine reserves. We find that, if habitats are randomly distributed, the fishery profit increases almost linearly with the number of segments. However, if habitats are positively autocorrelated, then the fishery profit increases with diminishing returns. Therefore, the true optimum in management scale, given the cost of subdivision, depends on the habitat distribution pattern.
NASA Astrophysics Data System (ADS)
Garcia, Elena
The demand for air travel is expanding beyond the capacity of the existing National Airspace System. Excess traffic results in delays and compromised safety. Thus, a number of initiatives to improve airspace capacity have been proposed. To assess the impact of these technologies on air traffic, one must move beyond the vehicle to a system-of-systems point of view. This top-level perspective must include consideration of the aircraft, airports, air traffic control and airlines that make up the airspace system. In addition to these components and their interactions, economics, safety and government regulations must also be considered. Furthermore, the air transportation system is inherently variable, with changes in everything from fuel prices to the weather. The development of a modeling environment that enables a comprehensive probabilistic evaluation of technological impacts was the subject of this thesis. The final modeling environment developed used economics as the thread to tie the airspace components together. Airport capacities and delays were calculated explicitly with due consideration to the impacts of air traffic control. The delay costs were then calculated for an entire fleet, and an airline economic analysis, considering the impact of these costs, was carried out. Airline return on investment was considered the metric of choice since it brings together all costs and revenues, including the cost of delays, landing fees for airport use and aircraft financing costs. Safety was found to require a level of detail unsuitable for a system-of-systems approach and was relegated to future airspace studies. Environmental concerns were considered to be incorporated into airport regulations and procedures and were not explicitly modeled. A deterministic case study was developed to test this modeling environment. The Atlanta airport operations for the year 2000 were used for validation purposes. 
A 2005 baseline was used as a basis for comparing the four technologies considered: a very large aircraft, Terminal Area Productivity air traffic control technologies, smoothing of an airline schedule, and the addition of a runway. A case including all four technologies simultaneously was also considered. Unfortunately, the complexity of the system prevented full exploration of the probabilistic aspects of the National Airspace System.
Explicit ions/implicit water generalized Born model for nucleic acids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.
The ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model, and utilizes a non-standard approach to defining the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes – disconnected dielectric boundary around the solute-ion or ion-ion pairs. A fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force (PMF) for the Na+-Cl− ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with all-atom explicit water molecular dynamics simulations used as reference. Expressed in the units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of the RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of the DNA duplex; these differences in the counterion binding patterns were shown earlier to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes. 
MC simulations of CoHex ions interacting with the homopolymeric poly(dA·dT) DNA duplex with modified (de-methylated) and native thymine bases are used to explore the physics behind CoHex-thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low dielectric volume of the methyl group can contribute significantly to CoHex-thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range, and may be important to consider in the context of methylation effects on DNA condensation.
Explicit ions/implicit water generalized Born model for nucleic acids
NASA Astrophysics Data System (ADS)
Tolokh, Igor S.; Thomas, Dennis G.; Onufriev, Alexey V.
2018-05-01
The ion atmosphere around highly charged nucleic acid molecules plays a significant role in their dynamics, structure, and interactions. Here we utilized the implicit solvent framework to develop a model for the explicit treatment of ions interacting with nucleic acid molecules. The proposed explicit ions/implicit water model is based on a significantly modified generalized Born (GB) model and utilizes a non-standard approach to define the solute/solvent dielectric boundary. Specifically, the model includes modifications to the GB interaction terms for the case of multiple interacting solutes—disconnected dielectric boundary around the solute-ion or ion-ion pairs. A fully analytical description of all energy components for charge-charge interactions is provided. The effectiveness of the approach is demonstrated by calculating the potential of mean force for Na+-Cl- ion pair and by carrying out a set of Monte Carlo (MC) simulations of mono- and trivalent ions interacting with DNA and RNA duplexes. The monovalent (Na+) and trivalent (CoHex3+) counterion distributions predicted by the model are in close quantitative agreement with all-atom explicit water molecular dynamics simulations used as reference. Expressed in the units of energy, the maximum deviations of local ion concentrations from the reference are within kBT. The proposed explicit ions/implicit water GB model is able to resolve subtle features and differences of CoHex distributions around DNA and RNA duplexes. These features include preferential CoHex binding inside the major groove of the RNA duplex, in contrast to CoHex binding at the "external" surface of the sugar-phosphate backbone of the DNA duplex; these differences in the counterion binding patterns were earlier shown to be responsible for the observed drastic differences in condensation propensities between short DNA and RNA duplexes. 
MC simulations of CoHex ions interacting with the homopolymeric poly(dA.dT) DNA duplex with modified (de-methylated) and native thymine bases are used to explore the physics behind CoHex-thymine interactions. The simulations suggest that the ion desolvation penalty due to proximity to the low dielectric volume of the methyl group can contribute significantly to CoHex-thymine interactions. Compared to the steric repulsion between the ion and the methyl group, the desolvation penalty interaction has a longer range and may be important to consider in the context of methylation effects on DNA condensation.
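For orientation, the standard generalized Born pairwise term (the Still et al. interpolation formula) on which modified GB models build can be sketched as follows. The paper's modifications for disconnected dielectric boundaries are not reproduced here, and the units and Born radii below are illustrative guesses.

```python
import math

def gb_pair_energy(q1, q2, r, R1, R2, eps_in=1.0, eps_out=78.5):
    """Standard generalized Born pairwise solvation-screening term
    (Still et al. interpolation formula), in Coulomb-constant-free
    units; charges in e, distance r and Born radii R1, R2 in Angstrom."""
    # f_GB interpolates between sqrt(R1*R2) at r -> 0 and r at large r
    f_gb = math.sqrt(r * r + R1 * R2 * math.exp(-r * r / (4.0 * R1 * R2)))
    return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * q1 * q2 / f_gb
```

For an oppositely charged pair such as Na+-Cl-, the term is positive (it screens the vacuum Coulomb attraction) and decays smoothly with separation, which is why GB models can reproduce the qualitative shape of an ion-pair PMF in implicit water.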
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
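The rank-frequency list whose asymptotics the paper studies can be generated by brute force for a small plain Markov chain (the hidden case would additionally marginalize over emission sequences); the two-state transition matrix below is an arbitrary illustrative choice.

```python
import itertools

import numpy as np

def trajectory_rank_frequency(P, pi, n):
    """Probabilities of all length-n trajectories of a Markov chain with
    transition matrix P and initial distribution pi, sorted in
    descending order (the rank/frequency list)."""
    K = len(pi)
    probs = []
    for traj in itertools.product(range(K), repeat=n):
        p = pi[traj[0]]
        for a, b in zip(traj, traj[1:]):
            p *= P[a][b]
        probs.append(p)
    return sorted(probs, reverse=True)

# arbitrary two-state chain for illustration
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [0.5, 0.5]
freqs = trajectory_rank_frequency(P, pi, 8)  # 2**8 = 256 trajectories

# crude log-log fit over intermediate ranks as a proxy for the Zipf exponent
ranks = np.arange(1, len(freqs) + 1)
slope, _ = np.polyfit(np.log(ranks[10:200]), np.log(freqs[10:200]), 1)
```

A finite-length fit like this only hints at the asymptotic exponent; the paper's contribution is the explicit formula for it, which a numerical sketch cannot replace.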
Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans
2017-06-01
With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data, and discussing the results and interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots, or migration from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, estimation was shown to be the additive effect of migration from adjacent plots and from the metacommunity. It was accurate only when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two, however, proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Application of migration estimates to simulate field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration, and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, we have to admit that migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. 
The parameter m of neutral models then appears more as an emergent property revealed by neutral theory than as an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.
The Impact of Aerosol Microphysical Representation in Models on the Direct Radiative Effect
NASA Astrophysics Data System (ADS)
Ridley, D. A.; Heald, C. L.
2017-12-01
Aerosol impacts the radiative balance of the atmosphere both directly and indirectly. There is considerable uncertainty remaining in the aerosol direct radiative effect (DRE), hampering understanding of the present magnitude of anthropogenic aerosol forcing and how future changes in aerosol loading will influence climate. Computationally expensive explicit aerosol microphysics are usually reserved for modelling of the aerosol indirect radiative effects that depend upon aerosol particle number. However, the direct radiative effects of aerosol are also strongly dependent upon the aerosol size distribution, especially for particles between 0.2 and 2 µm in diameter. In this work, we use a consistent model framework and consistent emissions to explore the impact of prescribed size distributions (bulk scheme) relative to explicit microphysics (sectional scheme) on the aerosol radiative properties. We consider the difference in aerosol burden, water uptake, and extinction efficiency resulting from the two representations, highlighting when and where the bulk and sectional schemes diverge significantly in their estimates of the DRE. Finally, we evaluate the modelled size distributions using in-situ measurements over a range of regimes to provide constraints on both the accumulation and coarse aerosol sizes.
Numerical quantification of habitability in serpentinizing systems
NASA Astrophysics Data System (ADS)
Som, S.; Alperin, M. J.; Hoehler, T. M.
2012-12-01
The likely presence of liquid water in contact with olivine-bearing rocks on Mars, the detection of serpentine minerals and of methane emissions possibly consistent with serpentinization, and the observation of serpentine-associated methane-cycling communities on Earth have all led to excitement over the potential of such systems to host life on Mars, even into the present day. However, the habitability of subsurface serpentinizing systems on Mars does not necessarily follow from these qualitative observations. In particular, while the production of H2 during serpentinization could provide methanogens with a needed substrate, the alkaline conditions and corresponding potential for carbon limitation that arise in concert are negatives against which H2 supply must be balanced. We considered this balance via a coupled geochemical-bioenergetic model that weighs the outputs of serpentinization against the metabolic requirements of methanogenesis, in an energetic frame of reference. Serpentinization is modeled using the "Geochemist's Workbench" (GWB), whereby ultramafic harzburgite rocks are reacted with oxygen- and sulfate-depleted seawater. Reaction kinetics are not explicitly considered, but comparable effects of partial reaction are approximated by assuming post-reaction dilution of equilibrated fluids. The output of GWB serves as the input to the bioenergetic model, which calculates methanogenic energy yields based on spherically-symmetrical diffusion of substrates to a cell followed by reaction at the diffusion-limited rate. Membrane selectivity for substrate transport is explicitly considered. We report updated results for two scenarios: (i) high-temperature serpentinization followed by cooling and transport of equilibrated fluid to a lower-temperature regime accessible to biology, and (ii) serpentinization within the biologically tolerated range of temperatures. 
Such coupled models demonstrate that environmental variability in water-rock reaction, temperature, and biologically mediated methanogenesis drives orders-of-magnitude variability in the energy available from methanogenic metabolism.
NASA Astrophysics Data System (ADS)
Ireland, Lewis G.; Browning, Matthew K.
2018-04-01
Some low-mass stars appear to have larger radii than predicted by standard 1D structure models; prior work has suggested that inefficient convective heat transport, due to rotation and/or magnetism, may ultimately be responsible. We examine this issue using 1D stellar models constructed using Modules for Experiments in Stellar Astrophysics (MESA). First, we consider standard models that do not explicitly include rotational/magnetic effects, with convective inhibition modeled by decreasing a depth-independent mixing length theory (MLT) parameter αMLT. We provide formulae linking changes in αMLT to changes in the interior specific entropy, and hence to the stellar radius. Next, we modify the MLT formulation in MESA to mimic explicitly the influence of rotation and magnetism, using formulations suggested by Stevenson and MacDonald & Mullan, respectively. We find rapid rotation in these models has a negligible impact on stellar structure, primarily because a star's adiabat, and hence its radius, is predominantly affected by layers near the surface; convection is rapid and largely uninfluenced by rotation there. Magnetic fields, if they influenced convective transport in the manner described by MacDonald & Mullan, could lead to more noticeable radius inflation. Finally, we show that these non-standard effects on stellar structure can be fabricated using a depth-dependent αMLT: a non-magnetic, non-rotating model can be produced that is virtually indistinguishable from one that explicitly parameterizes rotation and/or magnetism using the two formulations above. We provide formulae linking the radially variable αMLT to these putative MLT reformulations.
Non-linear analytic and coanalytic problems (L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
Motives and periods in Bianchi IX gravity models
NASA Astrophysics Data System (ADS)
Fan, Wentao; Fathizadeh, Farzad; Marcolli, Matilde
2018-05-01
We show that, when considering the anisotropic scaling factors and their derivatives as affine variables, the coefficients of the heat-kernel expansion of the Dirac-Laplacian on SU(2) Bianchi IX metrics are algebro-geometric periods of motives of complements in affine spaces of unions of quadrics and hyperplanes. We show that the motives are mixed Tate and we provide an explicit computation of their Grothendieck classes.
NASA Astrophysics Data System (ADS)
Huttenlau, Matthias; Schneeberger, Klaus; Winter, Benjamin; Pazur, Robert; Förster, Kristian; Achleitner, Stefan; Bolliger, Janine
2017-04-01
Devastating flood events have caused substantial economic damage across Europe during past decades. Flood risk management has therefore become a topic of crucial interest across state agencies, research communities and the public sector, including insurance companies. There is consensus that mitigating flood risk relies on impact assessments which quantitatively account for a broad range of aspects in a (changing) environment. Flood risk assessments which take into account the interaction between the drivers climate change, land-use change and socio-economic change might bring new insights to the understanding of the magnitude and spatial characteristic of flood risks. Furthermore, the comparative assessment of different adaptation measures can give valuable information for decision-making. With this contribution we present an inter- and transdisciplinary research project aiming at developing and applying such an impact assessment relying on a coupled modelling framework for the Province of Vorarlberg in Austria. Stakeholder engagement ensures that the final outcomes of our study are accepted and successfully implemented in flood management practice. The study addresses three key questions: (i) What are scenarios of land-use and climate change for the study area? (ii) How will the magnitude and spatial characteristic of future flood risk change as a result of changes in climate and land use? (iii) Are there spatial planning and building-protection measures which effectively reduce future flood risk? The modelling framework has a modular structure comprising the modules (i) climate change, (ii) land-use change, (iii) hydrologic modelling, (iv) flood risk analysis, and (v) adaptation measures. Meteorological time series are coupled with spatially explicit scenarios of land-use change to model runoff time series. The runoff time series are combined with impact indicators such as building damages, and the results are statistically assessed to analyse flood risk scenarios. 
Thus, the regional flood risk can be expressed in terms of expected annual damage and damages associated with a low probability of occurrence. We consider building protection measures explicitly as part of the consequence analysis of flood risk whereas spatial planning measures are already considered as explicit scenarios in the course of land-use change modelling.
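The final step, expressing regional flood risk as expected annual damage (EAD), amounts to integrating damage over exceedance probability. Below is a minimal sketch; the function name, return periods and damage figures are all hypothetical, not values from the study:

```python
# Sketch: expected annual damage (EAD) from scenario flood losses,
# via trapezoidal integration of damage over exceedance probability.
# Return periods and damages are illustrative, not from the study.

def expected_annual_damage(return_periods, damages):
    """Trapezoidal integration of damage over exceedance probability."""
    probs = [1.0 / t for t in return_periods]      # exceedance probability
    # Sort from the most frequent (high p) to the rarest (low p) event
    pairs = sorted(zip(probs, damages), reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d1 + d2) * (p1 - p2)
    return ead

# Hypothetical building damages (million EUR) for 30-, 100-, 300-year floods
ead = expected_annual_damage([30, 100, 300], [5.0, 20.0, 60.0])
print(round(ead, 3))
```

Damages associated with a low probability of occurrence are then simply the values at the rare-event end of the same curve.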
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, with emphasis on natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle makes it possible to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for a certain inhomogeneous Burgers-type equation. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution.
It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, that is, when the squeezing of one of the variances occurs. Such an explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
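The reduction to standard form mentioned above follows a common pattern in the transformation-theory literature. The schematic below illustrates the idea for a representative diffusion-type equation; the coefficient structure and the exact form of the Riccati-type condition are illustrative assumptions, not the dissertation's precise formulas.

```latex
% Schematic reduction (coefficients illustrative): a nonautonomous
% diffusion-type equation
\[
  \frac{\partial u}{\partial t}
    = a(t)\,\frac{\partial^2 u}{\partial x^2}
    + b(t)\,x\,\frac{\partial u}{\partial x}
    + c(t)\,u
\]
% is mapped, by a change of variables of the form
\[
  u(x,t) = \frac{1}{\sqrt{\mu(t)}}\,
           e^{\alpha(t)x^2 + \delta(t)x + \kappa(t)}\,
           v(\xi,\tau), \qquad \xi = \beta(t)\,x + \varepsilon(t),
\]
% onto the standard heat equation $v_\tau = v_{\xi\xi}$, provided the
% coefficient functions satisfy a Riccati-type system, e.g.
\[
  \frac{d\alpha}{dt} + 4\,a(t)\,\alpha^2 + 2\,b(t)\,\alpha = 0 .
\]
```

Solvability of the Cauchy problem then reduces to solvability of this ordinary-differential system for the transformation coefficients.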
Adarkwah, Charles Christian; Sadoghi, Amirhossein; Gandjour, Afschin
2016-02-01
There has been a debate on whether cost-effectiveness analysis should consider the cost of consumption and leisure time activities when using the quality-adjusted life year as a measure of health outcome under a societal perspective. The purpose of this study was to investigate whether the effects of ill health on consumptive activities are spontaneously considered in a health state valuation exercise and how much this matters. The survey enrolled patients with inflammatory bowel disease in Germany (n = 104). Patients were randomized to explicit and no explicit instruction for the consideration of consumption and leisure effects in a time trade-off (TTO) exercise. Explicit instruction to consider non-health-related utility in TTO exercises did not influence TTO scores. However, spontaneous consideration of non-health-related utility in patients without explicit instruction (60% of respondents) led to significantly lower TTO scores. Results suggest an inclusion of consumption costs in the numerator of the cost-effectiveness ratio, at least for those respondents who spontaneously consider non-health-related utility from treatment. Results also suggest that exercises eliciting health valuations from the general public may include a description of the impact of disease on consumptive activities. Copyright © 2015 John Wiley & Sons, Ltd.
High School Students' Meta-Modeling Knowledge
NASA Astrophysics Data System (ADS)
Fortus, David; Shwartz, Yael; Rosenfeld, Sherman
2016-12-01
Modeling is a core scientific practice. This study probed the meta-modeling knowledge (MMK) of high school students who study science but had not had any explicit prior exposure to modeling as part of their formal schooling. Our goals were to (A) evaluate the degree to which MMK is dependent on content knowledge and (B) assess whether the upper levels of the modeling learning progression defined by Schwarz et al. (2009) are attainable by Israeli K-12 students. Nine Israeli high school students studying physics, chemistry, biology, or general science were interviewed individually, once using a context related to the science subject that they were learning and once using an unfamiliar context. All the interviewees displayed MMK superior to that of elementary and middle school students, despite the lack of formal instruction on the practice. Their MMK was independent of content area, but their ability to engage in the practice of modeling was content dependent. This study indicates that, given proper support, the upper levels of the learning progression described by Schwarz et al. (2009) may be attainable by K-12 science students. The value of explicitly focusing on MMK as a learning goal in science education is considered.
Explicit modeling of groundwater-surface water interactions using a simple bucket-type model
NASA Astrophysics Data System (ADS)
Staudinger, Maria; Carlier, Claire; Brunner, Philip; Seibert, Jan
2017-04-01
Longer dry spells can become critical for water supply and groundwater-dependent ecosystems. During these dry spells groundwater is often the most relevant source for streams. Hence, the hydrological behavior of a catchment is often dominated by groundwater-surface water interactions, which can vary considerably in space and time. While classical hydrological approaches hardly consider this spatial dependence, quantitative hydrogeological modeling approaches can couple surface runoff processes and groundwater processes. Hydrogeological modeling can help to gain an improved understanding of catchment processes during low flow. However, due to their complex parametrization and large computational requirements, such hydrogeological models are difficult to employ at catchment scale, particularly for a larger set of catchments. Bucket-type hydrological models then remain a practical alternative. In this study we combine the strengths of both the hydrogeological and bucket-type hydrological models to better understand low flow processes and ultimately to use this knowledge for low flow projections. Bucket-type hydrological models have traditionally not been developed with a focus on the simulation of low flow. One consequence is that interactions between surface water and groundwater are not explicitly considered. Water fluxes in bucket-type hydrological models are commonly simulated only in one direction, namely from the groundwater to the stream, but not from the stream to the groundwater. This latter flux, however, can become more important during low flow situations. We therefore further developed the bucket-type hydrological model HBV to simulate low flow situations by allowing for exchange in both directions, i.e. also from the stream to the groundwater. The additional HBV exchange box is developed using a variety of synthetic hydrogeological models as a training set, generated with a fully coupled, physically based hydrogeological model.
In this way, processes that occur in different spatial settings within the catchment are translated into functional relationships, and effective parameter values for the conceptual exchange box can be extracted. Here, we show the development and evaluation of the HBV exchange box. We further show a first application in real catchments and evaluate the model performance by comparing the simulations to benchmark models that do not consider groundwater-surface water interaction.
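The bidirectional exchange idea can be sketched as a small extension of a linear groundwater bucket. The formulation, names and parameters below are hypothetical illustrations, not the actual HBV exchange box:

```python
# Sketch of a bucket-type groundwater box with bidirectional
# stream-aquifer exchange. The rate constants and the exchange law
# (linear in the head difference) are hypothetical, not the actual
# HBV exchange-box formulation.

def step(gw_storage, stream_level, recharge, k_out=0.05, k_ex=0.02):
    """One daily time step of a conceptual groundwater bucket.

    Outflow to the stream is linear in storage; an extra exchange term
    lets water flow from the stream back into the aquifer whenever the
    stream level exceeds the groundwater level.
    """
    baseflow = k_out * gw_storage                  # groundwater -> stream
    exchange = k_ex * (stream_level - gw_storage)  # signed head-difference flux
    infiltration = max(exchange, 0.0)              # stream -> groundwater only
    gw_storage += recharge + infiltration - baseflow
    discharge = baseflow - infiltration            # net contribution to streamflow
    return gw_storage, discharge

# High stream stage: part of the flow leaks back into the aquifer
s, q = step(gw_storage=10.0, stream_level=14.0, recharge=1.0)
print(round(s, 3), round(q, 3))
```

In a classical one-directional bucket the `infiltration` term would simply be absent, which is exactly the limitation discussed above.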
Consider the source: persuasion of implicit evaluations is moderated by source credibility.
Smith, Colin Tucker; De Houwer, Jan; Nosek, Brian A
2013-02-01
The long history of persuasion research shows how to change explicit, self-reported evaluations through direct appeals. At the same time, research on how to change implicit evaluations has focused almost entirely on techniques of retraining existing evaluations or manipulating contexts. In five studies, we examined whether direct appeals can change implicit evaluations in the same way as they do explicit evaluations. Across these studies, both explicit and implicit evaluations showed greater evidence of persuasion following information presented by a highly credible source than by a source low in credibility. Whereas cognitive load did not alter the effect of source credibility on explicit evaluations, source credibility had an effect on the persuasion of implicit evaluations only when participants were encouraged and able to consider information about the source. Our findings reveal the relevance of persuasion research for changing implicit evaluations and provide new ideas about the processes underlying both types of evaluation.
Experimental oligopolies modeling: A dynamic approach based on heterogeneous behaviors
NASA Astrophysics Data System (ADS)
Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.
2018-05-01
In the ranks of behavioral rules, imitation-based heuristics have received special attention in economics (see [14] and [12]). In particular, imitative behavior is considered in order to understand the evidence arising in experimental oligopolies, which reveals that the Cournot-Nash equilibrium does not emerge as the unique outcome and shows that an important component of production at the competitive level is observed (see e.g. [1,3,9] or [7,10]). Building on the pioneering approach of [2], we construct a dynamical model of linear oligopolies where the heterogeneous decision mechanisms of players are made explicit. In particular, we consider two different types of quantity-setting players, characterized by different decision mechanisms, that coexist and operate simultaneously: agents that adaptively adjust their choices in the direction that increases their profit interact with imitator agents. The latter use a particular form of the proportional imitation rule that takes into account awareness of the presence of strategic interactions. It is noteworthy that the Cournot-Nash outcome is a stationary state of our models. Our thesis is that the chaotic dynamics arising from a dynamical model with heterogeneous players can qualitatively reproduce the outcomes of experimental oligopolies.
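A minimal caricature of such heterogeneous dynamics is a linear duopoly in which one firm adjusts output along its profit gradient while the other imitates its rival. This sketch is an assumption-laden stand-in, not the authors' specification (in particular, the imitation rule here is plain partial imitation, without the strategic-awareness correction):

```python
# Caricature of a heterogeneous linear Cournot duopoly: one gradient
# (profit-adjusting) firm and one imitator. All parameters are
# illustrative, and this is not the exact model of the paper.

A, B, C = 10.0, 1.0, 2.0    # inverse demand p = A - B*(q1 + q2), unit cost C

def step(q1, q2, lam=0.2, mu=0.5):
    # Firm 1 adjusts output in the direction of increasing profit
    marginal_profit = A - C - 2 * B * q1 - B * q2
    q1_next = max(q1 + lam * marginal_profit, 0.0)
    # Firm 2 imitates: moves part way toward the rival's output
    q2_next = max(q2 + mu * (q1 - q2), 0.0)
    return q1_next, q2_next

q1, q2 = 1.0, 3.0
for _ in range(200):
    q1, q2 = step(q1, q2)
# Both approach the Cournot-Nash output (A - C) / (3B) = 8/3
print(round(q1, 3), round(q2, 3))
```

With these parameters the Cournot-Nash outcome is a stable stationary state; richer imitation rules and other parameter regions are what generate the non-Nash and chaotic outcomes discussed in the abstract.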
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using a distance function. The model is formulated so that, for viscous fluids and at low computational cost, it satisfies the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.
Accounting for system dynamics in reserve design.
Leroux, Shawn J; Schmiegelow, Fiona K A; Cumming, Steve G; Lessard, Robert B; Nagy, John
2007-10-01
Systematic conservation plans have only recently considered the dynamic nature of ecosystems. Methods have been developed to incorporate climate change, population dynamics, and uncertainty in reserve design, but few studies have examined how to account for natural disturbance. Considering natural disturbance in reserve design may be especially important for the world's remaining intact areas, which still experience active natural disturbance regimes. We developed a spatially explicit, dynamic simulation model, CONSERV, which simulates patch dynamics and fire, and used it to evaluate the efficacy of hypothetical reserve networks in northern Canada. We designed six networks based on conventional reserve design methods, with different conservation targets for woodland caribou habitat, high-quality wetlands, vegetation, water bodies, and relative connectedness. We input the six reserve networks into CONSERV and tracked the ability of each to maintain initial conservation targets through time under an active natural disturbance regime. None of the reserve networks maintained all initial targets, and some over-represented certain features, suggesting that both effectiveness and efficiency of reserve design could be improved through use of spatially explicit dynamic simulation during the planning process. Spatial simulation models of landscape dynamics are commonly used in natural resource management, but we provide the first illustration of their potential use for reserve design. Spatial simulation models could be used iteratively to evaluate competing reserve designs and select targets that have a higher likelihood of being maintained through time. Such models could be combined with dynamic planning techniques to develop a general theory for reserve design in an uncertain world.
Discriminative analysis of lip motion features for speaker identification and speech-reading.
Cetingül, H Ertan; Yemez, Yücel; Erzin, Engin; Tekalp, A Murat
2006-10-01
There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) Is using explicit lip motion information useful, and, 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage spatial and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and lip motion features prove more valuable in the speech-reading application.
Development of a Localized Low-Dimensional Approach to Turbulence Simulation
NASA Astrophysics Data System (ADS)
Juttijudata, Vejapong; Rempfer, Dietmar; Lumley, John
2000-11-01
Our previous study has shown that the localized low-dimensional model derived from a projection of the Navier-Stokes equations onto a set of one-dimensional scalar POD modes, with boundary conditions at y^+=40, can predict wall turbulence accurately for short times while failing to give a stable long-term solution. The structures obtained from the model, together with later studies, suggest that our boundary conditions from DNS are not consistent with the solution from the localized model, resulting in an injection of energy at the top boundary. In the current study, we develop low-dimensional models using one-dimensional scalar POD modes derived from an explicitly filtered DNS. This model problem has exact no-slip boundary conditions at both walls while the locality of the wall layer is still retained. Furthermore, the interaction between the wall and core regions is attenuated via an explicit filter, which allows us to investigate the quality of the model without requiring complicated modeling of the top boundary conditions. The full-channel model gives reasonable wall turbulence structures as well as long-term turbulent statistics, while still having difficulty with the prediction of the mean velocity profile farther from the wall. We also consider a localized model with modified boundary conditions in the last part of our study.
Ecohydrologic role of solar radiation on landscape evolution
NASA Astrophysics Data System (ADS)
Yetemen, Omer; Istanbulluoglu, Erkan; Flores-Cervantes, J. Homero; Vivoni, Enrique R.; Bras, Rafael L.
2015-02-01
Solar radiation has a clear signature on the spatial organization of ecohydrologic fluxes, vegetation patterns and dynamics, and landscape morphology in semiarid ecosystems. Existing landscape evolution models (LEMs) do not consider spatially explicit solar radiation as a model forcing. Here, we improve an existing LEM to represent coupled processes of energy, water, and sediment balance for semiarid fluvial catchments. To ground model predictions, a study site is selected in central New Mexico where hillslope aspect has a marked influence on vegetation patterns and landscape morphology. Model predictions are corroborated using limited field observations in central NM and other locations with similar conditions. We design a set of comparative LEM simulations to investigate the role of spatially explicit solar radiation on landscape ecohydro-geomorphic development under different uplift scenarios. Aspect-control and network-control are identified as the two main drivers of soil moisture and vegetation organization on the landscape. Landscape-scale and long-term implications of these short-term ecohydrologic patterns emerged in modeled landscapes. As north-facing slopes (NFS) get steeper by continuing uplift, they support erosion-resistant denser vegetation cover, which leads to further slope steepening until erosion and uplift attain a dynamic equilibrium. Conversely, on south-facing slopes (SFS), as slopes grow with uplift, increased solar radiation exposure with slope supports sparser biomass and shallower slopes. At the landscape scale, these differential erosion processes lead to asymmetric development of catchment forms, consistent with regional observations. This improved understanding of ecohydro-geomorphic evolution will help in assessing the impacts of past and future climates on landscape response and morphology.
Optimization of wood plastic composite decks
NASA Astrophysics Data System (ADS)
Ravivarman, S.; Venkatesh, G. S.; Karmarkar, A.; Shivkumar N., D.; Abhilash R., M.
2018-04-01
Wood Plastic Composite (WPC) is a new class of natural-fibre-based composite material that contains a plastic matrix reinforced with wood fibres or wood flour. In the present work, Wood Plastic Composite was prepared with 70 wt% of wood flour reinforced in a polypropylene matrix. Mechanical characterization of the composite was done by carrying out laboratory tests such as tensile and flexural tests as per the American Society for Testing and Materials (ASTM) standards. A Computer Aided Design (CAD) model of the laboratory test specimen (tensile test) was created and explicit finite element analysis was carried out on the finite element model in the non-linear explicit FE code LS-DYNA. The piecewise linear plasticity (MAT 24) material model was identified as a suitable model in the LS-DYNA material library for describing the behavior of the developed composite. The composite structures for decking applications in the construction industry were then optimized for cross-sectional area and distance between two successive supports (span length) by carrying out various numerical experiments in LS-DYNA. The optimized WPC deck (Elliptical channel-2 E10) weighs 45% less than the baseline model (solid cross-section) considered in this study, with its load carrying capacity meeting the acceptance criteria (allowable deflection and stress) for outdoor decking applications.
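The acceptance criteria mentioned (allowable deflection and stress) can be illustrated with a textbook simply-supported-beam check against a span. Everything below (section properties, loads, limits, the L/360 deflection ratio) is hypothetical and unrelated to the actual WPC test data:

```python
# Illustrative acceptance check for a deck-board span, modeled as a
# simply supported beam under uniform load. All values are
# hypothetical, not the WPC data or criteria from the study.

def deck_check(w, L, E, I, c, allowable_stress, allowable_defl_ratio=360.0):
    """Return (passes, midspan deflection, max bending stress)."""
    deflection = 5 * w * L**4 / (384 * E * I)   # midspan deflection
    moment = w * L**2 / 8                       # max bending moment
    stress = moment * c / I                     # stress at extreme fibre
    passes = (deflection <= L / allowable_defl_ratio
              and stress <= allowable_stress)
    return passes, deflection, stress

# Hypothetical board: w in N/mm, L in mm, E in MPa, I in mm^4, c in mm
ok, d, s = deck_check(w=2.0, L=400.0, E=3500.0, I=1.2e5, c=15.0,
                      allowable_stress=15.0)
print(ok, round(d, 3), round(s, 3))
```

In the paper this kind of check is done numerically in LS-DYNA rather than with beam formulas, with the span and cross-section as the optimization variables.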
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, a particle method in the novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behavior of the model sample at the macroscale.
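The step-by-step upscaling can be caricatured in a few lines: each level draws representative-sample strengths from the Weibull statistics identified at the finer level. All names and parameter values below are illustrative assumptions; the real model runs movable-cellular-automaton simulations at every level rather than sampling a closed-form distribution:

```python
# Caricature of the hierarchical upscaling: at each scale level the
# effective strength is drawn from the Weibull distribution identified
# one level below. Parameters are illustrative only.
import random
import statistics

def level_statistics(scale, shape, n_samples, rng):
    """Summarize simulated sample strengths at one scale level."""
    strengths = [rng.weibullvariate(scale, shape) for _ in range(n_samples)]
    return statistics.mean(strengths), statistics.stdev(strengths)

rng = random.Random(42)
scale, shape = 800.0, 10.0     # Weibull scale (MPa-like) and modulus
for level in range(3):         # one level per additional pore-size maximum
    mean_s, sd_s = level_statistics(scale, shape, 200, rng)
    print(f"level {level}: mean strength {mean_s:.1f}, sd {sd_s:.1f}")
    # Assumed knock-down: explicitly resolved larger pores degrade the
    # effective scale parameter passed to the next level
    scale = 0.8 * mean_s
```

The printed means decrease from level to level, mirroring how macroscale strength ends up below the strength of the least-porous representative volume.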
Numerical approach to optimal portfolio in a power utility regime-switching model
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Koleva, Miglena N.; Vulkov, Lubin G.
2017-12-01
We consider a system of weakly coupled degenerate semi-linear parabolic equations arising in optimal portfolio selection in a regime-switching model with a power utility function, derived by A.R. Valdez and T. Vargiolu [14]. First, we discuss some basic properties of the solution of this system. Then, we develop and analyze implicit-explicit, flux-limited finite difference schemes for the differential problem. Numerical experiments are discussed.
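The implicit-explicit idea can be sketched on a simpler scalar problem: treat the stiff diffusion term implicitly (one tridiagonal solve per step) and the remaining terms explicitly. The example below is a generic IMEX Euler step for u_t = u_xx + f(u), without the flux limiter and without the coupled regime-switching structure of the paper:

```python
# Sketch of one implicit-explicit (IMEX) Euler step for a 1D
# semi-linear parabolic equation u_t = u_xx + f(u): implicit diffusion
# (tridiagonal solve), explicit source. Generic illustration, not the
# paper's scheme.
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a sub-, b main, c super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def imex_step(u, dt, dx, f):
    """Advance interior nodes one step; homogeneous Dirichlet walls."""
    n, r = len(u), dt / dx**2
    a = [-r] * n; a[0] = 0.0           # sub-diagonal
    b = [1.0 + 2.0 * r] * n            # main diagonal (implicit diffusion)
    c = [-r] * n; c[-1] = 0.0          # super-diagonal
    rhs = [ui + dt * f(ui) for ui in u]  # explicit source term
    return thomas(a, b, c, rhs)

dx, dt = 0.1, 0.01
u = [math.sin(math.pi * (i + 1) * dx) for i in range(9)]  # nodes on (0, 1)
u = imex_step(u, dt, dx, lambda v: -0.5 * v)              # mild decay source
print(round(max(u), 4))   # sine-mode amplitude after one damped step
```

The implicit treatment of diffusion is what removes the severe time-step restriction; the explicit part keeps the (possibly nonlinear, possibly coupling) source terms cheap to evaluate.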
Oscillations and stability of numerical solutions of the heat conduction equation
NASA Technical Reports Server (NTRS)
Kozdoba, L. A.; Levi, E. V.
1976-01-01
The mathematical model and results of numerical solutions are given for the one-dimensional problem when the linear equations are written in a rectangular coordinate system. All the computations are easily realizable for two- and three-dimensional problems when the equations are written in any coordinate system. Stability and oscillation criteria for explicit and implicit schemes are shown in tabular form; the initial temperature distribution is considered uniform.
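The stability behavior tabulated in such studies is easy to demonstrate numerically: the explicit (forward-in-time, centered-in-space) scheme for u_t = u_xx is stable only when r = Δt/Δx² ≤ 1/2, while larger r produces growing oscillations. A small sketch (grid size and initial condition are arbitrary choices, not from the paper):

```python
# Sketch: stability of the explicit scheme for the heat equation.
# Stable for r = dt/dx^2 <= 1/2; oscillatory blow-up beyond that.

def ftcs(u, r, steps):
    """Explicit time stepping with fixed zero end values."""
    u = list(u)
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = nxt
    return u

spike = [0.0] * 5 + [1.0] + [0.0] * 5        # initial temperature spike
stable = ftcs(spike, 0.4, 50)                # r <= 1/2: smooth decay
unstable = ftcs(spike, 0.6, 50)              # r > 1/2: oscillatory growth
print(max(abs(v) for v in stable) < 1.0,
      max(abs(v) for v in unstable) > 10.0)  # prints: True True
```

An implicit scheme, by contrast, remains stable for any r, at the cost of a linear solve per step; the oscillation criteria concern the sign changes that can still appear in stable solutions when r is large.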
Merging information from multi-model flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
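The borrowing-of-strength idea behind such hierarchies can be caricatured with a normal-normal shrinkage step: each model's estimate is pulled toward the ensemble mean by an amount set by the within- and between-model variances. This toy sketch is far simpler than the paper's framework, and all numbers are hypothetical:

```python
# Greatly simplified illustration of partial pooling across models:
# a normal-normal hierarchy shrinks each model's flood-magnitude
# estimate toward the ensemble mean, weighting by precision. A toy
# sketch, not the paper's hierarchical Bayesian framework.
import statistics

def partial_pool(estimates, within_var, between_var):
    """Posterior means for each model's 'true' value under shrinkage."""
    grand = statistics.mean(estimates)
    w = between_var / (between_var + within_var)   # shrinkage weight
    return [grand + w * (e - grand) for e in estimates]

# Hypothetical 100-year flood estimates (m^3/s) from four models
est = [410.0, 455.0, 430.0, 505.0]
pooled = partial_pool(est, within_var=900.0, between_var=400.0)
print([round(p, 1) for p in pooled])
```

Noisy outlying models are pulled in most strongly, which is the intuition behind advantage 3) above: individually weak predictions still sharpen the collective estimate.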
Christoforou, Paraskevi S; Ashforth, Blake E
2015-01-01
We argue that the strength with which the organization communicates expectations regarding the appropriate emotional expression toward customers (i.e., explicitness of display rules) has an inverted U-shaped relationship with service delivery behaviors, customer satisfaction, and sales performance. Further, we argue that service organizations need a particular blend of explicitness of display rules and role discretion for the purpose of optimizing sales performance. As hypothesized, findings from 2 samples of salespeople suggest that either high or low explicitness of display rules impedes service delivery behaviors and sales performance, which peaks at moderate explicitness of display rules and high role discretion. The findings also suggest that the explicitness of display rules has a positive relationship with customer satisfaction. (c) 2015 APA, all rights reserved.
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map, periodicity regions associated with attracting cycles of different periods are organized in period-adding and period-incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period-incrementing structures, which is a novelty for the considered class of maps. Moreover, we show that if the time delay in the discrete-time formulation of the model shrinks to zero, the number of period-incrementing structures tends to infinity and the dynamics of the discrete-time fashion cycle model converge to those of the continuous-time fashion cycle model.
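The flavor of such maps can be conveyed with a generic 2D discontinuous piecewise linear map (not the fashion-cycle map itself; the branches and constants below are invented) together with a routine that detects the period of the attracting cycle:

```python
# Generic 2D discontinuous piecewise linear map (NOT the fashion-cycle
# map of the paper): the linear part is identical on both sides of the
# border x = 0, only the additive constant jumps.

def f(x, y):
    jump = 1.0 if x < 0 else -1.0        # discontinuity at the border
    return 0.5 * x + 0.2 * y + jump, x   # second variable is a delay

def attractor_period(x, y, transient=1000, max_period=64, tol=1e-9):
    """Iterate past a transient, then look for the cycle period."""
    for _ in range(transient):
        x, y = f(x, y)
    x0, y0 = x, y
    for p in range(1, max_period + 1):
        x, y = f(x, y)
        if abs(x - x0) < tol and abs(y - y0) < tol:
            return p
    return None                          # aperiodic or longer period

print(attractor_period(0.1, 0.1))        # this map settles on a 2-cycle
```

Sweeping the map's parameters and recording `attractor_period` over a grid is the numerical counterpart of the periodicity regions whose boundaries the paper derives analytically from border collision bifurcations.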
A dual-loop model of the human controller
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A representative model of the human controller in single-axis compensatory tracking tasks that exhibits an internal feedback loop which is not evident in single-loop models now in common use is presented. This hypothetical inner-loop involves a neuromuscular command signal derived from the time rate of change of controlled element output which is due to control activity. It is not contended that the single-loop human controller models now in use are incorrect, but that they contain an implicit but important internal loop closure, which, if explicitly considered, can account for a good deal of the adaptive nature of the human controller in a systematic manner.
Distinguishing neutrino mass hierarchies using dark matter annihilation signals at IceCube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allahverdi, Rouzbeh; Knockel, Bradley; Dutta, Bhaskar
2015-12-01
We explore the possibility of distinguishing neutrino mass hierarchies through the neutrino signal from dark matter annihilation at neutrino telescopes. We consider a simple extension of the standard model where the neutrino masses and mixing angles are obtained via the type-II seesaw mechanism as an explicit example. We show that future extensions of IceCube neutrino telescope may detect the neutrino signal from DM annihilation at the Galactic Center and inside the Sun, and differentiate between the normal and inverted mass hierarchies, in this model.
A flow-control mechanism for distributed systems
NASA Technical Reports Server (NTRS)
Maitan, J.
1991-01-01
A new approach to the rate-based flow control in store-and-forward networks is evaluated. Existing methods display oscillations in the presence of transport delays. The proposed scheme is based on the explicit use of an embedded dynamic model of a store-and-forward buffer in a controller's feedback loop. It is shown that the use of the model eliminates the oscillations caused by the transport delays. The paper presents simulation examples and assesses the applicability of the scheme in the new generation of high-speed photonic networks where transport delays must be considered.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
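For a finite-state chain with stationary law pi, the entropy production rate has the explicit form e_p = sum_ij pi_i P_ij ln[(pi_i P_ij)/(pi_j P_ji)], which vanishes exactly when detailed balance (reversibility) holds. A small self-contained illustration; the transition matrices are toy examples, not inferred maximum entropy models from spike trains:

```python
# Sketch: information entropy production of a stationary Markov chain.
# A reversible chain (detailed balance) gives zero entropy production;
# a cyclically driven chain gives a strictly positive value.
import math

def stationary(P, iters=10000):
    """Power-iterate a row-stochastic matrix to its stationary law."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production(P):
    pi = stationary(P)
    n = len(P)
    ep = 0.0
    for i in range(n):
        for j in range(n):
            if P[i][j] > 0 and P[j][i] > 0:
                ep += pi[i] * P[i][j] * math.log(
                    (pi[i] * P[i][j]) / (pi[j] * P[j][i]))
    return ep

# Irreversible 3-state chain: probability circulates around a cycle
P_irrev = [[0.1, 0.8, 0.1],
           [0.1, 0.1, 0.8],
           [0.8, 0.1, 0.1]]
# Symmetric chain: detailed balance holds, so e_p vanishes
P_rev = [[0.5, 0.25, 0.25],
         [0.25, 0.5, 0.25],
         [0.25, 0.25, 0.5]]
print(round(entropy_production(P_irrev), 3),
      round(entropy_production(P_rev), 3))
```

Positive entropy production is precisely the irreversibility signature discussed in the abstract; the large-deviation machinery then describes how the empirical estimate of this quantity fluctuates with sample size.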
The Emergence of Organizing Structure in Conceptual Representation.
Lake, Brenden M; Lawrence, Neil D; Tenenbaum, Joshua B
2018-06-01
Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form-where form could be a tree, ring, chain, grid, etc. (Kemp & Tenenbaum, 2008). Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Given that the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack intuitive description, as well as predict human property induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist. Copyright © 2018 Cognitive Science Society, Inc.
Local approximation of a metapopulation's equilibrium.
Barbour, A D; McVinish, R; Pollett, P K
2018-04-18
We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
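For reference, Levins's non-spatial model, whose equilibrium occupation probability the spatial model above is shown to approach locally, has occupancy dynamics dp/dt = c p(1 - p) - e p with equilibrium p* = 1 - e/c. A small sketch (parameter values are illustrative):

```python
# Levins's metapopulation model: p is the fraction of occupied patches,
# c the colonization rate, e the extinction rate. Simple forward-Euler
# integration converges to the equilibrium p* = 1 - e/c (for c > e).
def levins_equilibrium(c, e, p0=0.5, dt=0.01, steps=20000):
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)   # dp/dt = c p (1-p) - e p
    return p

p_star = levins_equilibrium(c=1.0, e=0.3)
print(round(p_star, 3))   # 0.7, i.e. 1 - e/c
```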
Visual sensory networks and effective information transfer in animal groups.
Strandburg-Peshkin, Ariana; Twomey, Colin R; Bode, Nikolai W F; Kao, Albert B; Katz, Yael; Ioannou, Christos C; Rosenthal, Sara B; Torney, Colin J; Wu, Hai Shan; Levin, Simon A; Couzin, Iain D
2013-09-09
Social transmission of information is vital for many group-living animals, allowing coordination of motion and effective response to complex environments. Revealing the interaction networks underlying information flow within these groups is a central challenge. Previous work has modeled interactions between individuals based directly on their relative spatial positions: each individual is considered to interact with all neighbors within a fixed distance (metric range), a fixed number of nearest neighbors (topological range), a 'shell' of near neighbors (Voronoi range), or some combination (Figure 1A). However, conclusive evidence to support these assumptions is lacking. Here, we employ a novel approach that considers individual movement decisions to be based explicitly on the sensory information available to the organism. In other words, we consider that while spatial relations do inform interactions between individuals, they do so indirectly, through individuals' detection of sensory cues. We reconstruct computationally the visual field of each individual throughout experiments designed to investigate information propagation within fish schools (golden shiners, Notemigonus crysoleucas). Explicitly considering visual sensing allows us to more accurately predict the propagation of behavioral change in these groups during leadership events. Furthermore, we find that structural properties of visual interaction networks differ markedly from those of metric and topological counterparts, suggesting that previous assumptions may not appropriately reflect information flow in animal groups. Copyright © 2013 Elsevier Ltd. All rights reserved.
Terry, Alan J; Sturrock, Marc; Dale, J Kim; Maroto, Miguel; Chaplain, Mark A J
2011-02-28
In the vertebrate embryo, tissue blocks called somites are laid down in head-to-tail succession, a process known as somitogenesis. Research into somitogenesis has been both experimental and mathematical. For zebrafish, there is experimental evidence for oscillatory gene expression in cells in the presomitic mesoderm (PSM) as well as evidence that Notch signalling synchronises the oscillations in neighbouring PSM cells. A biological mechanism has previously been proposed to explain these phenomena. Here we have converted this mechanism into a mathematical model of partial differential equations in which the nuclear and cytoplasmic diffusion of protein and mRNA molecules is explicitly considered. By performing simulations, we have found ranges of values for the model parameters (such as diffusion and degradation rates) that yield oscillatory dynamics within PSM cells and that enable Notch signalling to synchronise the oscillations in two touching cells. Our model contains a Hill coefficient that measures the co-operativity between two proteins (Her1, Her7) and three genes (her1, her7, deltaC) which they inhibit. This coefficient appears to be bounded below by the requirement for oscillations in individual cells and bounded above by the requirement for synchronisation. Consistent with experimental data and a previous spatially non-explicit mathematical model, we have found that signalling can increase the average level of Her1 protein. Biological pattern formation would be impossible without a certain robustness to variety in cell shape and size; our results possess such robustness. Our spatially-explicit modelling approach, together with new imaging technologies that can measure intracellular protein diffusion rates, is likely to yield significant new insight into somitogenesis and other biological processes.
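The co-operativity mentioned above is conventionally encoded by a Hill function; a minimal sketch (the threshold K and the specific values are assumptions for illustration, not parameters from the paper):

```python
# Hill-type repression: proteins inhibit transcription with co-operativity
# measured by the Hill coefficient n; a larger n gives a sharper switch
# around the (assumed) repression threshold K.
def hill_repression(protein, K, n):
    return 1.0 / (1.0 + (protein / K) ** n)

print(hill_repression(2.0, 2.0, 4) == 0.5)   # True: half-maximal at K
# above threshold, higher co-operativity represses more strongly
print(hill_repression(4.0, 2.0, 8) < hill_repression(4.0, 2.0, 2))  # True
```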
Thermodynamic Modeling of Gas Transport in Glassy Polymeric Membranes.
Minelli, Matteo; Sarti, Giulio Cesare
2017-08-19
Solubility and permeability of gases in glassy polymers have been considered with the aim of illustrating the applicability of thermodynamically-based models for their description and prediction. The solubility isotherms are described by using the nonequilibrium lattice fluid (NELF) model, already known to be appropriate for nonequilibrium glassy polymers, while the permeability isotherms are described through a general transport model in which diffusivity is the product of a purely kinetic factor, the mobility coefficient, and a thermodynamic factor. The latter is calculated from the NELF model, and the mobility is considered concentration-dependent through an exponential relationship containing only two parameters. The models are tested explicitly against solubility and permeability data for various penetrants in three glassy polymers, PSf, PPh and 6FDA-6FpDA, selected as references for different behaviors. It is shown that the models are able to reproduce the different behaviors observed, and in particular the dependence of permeability on upstream pressure, both when it is decreasing and when it is increasing, with no need to invoke the onset of additional plasticization phenomena. The correlations found between polymer and penetrant properties and the two parameters of the mobility coefficient also give the transport model predictive ability.
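A minimal sketch of the factorization described above, assuming (as the abstract states) an exponential, two-parameter mobility; all numerical values are illustrative rather than fitted NELF parameters:

```python
import math

# Diffusivity as the product of a mobility coefficient (purely kinetic)
# and a thermodynamic factor, with the mobility taken concentration-
# dependent through a two-parameter exponential law L(c) = L0*exp(beta*c).
def diffusivity(c, L0, beta, thermo_factor):
    return L0 * math.exp(beta * c) * thermo_factor

# a positive beta makes the mobility, and hence D, grow with concentration
print(diffusivity(1.0, 1e-8, 0.5, 1.2) > diffusivity(0.0, 1e-8, 0.5, 1.2))
```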
Neutron-γ competition for β-delayed neutron emission
Mumpower, Matthew Ryan; Kawano, Toshihiko; Moller, Peter
2016-12-19
Here we present a coupled quasiparticle random phase approximation and Hauser-Feshbach (QRPA+HF) model for calculating delayed particle emission. This approach uses microscopic nuclear structure information, which starts with Gamow-Teller strength distributions in the daughter nucleus and then follows the statistical decay until the initial available excitation energy is exhausted. γ-ray competition is explicitly included at each particle emission stage. We explore this model in the context of neutron emission from neutron-rich nuclei and find that neutron-γ competition can lead to both increases and decreases in neutron emission probabilities, depending on the system considered. Finally, a second consequence of this formalism is a prediction of more neutrons on average being emitted after β decay for nuclei near the neutron drip line, compared to models that do not consider the statistical decay.
Yang, Yuanyuan; Zhang, Shuwen; Liu, Yansui; Xing, Xiaoshi; de Sherbinin, Alex
2017-01-01
Historical land use information is essential to understanding the impact of anthropogenic modification of land use/cover on the temporal dynamics of environmental and ecological issues. However, because they lack spatial explicitness, complete thematic detail and the conversion types of historical land use changes, the majority of historical land use reconstructions do not sufficiently meet the requirements of adequate modelling. To address these shortcomings, we explored the possibility of constructing a spatially-explicit modeling framework (HLURM: Historical Land Use Reconstruction Model). A three-map comparison method was then adopted to validate the projected reconstruction map. The reconstruction suggested that the HLURM model performed well in the spatial reconstruction of various land-use categories, and had a higher figure of merit (48.19%) than models used in other case studies. The largest land use/cover type in the study area was determined to be grassland, followed by arable land and wetland. Using the three-map comparison, we noticed that the major discrepancies in land use changes among the three maps resulted from inconsistencies in the classification of land-use categories during the study period, rather than from the simulation model. PMID:28134342
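In standard three-map comparison practice (e.g. Pontius-style validation), the figure of merit quoted above is the ratio of correctly predicted change to the union of observed and predicted change. A sketch with invented counts (chosen only so the ratio lands near the reported 48.19%):

```python
# Figure of merit for map-comparison validation: hits are correctly
# predicted change cells, misses are observed-but-unpredicted change,
# false alarms are predicted-but-unobserved change. Counts are invented.
def figure_of_merit(hits, misses, false_alarms):
    return 100.0 * hits / (hits + misses + false_alarms)

print(round(figure_of_merit(40, 23, 20), 2))   # 48.19
```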
Simulating Space Capsule Water Landing with Explicit Finite Element Method
NASA Technical Reports Server (NTRS)
Wang, John T.; Lyle, Karen H.
2007-01-01
A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.
AST: Activity-Security-Trust driven modeling of time varying networks
Wang, Jian; Xu, Jiake; Liu, Yanheng; Deng, Weiwen
2016-01-01
Network modeling provides a flexible mathematical structure for identifying statistical regularities and structural principles hidden in complex systems. Most recent driving forces in modeling complex networks originate from activity, in which an activity potential given by a time-invariant function is introduced to identify agents' interactions and to construct an activity-driven model. However, newly emerging network evolutions are deeply coupled not only with explicit factors (e.g. activity) but also with implicit considerations (e.g. security and trust), so more intrinsic driving forces should be integrated into the modeling of time-varying networks. Agents undoubtedly seek a time-dependent trade-off among activity, security, and trust when generating a new connection. We therefore propose the Activity-Security-Trust (AST) driven model, which synthetically considers the explicit and implicit driving forces (activity, security, and trust) underlying the decision process. The AST-driven model captures highly dynamical network behaviors more accurately and elucidates the complex evolution process, allowing a profound understanding of the effects of security and trust in driving network evolution and reducing the biases induced by involving only activity representations in analyzing the dynamical processes. PMID:26888717
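For context, the baseline activity-driven mechanism that AST extends can be sketched as follows; here partner choice is uniform, whereas the AST model would weight it by security and trust (the function and parameter names are ours, not the paper's):

```python
import random

# One step of a plain activity-driven network (Perra et al. style):
# each agent i activates with probability a_i and links to m uniformly
# chosen partners. The uniform partner choice is a simplification; AST
# would bias it by security and trust scores.
def activity_driven_step(activities, m, rng):
    n = len(activities)
    edges = set()
    for i, a in enumerate(activities):
        if rng.random() < a:                       # agent i becomes active
            partners = rng.sample([k for k in range(n) if k != i], m)
            edges.update((min(i, j), max(i, j)) for j in partners)
    return edges

rng = random.Random(1)
edges = activity_driven_step([0.9, 0.1, 0.5, 0.7], m=2, rng=rng)
print(all(i < j for i, j in edges))                # True: no self-loops
```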
CDPOP: A spatially explicit cost distance population genetics program
Erin L. Landguth; S. A. Cushman
2010-01-01
Spatially explicit simulation of gene flow in complex landscapes is essential to explain observed population responses and provide a foundation for landscape genetics. To address this need, we wrote a spatially explicit, individual-based population genetics model (CDPOP). The model implements individual-based population modelling with Mendelian inheritance and k-allele...
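The Mendelian, k-allele inheritance step mentioned above can be sketched in a few lines; the genotype encoding and names are illustrative, not CDPOP's actual API:

```python
import random

# Mendelian inheritance under a k-allele scheme: a genotype is a list of
# loci, each a 2-tuple of (integer-coded) alleles, and each parent
# transmits one randomly chosen allele per locus to the offspring.
def mate(parent_a, parent_b, rng):
    return [(rng.choice(a), rng.choice(b))
            for a, b in zip(parent_a, parent_b)]

rng = random.Random(42)
pa = [(0, 1), (2, 2)]          # genotype of parent A at two loci
pb = [(1, 1), (0, 3)]          # genotype of parent B
child = mate(pa, pb, rng)
# every inherited allele must come from the corresponding parent
print(all(c[0] in a and c[1] in b for c, a, b in zip(child, pa, pb)))  # True
```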
A quantification model for the structure of clay materials.
Tang, Liansheng; Sang, Haitao; Chen, Haokun; Sun, Yinlei; Zhang, Longjian
2016-07-04
In this paper, the quantification of clay structure is explicitly explained, and the approach and goals of quantification are discussed. We consider that the purpose of quantifying clay structure is to determine parameters that can be used to quantitatively characterize the impact of clay structure on macro-mechanical behaviour. According to systems theory and the law of energy conservation, a quantification model for the structure characteristics of clay materials is established, and three quantitative parameters (i.e., deformation structure potential, strength structure potential and comprehensive structure potential) are proposed; the corresponding tests were conducted. The experimental results show that these quantitative parameters accurately reflect, respectively, the influence of clay structure on deformation behaviour, its influence on strength behaviour, and the relative magnitude of the structural influence on these two behaviours. These quantitative parameters have explicit mechanical meanings and can be used to characterize the structural influence of clay on its mechanical behaviour.
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.
2017-09-01
Continuous fiber-reinforced composites are important materials with among the highest commercial potential of existing advanced materials in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established, owing to the complexity of their compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage model of a fibrous composite that combines analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered the most influential factors in the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using a modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.
The MUSIC algorithm for impedance tomography of small inclusions from discrete data
NASA Astrophysics Data System (ADS)
Lechleiter, A.
2015-09-01
We consider a point-electrode model for electrical impedance tomography and show that current-to-voltage measurements from finitely many electrodes are sufficient to characterize the positions of a finite number of point-like inclusions. More precisely, we consider an asymptotic expansion with respect to the size of the small inclusions of the relative Neumann-to-Dirichlet operator in the framework of the point electrode model. This operator is naturally finite-dimensional and models difference measurements by finitely many small electrodes of the electric potential with and without the small inclusions. Moreover, its leading-order term explicitly characterizes the centers of the small inclusions if the (finite) number of point electrodes is large enough. This characterization is based on finite-dimensional test vectors and leads naturally to a MUSIC algorithm for imaging the inclusion centers. We show both the feasibility and limitations of this imaging technique via two-dimensional numerical experiments, considering in particular the influence of the number of point electrodes on the algorithm’s images.
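To make the MUSIC idea concrete in a generic setting (a 1-D sensor-array analogue, not the impedance-tomography operator itself): steering vectors at the true source positions are orthogonal to the noise subspace of the measurement covariance, so the pseudospectrum 1/||En^H a(θ)|| peaks at those positions. All values below are invented for illustration:

```python
import numpy as np

# Generic MUSIC sketch: eigendecompose an idealized covariance, take the
# noise subspace, and scan a grid of candidate positions for pseudospectrum
# peaks. This mirrors how the paper's test vectors probe the relative
# Neumann-to-Dirichlet operator, but in a simpler array-processing setting.
m = 8                                          # number of sensors/electrodes
true_angles = np.array([-0.4, 0.3])            # assumed source positions
a = lambda t: np.exp(1j * np.pi * np.arange(m) * np.sin(t))  # steering vector
A = np.column_stack([a(t) for t in true_angles])
R = A @ A.conj().T + 1e-6 * np.eye(m)          # idealized covariance
_, V = np.linalg.eigh(R)                       # eigenvalues ascending
En = V[:, :m - 2]                              # noise subspace (2 sources)
grid = np.linspace(-1.0, 1.0, 401)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ a(t)) for t in grid])
est = np.sort(grid[np.argsort(spectrum)[-2:]]) # two largest peaks
print(np.round(est, 1))                        # [-0.4  0.3]
```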
On valuing information in adaptive-management models.
Moore, Alana L; McCarthy, Michael A
2010-08-01
Active adaptive management looks at the benefit of using strategies that may be suboptimal in the near term but may provide additional information that will facilitate better management in the future. In many adaptive-management problems that have been studied, the optimal active and passive policies (accounting for learning when designing policies and designing policy on the basis of current best information, respectively) are very similar. This seems paradoxical: it suggests that, when faced with uncertainty about the best course of action, managers should spend very little effort on actively designing programs to learn about the system they are managing. We considered two possible reasons why active and passive adaptive solutions are often similar. First, the benefits of learning are often confined to the particular case study in the modeled scenario, whereas in reality information gained from local studies is often applied more broadly. Second, management objectives that incorporate the variance of an estimate may place greater emphasis on learning than more commonly used objectives that aim to maximize an expected value. We explored these issues in a case study of Merri Creek, Melbourne, Australia, in which the aim was to choose between two options for revegetation. We explicitly incorporated monitoring costs in the model. The value of the terminal rewards and the choice of objective both influenced the difference between active and passive adaptive solutions. Explicitly considering the cost of monitoring provided a different perspective on how the terminal reward and management objective affected learning. The states for which it was optimal to monitor did not always coincide with the states in which active and passive adaptive management differed. Our results emphasize that spending resources on monitoring is only optimal when the expected benefits of the options being considered are similar and when the pay-off for learning about their benefits is large.
Xu, Jingjie; Xie, Yan; Lu, Benzhuo; Zhang, Linbo
2016-08-25
The Debye-Hückel limiting law is used to study the binding kinetics of a substrate-enzyme system as well as to estimate the reaction rate of an electrostatically steered diffusion-controlled reaction process. It is based on a linearized Poisson-Boltzmann model and is known for its accurate predictions in dilute solutions. However, the substrate and product particles are in nonequilibrium states and are possibly charged, and their contributions to the total electrostatic field cannot be explicitly studied in the Poisson-Boltzmann model; hence the influence of substrate and product on the reaction rate coefficient was not known. In this work, we consider all the charged species, including the charged substrate, product, and mobile salt ions, in a Poisson-Nernst-Planck model, and then compare the results with previous work. The results indicate that both the charged substrate and the charged product can significantly influence the reaction rate coefficient, with different behaviors under different computational conditions. Interestingly, when substrate and product are both considered, under an overall neutral boundary condition for all the bulk charged species, the computed reaction rate kinetics recovers a similar Debye-Hückel limiting law again. This phenomenon implies that the charged product counteracts the influence of the charged substrate on the reaction rate coefficient. Our analysis discloses the fact that the total charge concentration of substrate and product, though each is individually in a nonequilibrium state, obeys an equilibrium Boltzmann distribution, and therefore contributes as a normal charged ion species to the ionic strength. This explains why the Debye-Hückel limiting law still works over a considerable range of conditions even though the effects of charged substrate and product particles are not specifically and explicitly considered in the theory.
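For reference, the Debye-Hückel limiting law for the kinetic salt effect, the dilute-solution baseline discussed above, predicts log10 k = log10 k0 + 2 A zE zS √I, with A ≈ 0.509 for water at 25 °C. A sketch (the charges and ionic strengths are illustrative, not values from the study):

```python
import math

# Debye-Hückel limiting-law kinetic salt effect: the rate coefficient of
# a reaction between species of charge zE and zS depends on ionic
# strength I through log10 k = log10 k0 + 2*A*zE*zS*sqrt(I).
def rate_coefficient(k0, z_enzyme, z_substrate, ionic_strength, A=0.509):
    return k0 * 10 ** (2 * A * z_enzyme * z_substrate
                       * math.sqrt(ionic_strength))

# oppositely charged reactants: electrostatic attraction is screened,
# so the rate decreases as ionic strength grows
print(rate_coefficient(1.0, +1, -1, 0.01) < 1.0)   # True
```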
Single- or multi-flavor Kondo effect in graphene
NASA Astrophysics Data System (ADS)
Zhu, Zhen-Gang; Ding, Kai-He; Berakdar, Jamal
2010-06-01
Based on the tight-binding formalism, we investigate the Anderson and the Kondo model for an adatom magnetic impurity above graphene. Different impurity positions are analyzed. Employing a partial-wave representation, we study the nature of the coupling between the impurity and the conduction electrons. The components from the two Dirac points are mixed while interacting with the impurity. Two configurations are considered explicitly: the adatom above one atom (ADA) and the adatom above the center of the honeycomb (ADC). For ADA the impurity couples with one flavor for both the A and B sublattices and both Dirac points. For ADC the impurity couples with multi-flavor states for a spinor state of the impurity. We show, explicitly for a 3d magnetic atom, that dz2, (dxz, dyz), and (dx2-y2, dxy) couple respectively with the Γ1, Γ5(E1), and Γ6(E2) representations (reps) of the C6v group in the ADC case. The bases for these reps of graphene are also derived explicitly. For ADA we calculate the Kondo temperature.
NASA Astrophysics Data System (ADS)
Wada, Y.; Wisser, D.; Bierkens, M. F. P.
2013-02-01
To sustain growing food demand and increasing standards of living, global water withdrawal and consumptive water use have been increasing rapidly. To analyze the human perturbation of water resources consistently over a large scale, a number of macro-scale hydrological models (MHMs) have been developed over recent decades. However, few models consider the feedback between water availability and water demand, and even fewer models explicitly incorporate water allocation from surface water and groundwater resources. Here, we integrate a global water demand model into a global water balance model, and simulate water withdrawal and consumptive water use over the period 1979-2010, considering water allocation from surface water and groundwater resources and explicitly taking into account feedbacks between supply and demand, using two re-analysis products: ERA-Interim and MERRA. We implement an irrigation water scheme, which works dynamically with the daily surface and soil water balance, and include a newly available extensive reservoir data set. Simulated surface water and groundwater withdrawal show generally good agreement with available reported national and sub-national statistics. The results show a consistent increase in both surface water and groundwater use worldwide, but groundwater use has been increasing more rapidly than surface water use since the 1990s. Human impacts on terrestrial water storage (TWS) signals are evident, altering the seasonal and inter-annual variability. The alteration is particularly large over heavily regulated basins such as the Colorado and the Columbia, and over the major irrigated basins such as the Mississippi, the Indus, and the Ganges. Including human water use generally improves the correlation of simulated TWS anomalies with those of the GRACE observations.
NASA Astrophysics Data System (ADS)
Smith, Mike U.; Scharmann, Lawrence
2008-02-01
This investigation delineates a multi-year action research agenda designed to develop an instructional model for teaching the nature of science (NOS) to preservice science teachers. Our past research strongly supports the use of explicit reflective instructional methods, which include Thomas Kuhn’s notion of learning by ostension and treating science as a continuum (i.e., comparing fields of study to one another for relative placement as less to more scientific). Instruction based on conceptual change precepts, however, also exhibits promise. Thus, the investigators sought to ascertain the degree to which conceptual change took place among students (n = 15) participating in the NOS instructional model. Three case studies are presented to illustrate successful conceptual changes that took place as a result of the NOS instructional model. All three cases represent students who claim a very conservative Christian heritage and for whom evolution was not considered a legitimate scientific theory prior to participating in the NOS instructional model. All three case study individuals, along with their twelve classmates, placed evolution as most scientific when compared to intelligent design and a fictional field of study called “Umbrellaology.”
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study in which the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations, and its potential benefits.
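For readers unfamiliar with the formalism, state space variation-propagation models are conventionally written in the linear form below; the notation is the generic one from the stream-of-variation literature and is not necessarily this paper's:

```latex
x_k = A_{k-1}\, x_{k-1} + B_k\, u_k + w_k, \qquad
y_k = C_k\, x_k + v_k
```

where x_k collects the part deviations after stage k, u_k the fixture and operation inputs at that stage, y_k the measured features, and w_k, v_k unmodelled disturbances and measurement noise. The operation variations discussed above (thermal distortion, tool wear, tool deflection) would enter through u_k or w_k.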
NASA Astrophysics Data System (ADS)
Chaudhary, Tarun; Khanna, Gargi
2017-03-01
The purpose of this paper is to explore the junctionless double gate vertical slit field effect transistor (JLDG VeSFET), which has reduced short channel effects, and to develop an analytical threshold voltage model for the device that, for the first time, considers the impact of thermal variations. The model has been derived by solving the 2D Poisson's equation, and the effects of temperature variation on various electrical parameters of the device, such as output resistance (Rout), drain current, mobility, subthreshold slope and DIBL, have been studied and described in the paper. The model provides deep physical insight into the device behavior and is also very helpful in contributing to the design space exploration for the JLDG VeSFET. The proposed model is verified by simulative analysis at different radii of the device, and good agreement is observed between the analytical model and the simulation results.
Are adverse effects incorporated in economic models? An initial review of current practice.
Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N
2009-12-01
To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (< 4%) used the results of a meta-analysis from the systematic review of clinical effectiveness and none used only data from the review without further manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. 
Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total, 30% of the models that included adverse effects used withdrawals related to drug toxicity and therefore might be interpreted as using withdrawals to capture adverse effects, but this was explicitly stated in only three reports. Of the 37 models that did not include adverse effects, 18 provided justification for this omission, most commonly lack of data; 19 appeared to make no explicit consideration of adverse effects in the model. There is an implicit assumption within modelling guidance that adverse effects are very important but there is a lack of clarity regarding how they should be dealt with and considered in modelling. In many cases a lack of clear reporting in the HTAs made it extremely difficult to ascertain what had actually been carried out in consideration of adverse effects. The main recommendation is for much clearer and explicit reporting of adverse effects, or their exclusion, in decision models and for explicit recognition in future guidelines that 'all relevant outcomes' should include some consideration of adverse events.
Fractional cable model for signal conduction in spiny neuronal dendrites
NASA Astrophysics Data System (ADS)
Vitali, Silvia; Mainardi, Francesco
2017-06-01
The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing a fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable that satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
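The replacement the abstract describes can be sketched in the usual dimensionless cable notation (the choice of symbols here follows standard cable theory and is our assumption, not necessarily the paper's):

```latex
{}^{C}\!D^{\alpha}_{t}\, V(x,t)
  \;=\; \frac{\partial^{2} V}{\partial x^{2}} \;-\; V,
\qquad 0 < \alpha \le 1,
```

where ^C D^α_t is the Caputo fractional derivative in time. The signalling problem imposes V(x,0) = 0 for x > 0 and a prescribed input V(0,t) = g(t) at the accessible end of the semi-infinite cable x > 0; setting α = 1 recovers the classical cable equation.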
Silva, Nuno Miguel; Rio, Jeremy; Currat, Mathias
2017-12-15
Recent advances in sequencing technologies have allowed for the retrieval of ancient DNA data (aDNA) from skeletal remains, providing direct genetic snapshots from diverse periods of human prehistory. Comparing samples taken in the same region but at different times, hereafter called "serial samples", may indicate whether there is continuity in the peopling history of that area or whether an immigration of a genetically different population has occurred between the two sampling times. However, the exploration of genetic relationships between serial samples generally ignores their geographical locations and the spatiotemporal dynamics of populations. Here, we present a new coalescent-based, spatially explicit modelling approach to investigate population continuity using aDNA, which includes two fundamental elements neglected in previous methods: population structure and migration. The approach also considers the extensive temporal and geographical variance that is commonly found in aDNA population samples. We first showed that our spatially explicit approach is more conservative than the previous (panmictic) approach and should be preferred to test for population continuity, especially when small and isolated populations are considered. We then applied our method to two mitochondrial datasets from Germany and France, both including modern and ancient lineages dating from the early Neolithic. The results clearly reject population continuity for the maternal line over the last 7500 years for the German dataset but not for the French dataset, suggesting regional heterogeneity in post-Neolithic migratory processes. Here, we demonstrate the benefits of using a spatially explicit method when investigating population continuity with aDNA. It constitutes an improvement over panmictic methods by considering the spatiotemporal dynamics of genetic lineages and the precise location of ancient samples. 
The method can be used to investigate population continuity between any pair of serial samples (ancient-ancient or ancient-modern) and to investigate more complex evolutionary scenarios. Although we based our study on mitochondrial DNA sequences, diploid molecular markers of different types (DNA, SNP, STR) can also be simulated with our approach. It thus constitutes a promising tool for the analysis of the numerous aDNA datasets being produced, including genome wide data, in humans but also in many other species.
A Behavioral Model of Landscape Change in the Amazon Basin: The Colonist Case
NASA Technical Reports Server (NTRS)
Walker, R. A.; Drzyzga, S. A.; Li, Y. L.; Wi, J. G.; Caldas, M.; Arima, E.; Vergara, D.
2004-01-01
This paper presents the prototype of a predictive model capable of describing both magnitudes of deforestation and its spatial articulation into patterns of forest fragmentation. In a departure from other landscape models, it establishes an explicit behavioral foundation for algorithm development, predicated on notions of the peasant economy and on household production theory. It takes a 'bottom-up' approach, generating the process of land-cover change occurring at lot level together with the geography of a transportation system to describe regional landscape change. In other words, it translates the decentralized decisions of individual households into a collective, spatial impact. In so doing, the model unites the richness of survey research on farm households with the analytical rigor of spatial analysis enabled by geographic information systems (GIS). The paper describes earlier efforts at spatial modeling, provides a critique of the so-called spatially explicit model, and elaborates a behavioral foundation by considering farm practices of colonists in the Amazon basin. It then uses insight from the behavioral statement to motivate a GIS-based model architecture. The model is implemented for a long-standing colonization frontier in the eastern sector of the basin, along the Trans-Amazon Highway in the State of Para, Brazil. Results are subjected to both sensitivity analysis and error assessment, and suggestions are made about how the model could be improved.
NASA Astrophysics Data System (ADS)
Braakhekke, Maarten; Rebel, Karin; Dekker, Stefan; Smith, Benjamin; Sutanudjaja, Edwin; van Beek, Rens; van Kampenhout, Leo; Wassen, Martin
2017-04-01
Ecosystems in up to 30% of the global land surface are potentially influenced by the presence of a shallow groundwater table. In these regions upward water flux by capillary rise increases soil moisture availability in the root zone, which has a strong effect on evapotranspiration, vegetation dynamics, and fluxes of carbon and nitrogen. Most global hydrological models and several land surface models simulate groundwater table dynamics and their effects on land surface processes. However, these models typically have relatively simplistic representations of vegetation and do not consider changes in vegetation type and structure. Dynamic global vegetation models (DGVMs) describe the land surface from an ecological perspective, combining a detailed description of vegetation dynamics and structure with biogeochemical processes, and are thus more appropriate to simulate the ecological and biogeochemical effects of groundwater interactions. However, currently virtually all DGVMs ignore these effects, assuming that water tables are too deep to affect soil moisture in the root zone. We have implemented a tight coupling between the dynamic global ecosystem model LPJ-GUESS and the global hydrological model PCR-GLOBWB, which explicitly simulates groundwater dynamics. This coupled model allows us to explicitly account for groundwater effects on terrestrial ecosystem processes at global scale. Results of global simulations indicate that groundwater strongly influences fluxes of water, carbon and nitrogen in many regions, adding up to a considerable effect at the global scale.
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
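As a hedged illustration of this setup, the sketch below draws a 2 × 2 Hermitian matrix with Gaussian entries as the post-quench Hamiltonian and assembles the exact work distribution for a sudden quench from a sharply given two-level initial Hamiltonian. The ensemble normalization, the seed, and the chosen inverse temperature are illustrative, not the paper's conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gue(n):
    """Draw an n x n Hermitian matrix with Gaussian entries (GUE-style)."""
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (x + x.conj().T) / 2.0

def work_distribution(h0, h1, beta):
    """Exact work pdf (point masses) for a sudden quench h0 -> h1,
    starting from a thermal state of h0 at inverse temperature beta."""
    e0, v0 = np.linalg.eigh(h0)
    e1, v1 = np.linalg.eigh(h1)
    p0 = np.exp(-beta * e0)
    p0 /= p0.sum()
    overlap = np.abs(v1.conj().T @ v0) ** 2  # |<m|n>|^2
    works, probs = [], []
    for i in range(len(e0)):
        for m in range(len(e1)):
            works.append(e1[m] - e0[i])
            probs.append(p0[i] * overlap[m, i])
    return np.array(works), np.array(probs)

h0 = np.diag([-1.0, 1.0])   # fixed two-level initial Hamiltonian
h1 = sample_gue(2)          # random final Hamiltonian
w, p = work_distribution(h0, h1, beta=1.0)

# For a sudden quench, the mean work equals Tr[rho0 (h1 - h0)].
rho0 = np.diag(np.exp(-1.0 * np.diag(h0)))
rho0 /= np.trace(rho0)
mean_w = np.real(np.trace(rho0 @ (h1 - h0)))
```

The point masses sum to one and reproduce the trace formula for the average work, which is a useful consistency check before averaging over the ensemble.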
Review Of The Working Group On Precession And The Ecliptic
NASA Astrophysics Data System (ADS)
Hilton, J. L.
2006-08-01
The IAU Working Group on Precession and the Ecliptic was charged with providing a precession model that was both dynamically consistent and compatible with the IAU 2000A nutation model, along with an updated definition and model for the ecliptic. The report of the working group has been accepted for publication in Celestial Mechanics (Hilton et al. 2006, in press) and has resulted in a recommendation to be considered at this General Assembly of the IAU. Specifically, the working group recommends: 1. That the terms lunisolar precession and planetary precession be replaced by precession of the equator and precession of the ecliptic, respectively. 2. That, beginning on 1 January 2009, the precession component of the IAU 2000A precession-nutation model be replaced by the P03 precession theory, of Capitaine et al. (2003, A&A, 412, 567-586) for the precession of the equator (Eqs. 37) and the precession of the ecliptic (Eqs. 38); the same paper provides the polynomial developments for the P03 primary angles and a number of derived quantities for use in both the equinox based and Celestial Intermediate Origin based paradigms. 3. That the choice of precession parameters be left to the user. 4. That the ecliptic pole should be explicitly defined by the mean orbital angular momentum vector of the Earth-Moon barycenter in an inertial reference frame, and this definition should be explicitly stated to avoid confusion with other, older definitions.
Effects of Explicit Instructions, Metacognition, and Motivation on Creative Performance
ERIC Educational Resources Information Center
Hong, Eunsook; O'Neil, Harold F.; Peng, Yun
2016-01-01
Effects of explicit instructions, metacognition, and intrinsic motivation on creative homework performance were examined in 303 Chinese 10th-grade students. Models that represent hypothesized relations among these constructs and trait covariates were tested using structural equation modelling. Explicit instructions geared to originality were…
Towards an Understanding of Atmospheric Balance
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
During a 35 year period I published 30+ peer-reviewed papers and technical reports concerning, in part or whole, the topic of atmospheric balance. Most used normal modes, either implicitly or explicitly, as the appropriate diagnostic tool. This included examination of nonlinear balance in several different global and regional models using a variety of novel metrics as well as development of nonlinear normal mode initialization schemes for particular global and regional models. Recent studies also included the use of adjoint models and OSSEs to answer some questions regarding balance. I will summarize what I learned through those many works, but also present what I see as remaining issues to be considered or investigated.
Squeezing and its graphical representations in the anharmonic oscillator model
NASA Astrophysics Data System (ADS)
Tanaś, R.; Miranowicz, A.; Kielich, S.
1991-04-01
The problem of squeezing and its graphical representations in the anharmonic oscillator model is considered. Explicit formulas for squeezing, principal squeezing, and the quasiprobability distribution (QPD) function are given and illustrated graphically. Approximate analytical formulas for the variances, extremal variances, and QPD are obtained for the case of small nonlinearities and large numbers of photons. The possibility of almost perfect squeezing in the model is demonstrated and its graphical representations in the form of variance lemniscates and QPD contours are plotted. For large numbers of photons the crescent shape of the QPD contours is hardly visible and quite regular ellipses are obtained.
Implicit Associations with Popularity in Early Adolescence: An Approach-Avoidance Analysis
ERIC Educational Resources Information Center
Lansu, Tessa A. M.; Cillessen, Antonius H. N.; Karremans, Johan C.
2012-01-01
This study examined 241 early adolescents' implicit and explicit associations with popularity. The peer status and gender of both the targets and the perceivers were considered. Explicit associations with popularity were assessed with sociometric methods. Implicit associations with popularity were assessed with an approach-avoidance task (AAT).…
Modeling Bloch oscillations in nanoscale Josephson junctions.
Vora, Heli; Kautz, R L; Nam, S W; Aumentado, J
2017-08-01
Bloch oscillations in nanoscale Josephson junctions with a Coulomb charging energy comparable to the Josephson coupling energy are explored within the context of a model previously considered by Geigenmüller and Schön that includes Zener tunneling and treats quasiparticle tunneling as an explicit shot-noise process. The dynamics of the junction quasicharge are investigated numerically using both Monte Carlo and ensemble approaches to calculate voltage-current characteristics in the presence of microwaves. We examine in detail the origin of harmonic and subharmonic Bloch steps at dc biases I = ( n/m )2 ef induced by microwaves of frequency f and consider the optimum parameters for the observation of harmonic ( m = 1) steps. We also demonstrate that the GS model allows a detailed semiquantitative fit to experimental voltage-current characteristics previously obtained at the Chalmers University of Technology, confirming and strengthening the interpretation of the observed microwave-induced steps in terms of Bloch oscillations.
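The quoted step positions I = (n/m)2ef are straightforward to evaluate. A small sketch (the microwave frequency used here is an arbitrary illustration, not a value from the Chalmers experiment):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def bloch_step_current(n, m, f):
    """dc bias current (A) of the (n, m) microwave-induced Bloch step,
    I = (n/m) * 2 e f, for microwave frequency f in Hz."""
    return (n / m) * 2.0 * E_CHARGE * f

# Fundamental (n = m = 1) step for 5 GHz microwaves: about 1.6 nA.
i_11 = bloch_step_current(1, 1, 5e9)
# First subharmonic (n = 1, m = 2) step sits at half that current.
i_12 = bloch_step_current(1, 2, 5e9)
```

The nanoampere scale of these steps is what makes their experimental observation demanding and motivates the search for optimum junction parameters discussed in the abstract.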
Modeling Bloch oscillations in nanoscale Josephson junctions
Vora, Heli; Kautz, R. L.; Nam, S. W.; Aumentado, J.
2018-01-01
Bloch oscillations in nanoscale Josephson junctions with a Coulomb charging energy comparable to the Josephson coupling energy are explored within the context of a model previously considered by Geigenmüller and Schön that includes Zener tunneling and treats quasiparticle tunneling as an explicit shot-noise process. The dynamics of the junction quasicharge are investigated numerically using both Monte Carlo and ensemble approaches to calculate voltage-current characteristics in the presence of microwaves. We examine in detail the origin of harmonic and subharmonic Bloch steps at dc biases I = (n/m)2ef induced by microwaves of frequency f and consider the optimum parameters for the observation of harmonic (m = 1) steps. We also demonstrate that the GS model allows a detailed semiquantitative fit to experimental voltage-current characteristics previously obtained at the Chalmers University of Technology, confirming and strengthening the interpretation of the observed microwave-induced steps in terms of Bloch oscillations. PMID:29577106
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
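As a rough illustration of the continuous-time competition dynamics underlying the abstract (not the authors' fitted reef model), here is a minimal two-species Lotka-Volterra competition integrator; the growth rates and competition coefficients are invented for the example.

```python
def lv_competition_step(x, r, a, dt=0.01):
    """One explicit Euler step of two-species Lotka-Volterra competition:
    dx_i/dt = r_i * x_i * (1 - sum_j a[i][j] * x_j)."""
    growth = [
        r[i] * x[i] * (1.0 - sum(a[i][j] * x[j] for j in range(2)))
        for i in range(2)
    ]
    return [x[i] + dt * growth[i] for i in range(2)]

# Coexistence example: symmetric cross-competition (0.5) weaker than
# self-limitation (1.0), so both species persist.
x = [0.1, 0.2]
r, a = [1.0, 1.0], [[1.0, 0.5], [0.5, 1.0]]
for _ in range(5000):
    x = lv_competition_step(x, r, a)
# This parameterization has the coexistence equilibrium x1 = x2 = 2/3.
```

In the abundance-dependent Markov view of the abstract, the bracketed term plays the role of a transition probability that shrinks as the destination states fill available space.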
The Evolution of Data-Information-Knowledge-Wisdom in Nursing Informatics.
Ronquillo, Charlene; Currie, Leanne M; Rodney, Paddy
2016-01-01
The data-information-knowledge-wisdom (DIKW) model has been widely adopted in nursing informatics. In this article, we examine the evolution of DIKW in nursing informatics while incorporating critiques from other disciplines. This includes examination of assumptions of linearity and hierarchy and an exploration of the implicit philosophical grounding of the model. Two guiding questions are considered: (1) Does DIKW serve clinical information systems, nurses, or both? and (2) What level of theory does DIKW occupy? The DIKW model has been valuable in advancing the independent field of nursing informatics. We offer that if the model is to continue to move forward, its role and functions must be explicitly addressed.
2015-08-01
Figure 4. Data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment by... Acronyms: DDD, dichlorodiphenyldichloroethane; DDE, dichlorodiphenyldichloroethylene; DDT, dichlorodiphenyltrichloroethane; DoD, Department of Defense; ERM... DDD) at the other site. The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the
Yigit, Cemil; Heyda, Jan; Dzubiella, Joachim
2015-08-14
We introduce a set of charged patchy particle models (CPPMs) in order to systematically study the influence of electrostatic charge patchiness and multipolarity on macromolecular interactions by means of implicit-solvent, explicit-ion Langevin dynamics simulations employing the Gromacs software. We consider well-defined zero-, one-, and two-patched spherical globules each of the same net charge and (nanometer) size which are composed of discrete atoms. The studied mono- and multipole moments of the CPPMs are comparable to those of globular proteins with similar size. We first characterize ion distributions and electrostatic potentials around a single CPPM. Although angle-resolved radial distribution functions reveal the expected local accumulation and depletion of counter- and co-ions around the patches, respectively, the orientation-averaged electrostatic potential shows only a small variation among the various CPPMs due to space charge cancellations. Furthermore, we study the orientation-averaged potential of mean force (PMF), the number of accumulated ions on the patches, as well as the CPPM orientations along the center-to-center distance of a pair of CPPMs. We compare the PMFs to the classical Derjaguin-Verwey-Landau-Overbeek theory and previously introduced orientation-averaged Debye-Hückel pair potentials including dipolar interactions. Our simulations confirm the adequacy of the theories in their respective regimes of validity, while low salt concentrations and large multipolar interactions remain a challenge for tractable theoretical descriptions.
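The Debye-Hückel comparison in the abstract rests on the screened-Coulomb form of the monopole-monopole interaction. A minimal point-charge sketch (ignoring finite particle size and the multipolar terms the paper also treats; all parameter values below are illustrative):

```python
import math

def debye_huckel_energy(z1, z2, r_nm, debye_len_nm, bjerrum_nm=0.71):
    """Screened Coulomb (Debye-Hueckel) interaction, in units of kT,
    between point charges z1, z2 (units of e) at separation r_nm (nm).
    bjerrum_nm ~0.71 nm is the Bjerrum length of water at room
    temperature; debye_len_nm is the Debye screening length."""
    return bjerrum_nm * z1 * z2 * math.exp(-r_nm / debye_len_nm) / r_nm

# Two like-charged globules (net charge +8e) at 5 nm separation in
# roughly 100 mM salt (Debye length ~1 nm): the repulsion is strongly
# screened, well below kT.
u = debye_huckel_energy(8, 8, 5.0, 1.0)
```

Patchiness and multipolar terms modify this monopole baseline, which is exactly the regime dependence the simulations in the abstract probe.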
Diffusion on an Ising chain with kinks
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Mansour, Toufik; Severini, Simone
2009-07-01
We count the number of histories between the two degenerate minimum energy configurations of the Ising model on a chain, as a function of the length n and the number d of kinks that appear above the critical temperature. This is equivalent to counting permutations of length n avoiding certain subsequences depending on d. We give explicit generating functions and compute the asymptotics. The setting considered is relevant when describing dynamics induced by quantum Hamiltonians with deconfined quasi-particles.
Fourth order difference methods for hyperbolic IBVP's
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the nonlinear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD-method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
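As a simplified illustration of this method-of-lines setup, the sketch below advects a smooth wave with the explicit fourth-order central operator in space and the third-order TVD (Shu-Osher) Runge-Kutta scheme in time. Periodic boundaries replace the paper's boundary treatment, and the grid and CFL values are illustrative.

```python
import math

def rhs(u, dx):
    """Fourth-order central approximation of -du/dx on a periodic grid."""
    n = len(u)
    return [
        -(-u[(i + 2) % n] + 8 * u[(i + 1) % n]
          - 8 * u[(i - 1) % n] + u[(i - 2) % n]) / (12 * dx)
        for i in range(n)
    ]

def rk3_tvd_step(u, dx, dt):
    """One third-order TVD (Shu-Osher) Runge-Kutta step."""
    n = len(u)
    u1 = [u[i] + dt * k for i, k in enumerate(rhs(u, dx))]
    u2 = [0.75 * u[i] + 0.25 * (u1[i] + dt * k)
          for i, k in enumerate(rhs(u1, dx))]
    return [u[i] / 3.0 + 2.0 / 3.0 * (u2[i] + dt * k)
            for i, k in enumerate(rhs(u2, dx))]

# Advect a smooth sine wave once around the periodic domain [0, 1).
n, t_final = 64, 1.0
dx = 1.0 / n
dt = 0.5 * dx  # CFL number 0.5, inside the RK3 stability region
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
for _ in range(int(round(t_final / dt))):
    u = rk3_tvd_step(u, dx, dt)
# After one full period the solution should return to its initial profile.
err = max(abs(u[i] - math.sin(2 * math.pi * i * dx)) for i in range(n))
```

For a smooth solution the error after one period is tiny; resolving discontinuities requires the viscosity filter and flux splitting the abstract describes.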
Hu, Xiao; Maffucci, Irene; Contini, Alessandro
2018-05-13
The inclusion of direct effects mediated by water during the ligand-receptor recognition is a hot topic of modern computational chemistry applied to drug discovery and development. Docking or virtual screening with explicit hydration is still debatable, despite the successful cases that have been presented in recent years. Indeed, how to select the water molecules that will be included in the docking process or how the included waters should be treated remain open questions. In this review, we will discuss some of the most recent methods that can be used in computational drug discovery and drug development when the effect of a single water, or of a small network of interacting waters, needs to be explicitly considered. Here, we analyse software to aid the selection, or to predict the position, of water molecules that are going to be explicitly considered in later docking studies. We also present software and protocols able to efficiently treat flexible water molecules during docking, including examples of applications. Finally, we discuss methods based on molecular dynamics simulations that can be used to integrate docking studies or to reliably and efficiently compute binding energies of ligands in presence of interfacial or bridging water molecules. Software applications aiding the design of new drugs that exploit water molecules, either as displaceable residues or as bridges to the receptor, are constantly being developed. Although further validation is needed, workflows that explicitly consider water will probably become a standard for computational drug discovery soon.
Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models
NASA Technical Reports Server (NTRS)
Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.
1996-01-01
An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
The Kadomtsev–Petviashvili equation as a source of integrable model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccari, A.
1996-12-01
A new integrable and nonlinear partial differential equation (PDE) in 2+1 dimensions is obtained, by an asymptotically exact reduction method based on Fourier expansion and spatiotemporal rescaling, from the Kadomtsev–Petviashvili equation. The integrability property is explicitly demonstrated, by exhibiting the corresponding Lax pair, that is obtained by applying the reduction technique to the Lax pair of the Kadomtsev–Petviashvili equation. This model equation is likely to be of applicative relevance, because it may be considered a consistent approximation of a large class of nonlinear evolution PDEs. © 1996 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Yu, Jinchen; Peng, Mingshu
2016-10-01
In this paper, a Kaldor-Kalecki model of business cycle with both discrete and distributed delays is considered. With the corresponding characteristic equation analyzed, the local stability of the positive equilibrium is investigated. It is found that there exist Hopf bifurcations when the discrete time delay passes a sequence of critical values. By applying the method of multiple scales, the explicit formulae which determine the direction of Hopf bifurcation and the stability of bifurcating periodic solutions are derived. Finally, numerical simulations are carried out to illustrate our main results.
A maximum (non-extensive) entropy approach to equity options bid-ask spread
NASA Astrophysics Data System (ADS)
Tapiero, Oren J.
2013-07-01
The cross-section of options bid-ask spreads with their strikes is modelled by maximising the Kaniadakis entropy. A theoretical model results with the bid-ask spread depending explicitly on the implied volatility, the probability of expiring at-the-money, and an asymmetric information parameter (κ). Considering AIG as a test case for the period between January 2006 and October 2008, we find that information flows uniquely from the trading activity in the underlying asset to its derivatives, suggesting that κ is possibly an option-implied measure of the current state of trading liquidity in the underlying asset.
Extinguishing trace fear engages the retrosplenial cortex rather than the amygdala
Kwapis, Janine L.; Jarome, Timothy J.; Lee, Jonathan L.; Gilmartin, Marieke R.; Helmstetter, Fred J.
2013-01-01
Extinction learning underlies the treatment for a variety of anxiety disorders. Most of what is known about the neurobiology of extinction is based on standard “delay” fear conditioning, in which awareness is not required for learning. Little is known about how complex, explicit associations extinguish, however. “Trace” conditioning is considered to be a rodent model of explicit fear because it relies on both the cortex and hippocampus and requires explicit contingency awareness in humans. Here, we explore the neural circuit supporting trace fear extinction in order to better understand how complex memories extinguish. We first show that the amygdala is selectively involved in delay fear extinction; blocking intra-amygdala glutamate receptors disrupted delay, but not trace extinction. Further, ERK phosphorylation was increased in the amygdala after delay, but not trace extinction. We then identify the retrosplenial cortex (RSC) as a key structure supporting trace extinction. ERK phosphorylation was selectively increased in the RSC following trace extinction and blocking intra-RSC NMDA receptors impaired trace, but not delay extinction. These findings indicate that delay and trace extinction require different neural circuits; delay extinction requires plasticity in the amygdala whereas trace extinction requires the RSC. Anxiety disorders linked to explicit memory may therefore depend on cortical processes that have not been traditionally targeted by extinction studies based on delay fear. PMID:24055593
Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.
2015-01-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
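The implicit/explicit thermodynamic cycle reduces to simple free energy bookkeeping once the decoupling legs are computed. A minimal sketch with invented leg values (in a real application the legs come from the localized decoupling simulations described above, not from hand-picked numbers):

```python
def cycle_delta_f(df_implicit, dg_decouple_a, dg_decouple_b):
    """Free energy difference A -> B in explicit solvent via the cycle:
    A(explicit) -> A(implicit) -> B(implicit) -> B(explicit).
    dg_decouple_x: free energy of switching basin x from the explicit-
    to the implicit-solvent description (hypothetical values here)."""
    return dg_decouple_a + df_implicit - dg_decouple_b

# Hypothetical leg values in kcal/mol, loosely on the few-kcal/mol
# scale quoted in the abstract for alanine dipeptide:
df_explicit = cycle_delta_f(df_implicit=2.5,
                            dg_decouple_a=1.2,
                            dg_decouple_b=0.9)
```

Because each leg samples within a single basin, no leg has to wait for a rare barrier crossing in explicit solvent, which is the source of the quoted speedup.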
Spatially explicit multi-criteria decision analysis for managing vector-borne diseases
2011-01-01
The complex epidemiology of vector-borne diseases creates significant challenges in the design and delivery of prevention and control strategies, especially in light of rapid social and environmental changes. Spatial models for predicting disease risk based on environmental factors such as climate and landscape have been developed for a number of important vector-borne diseases. The resulting risk maps have proven value for highlighting areas for targeting public health programs. However, these methods generally only offer technical information on the spatial distribution of disease risk itself, which may be incomplete for making decisions in a complex situation. In prioritizing surveillance and intervention strategies, decision-makers often also need to consider spatially explicit information on other important dimensions, such as the regional specificity of public acceptance, population vulnerability, resource availability, intervention effectiveness, and land use. There is a need for a unified strategy for supporting public health decision making that integrates available data for assessing spatially explicit disease risk, with other criteria, to implement effective prevention and control strategies. Multi-criteria decision analysis (MCDA) is a decision support tool that allows for the consideration of diverse quantitative and qualitative criteria using both data-driven and qualitative indicators for evaluating alternative strategies with transparency and stakeholder participation. Here we propose an MCDA-based approach to the development of geospatial models and spatially explicit decision support tools for the management of vector-borne diseases. We describe the conceptual framework that MCDA offers as well as technical considerations, approaches to implementation and expected outcomes. We conclude that MCDA is a powerful tool that offers tremendous potential for use in public health decision-making in general and vector-borne disease management in particular.
PMID:22206355
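The abstract above does not commit to a particular aggregation rule; the simplest MCDA instance is a weighted sum over normalised criteria. The sketch below (illustrative only; criterion names, weights, and the min-max normalisation are our assumptions) ranks candidate intervention strategies:

```python
import numpy as np

def mcda_rank(scores, weights):
    """Weighted-sum MCDA sketch: min-max normalise each criterion
    (columns of `scores`, one row per intervention strategy), then
    rank strategies by their weighted aggregate score (best first)."""
    s = np.asarray(scores, dtype=float)
    rng = s.max(axis=0) - s.min(axis=0)
    rng[rng == 0] = 1.0                 # constant criteria contribute 0
    norm = (s - s.min(axis=0)) / rng
    agg = norm @ np.asarray(weights, dtype=float)
    return agg, np.argsort(-agg)
```

For example, three strategies scored on hypothetical criteria (risk reduction, public acceptance, cost-effectiveness) with weights summing to one are ranked by `mcda_rank(scores, [0.2, 0.3, 0.5])`; spatially explicit use would apply this per map cell.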
Modeling interactions between political parties and electors
NASA Astrophysics Data System (ADS)
Bagarello, F.; Gargano, F.
2017-09-01
In this paper we extend some recent results on an operatorial approach to the description of alliances between political parties interacting among themselves and with a basin of electors. In particular, we propose and compare three different models, deducing the dynamics of their related decision functions, i.e., the attitude of each party toward forming an alliance or not. In the first model the interactions between each party and their electors are considered. We show that these interactions drive the decision functions toward certain asymptotic values depending on the electors only: this is the perfect party, which behaves following the electors' suggestions. The second model is an extension of the first one in which we include a rule which modifies the status of the electors, and consequently the decision functions, at some specific time step. In the third model we neglect the interactions with the electors while we consider cubic and quartic interactions between the parties, and we show that we get (slightly oscillating) asymptotic values for the decision functions, close to their initial values. This is the real party, which does not listen to the electors. Several explicit situations are considered in detail and numerical results are also shown.
Technical Note: Effect of explicit M and N-shell atomic transitions on a low-energy x-ray source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, Peter G. F., E-mail: peter.watson@mail.mcgill.ca; Seuntjens, Jan
Purpose: In EGSnrc, atomic transitions to and from the M and N-shells are treated in an average way by default. This approach is justified when the energy difference between explicit and average M- and N-shell binding energies is less than 1 keV, which for most applications can be considered negligible. However, for simulations of low energy x-ray sources on thin, high-Z targets, characteristic x-rays can make up a significant portion of the source spectra. As of release V4-2.4.0, EGSnrc has included an option to enable a more complete algorithm of all atomic transitions available in the EADL compilation. In this paper, the effect of M and N-shell averaging on the calculation of the half-value layer (HVL) and relative depth dose (RDD) curve of a 50 kVp intraoperative x-ray tube with a thin gold target was investigated. Methods: A 50 kVp miniature x-ray source with a gold target (The INTRABEAM System, Carl Zeiss, Germany) was modeled with the EGSnrc user code cavity, both with and without M and N-shell averaging. From photon fluence spectra simulations, the source HVLs were determined analytically. The same source model was then used with egs-chamber to calculate RDD curves in water. Results: A 4% increase of HVL was reported when accounting for explicit M and N-shell transitions, and up to a 9% decrease in local relative dose for normalization at 3 mm depth in water. Conclusions: The EGSnrc default of using averaged M and N-shell binding energies has an observable effect on the HVL and RDD of a low energy x-ray source with a high-Z target. For accurate modeling of this class of devices, explicit atomic transitions should be included.
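Determining an HVL analytically from a simulated fluence spectrum reduces to a one-dimensional root find: the filter thickness at which the energy-fluence-weighted, exponentially attenuated kerma proxy drops to half its unfiltered value. The sketch below uses placeholder spectrum and attenuation values (not the simulated INTRABEAM spectrum or tabulated coefficient data):

```python
import numpy as np

# Placeholder 50 kVp-like spectrum (keV, relative fluence) and filter
# linear attenuation coefficients (1/mm) -- illustrative values only.
energies = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
fluence  = np.array([0.05, 0.30, 0.35, 0.20, 0.10])
mu       = np.array([7.0, 1.5, 0.45, 0.20, 0.12])

def kerma(t):
    """Energy-fluence proxy for air kerma behind t mm of filtration."""
    return float(np.sum(fluence * energies * np.exp(-mu * t)))

def hvl(lo=0.0, hi=50.0):
    """First half-value layer: the filtration halving the unattenuated
    kerma, found by bisection (kerma decreases monotonically with t)."""
    target = 0.5 * kerma(0.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kerma(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)
```

Because low-energy bins attenuate fastest, adding or removing soft characteristic lines shifts the computed HVL, which is the mechanism behind the reported 4% difference.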
Emerging from the bottleneck: Benefits of the comparative approach to modern neuroscience
Brenowitz, Eliot A.; Zakon, Harold H.
2015-01-01
Neuroscience historically exploited a wide diversity of animal taxa. Recently, however, research focused increasingly on a few model species. This trend accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. PMID:25800324
Are Explicit Apologies Proportional to the Offenses They Address?
ERIC Educational Resources Information Center
Heritage, John; Raymond, Chase Wesley
2016-01-01
We consider here Goffman's proposal of proportionality between virtual offenses and remedial actions, based on the examination of 102 cases of explicit apologies. To this end, we offer a typology of the primary apology formats within the dataset, together with a broad categorization of the types of virtual offenses to which these apologies are…
Sleep Enhances Explicit Recollection in Recognition Memory
ERIC Educational Resources Information Center
Drosopoulos, Spyridon; Wagner, Ullrich; Born, Jan
2005-01-01
Recognition memory is considered to be supported by two different memory processes, i.e., the explicit recollection of information about a previous event and an implicit process of recognition based on a contextual sense of familiarity. Both types of memory supposedly rely on distinct memory systems. Sleep is known to enhance the consolidation of…
CP violation in heavy MSSM Higgs scenarios
Carena, M.; Ellis, J.; Lee, J. S.; ...
2016-02-18
We introduce and explore new heavy Higgs scenarios in the Minimal Supersymmetric Standard Model (MSSM) with explicit CP violation, which have important phenomenological implications that may be testable at the LHC. For soft supersymmetry-breaking scales M S above a few TeV and a charged Higgs boson mass M H+ above a few hundred GeV, new physics effects including those from explicit CP violation decouple from the light Higgs boson sector. However, such effects can significantly alter the phenomenology of the heavy Higgs bosons while still being consistent with constraints from low-energy observables, for instance electric dipole moments. To consider scenarios with a charged Higgs boson much heavier than the Standard Model (SM) particles but much lighter than the supersymmetric particles, we revisit previous calculations of the MSSM Higgs sector. We compute the Higgs boson masses in the presence of CP violating phases, implementing improved matching and renormalization-group (RG) effects, as well as two-loop RG effects from the effective two-Higgs Doublet Model (2HDM) scale M H± to the scale M S. Here, we illustrate the possibility of non-decoupling CP-violating effects in the heavy Higgs sector using new benchmark scenarios named.
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on the state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and the external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported onto a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
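The RC framework mentioned above can be illustrated at its smallest scale by a single-node model (the paper's PSO-estimated network and the explicit MPC itself are not reproduced here; all parameter values below are illustrative assumptions):

```python
def simulate_room(T0, T_amb, Q, R, C, dt, steps):
    """Forward-Euler simulation of a single-node RC thermal model,
        C * dT/dt = (T_amb - T) / R + Q,
    where R is the envelope thermal resistance, C the lumped thermal
    capacitance, and Q the heating power delivered to the zone."""
    T, traj = T0, [T0]
    for _ in range(steps):
        T += dt * ((T_amb - T) / (R * C) + Q / C)
        traj.append(T)
    return traj
```

With Q = 0 the room relaxes toward the ambient temperature with time constant R*C; with constant heating it settles at T_amb + Q*R, which is the steady-state balance a controller trades off against energy use.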
Scheiner, Samuel M
2014-02-01
One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal, with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone selected for similar amounts of instability, but in combination resulted in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.
Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots
NASA Astrophysics Data System (ADS)
Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, that method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control considering the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. We thus formulate the MPC as a quadratic problem with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. The MPC optimization yields the reference wheel positions, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
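The core idea, tracking a reference wheel position subject to a convex-trapezoid reachable range written as linear inequalities, can be sketched as a small QP. The trapezoid geometry below and the use of SciPy's SLSQP as a stand-in QP solver are our illustrative assumptions, not the paper's controller:

```python
import numpy as np
from scipy.optimize import minimize

# Convex trapezoid |x| <= 1 - 0.5*y, 0 <= y <= 1, written as A @ p <= b
# (an illustrative reachable range, not the robot's actual geometry).
A = np.array([[0.0, -1.0], [0.0, 1.0], [1.0, 0.5], [-1.0, 0.5]])
b = np.array([0.0, 1.0, 1.0, 1.0])

def track_wheel(p_ref):
    """One MPC-style step: nearest admissible wheel position to the
    reference, i.e. minimise ||p - p_ref||^2 subject to A @ p <= b."""
    cons = {"type": "ineq", "fun": lambda p: b - A @ p}
    res = minimize(lambda p: float(np.sum((p - p_ref) ** 2)),
                   x0=np.zeros(2), method="SLSQP", constraints=[cons])
    return res.x
```

A reference inside the trapezoid is returned unchanged; one outside is projected onto the boundary, so the downstream inverse kinematics always receives a reachable target.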
Radon transport model into a porous ground layer of finite capacity
NASA Astrophysics Data System (ADS)
Parovik, Roman
2017-10-01
A model of radon transfer in a porous ground layer of finite capacity is considered. With the help of the Laplace integral transform, a numerical solution of this model is obtained, based on the construction of a generalized quadrature formula of the highest degree of accuracy for the transition to the original function. The calculated curves are constructed and investigated as functions of the diffusion and advection coefficients. The work also presents a mathematical model that describes the stick-slip effect, taking hereditarity (memory) into account; this model can be regarded as a mechanical model of earthquake preparation. For this model an explicit finite-difference scheme was proposed, from which the waveforms and phase trajectories of the hereditary stick-slip effect were constructed.
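Numerical inversion of a Laplace-domain solution back to the time domain is the key step above. The paper builds its own generalized quadrature; as a generic illustration, the classical Gaver-Stehfest quadrature (a different, standard real-axis method, shown here only as an accessible example) inverts a smooth transform with a short weighted sum:

```python
from math import factorial, log

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate the inverse Laplace transform of F at time t > 0
    by sampling F at the real points k * ln(2) / t."""
    a = log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

For instance, F(s) = 1/(s + 1) inverts to e^(-t) to several digits for smooth, non-oscillatory originals, which is the regime of diffusive pressure solutions.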
NASA Astrophysics Data System (ADS)
Li, Yingchun; Wu, Wei; Li, Bo
2018-05-01
Jointed rock masses during underground excavation are commonly located under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. The sinusoidal function naturally yields the decay of dilation angle as a function of relative normal stress. We assume that the magnitude of transitional stress is proportionate to the square root of asperity geometric area. The comparison between the analytical prediction and experimental data shows the reliability of the analytical model. All the parameters involved in the analytical model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.
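Two ingredients of the model above lend themselves to a compact sketch: a dilation angle that decays sinusoidally to zero at the transitional stress, and a transitional stress proportional to the square root of the asperity geometric area. The exact functional form and the proportionality constant `k` below are our illustrative assumptions, not the paper's calibrated expressions:

```python
import math

def transitional_stress(k, asperity_area):
    """Transitional stress taken proportional to the square root of the
    asperity geometric area (k is an illustrative constant)."""
    return k * math.sqrt(asperity_area)

def dilation_angle(d0, sigma_n, sigma_T):
    """Sinusoidal decay of the dilation angle from d0 at zero normal
    stress to zero at the transitional stress sigma_T."""
    if sigma_n >= sigma_T:
        return 0.0
    return d0 * math.cos(0.5 * math.pi * sigma_n / sigma_T)
```

The same decay would be applied separately to the waviness and unevenness contributions, consistent with the two-order asperity treatment described in the abstract.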
NASA Astrophysics Data System (ADS)
Kuznetsov, Alexander M.; Medvedev, Igor G.
2006-05-01
Effects of deviation from the Born-Oppenheimer approximation (BOA) on the non-adiabatic transition probability for the transfer of a quantum particle in condensed media are studied within an exactly solvable model. The particle and the medium are modeled by a set of harmonic oscillators. The dynamic interaction of the particle with a single local mode is treated explicitly without the use of BOA. Two particular situations (symmetric and non-symmetric systems) are considered. It is shown that the difference between the exact solution and the true BOA is negligibly small at realistic parameters of the model. However, the exact results differ considerably from those of the crude Condon approximation (CCA) which is usually considered in the literature as a reference point for BOA (Marcus-Hush-Dogonadze formula). It is shown that the exact rate constant can be smaller (symmetric system) or larger (non-symmetric one) than that obtained in CCA. The non-Condon effects are also studied.
Estimation of images degraded by film-grain noise.
Naderi, F; Sawchuk, A A
1978-04-15
Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First a detailed model for the overall photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first-order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for modeling and the adaptive estimation filter are presented.
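A locally adaptive Wiener filter of this family estimates nonstationary first-order statistics in a sliding window and shrinks each pixel toward its local mean in proportion to the estimated local SNR. The sketch below is the simplified additive-noise (Lee-type) version; handling the signal-dependent film-grain case discussed in the paper would make `noise_var` a function of the local mean rather than a constant:

```python
import numpy as np

def adaptive_wiener(y, noise_var, win=5):
    """Locally adaptive Wiener (Lee-type) filter for a 2-D image y:
    out = m + gain * (y - m), with m, v the local mean/variance and
    gain = max(v - noise_var, 0) / v, computed per pixel."""
    pad = win // 2
    yp = np.pad(y, pad, mode="reflect")
    out = np.empty_like(y, dtype=float)
    H, W = y.shape
    for i in range(H):
        for j in range(W):
            patch = yp[i:i + win, j:j + win]
            m, v = patch.mean(), patch.var()
            gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
            out[i, j] = m + gain * (y[i, j] - m)
    return out
```

In flat regions the gain falls toward zero and noise is averaged away; near edges the local variance exceeds the noise variance and detail is preserved, which is the adaptivity the abstract refers to.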
Seismic response of 3D steel buildings considering the effect of PR connections and gravity frames.
Reyes-Salazar, Alfredo; Bojórquez, Edén; Haldar, Achintya; López-Barraza, Arturo; Rivera-Salas, J Luz
2014-01-01
The nonlinear seismic responses of 3D steel buildings with perimeter moment resisting frames (PMRF) and interior gravity frames (IGF) are studied explicitly considering the contribution of the IGF. The effect on the structural response of the stiffness of the beam-to-column connections of the IGF, which is usually neglected, is also studied. It is commonly believed that the flexibility of shear connections is negligible and that 2D models can be used to properly represent 3D real structures. The results of the study indicate, however, that the moments developed on columns of IGF can be considerable and that modeling buildings as plane frames may result in very conservative designs. The contribution of IGF to the lateral structural resistance may be significant. The contribution increases when their connections are assumed to be partially restrained (PR). The increased participation of IGF when the stiffness of their connections is considered helps to counteract the nonconservative effect that results in practice when lateral seismic loads are not considered in IGF while designing steel buildings with PMRF. Thus, if the structural system under consideration is used, the three-dimensional model should be used in seismic analysis, and the IGF and the stiffness of their connections should be considered as part of the lateral resistance system.
Efficient dynamic modeling of manipulators containing closed kinematic loops
NASA Astrophysics Data System (ADS)
Ferretti, Gianni; Rocco, Paolo
An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open loop tree structures (any closed loop can in general be modeled by imposing some additional kinematic constraints on a suitable tree structure) is computed through an efficient Newton-Euler formulation; the constraint equations relative to the most commonly adopted closed chains in industrial manipulators are explicitly solved, thus overcoming the redundancy of Lagrange's multipliers method while avoiding the inefficiency due to a numerical solution of the implicit constraint equations. The constraint equations considered for an explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph type structures). Articulated gear mechanisms are actually used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of the manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and the efficiency of the proposed approach are shown through a simulation test.
Computation of confined coflow jets with three turbulence models
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of confined jets in a cylindrical duct is carried out to examine the performance of two recently proposed turbulence models: an RNG-based K-epsilon model and a realizable Reynolds stress algebraic equation model. The former is of the same form as the standard K-epsilon model but has different model coefficients. The latter uses an explicit quadratic stress-strain relationship to model the turbulent stresses and is capable of ensuring the positivity of each turbulent normal stress. The flow considered involves recirculation with unfixed separation and reattachment points and severe adverse pressure gradients, thereby providing a valuable test of the predictive capability of the models for complex flows. Calculations are performed with a finite-volume procedure. Numerical credibility of the solutions is ensured by using second-order accurate differencing schemes and sufficiently fine grids. Calculations with the standard K-epsilon model are also made for comparison. Detailed comparisons with experiments show that the realizable Reynolds stress algebraic equation model consistently works better than does the standard K-epsilon model in capturing the essential flow features, while the RNG-based K-epsilon model does not seem to give improvements over the standard K-epsilon model under the flow conditions considered.
On the stabilization of viscoelastic laminated beams with interfacial slip
NASA Astrophysics Data System (ADS)
Mustafa, Muhammad I.
2018-04-01
In this paper, we consider a viscoelastic laminated beam model. This structure is given by two identical uniform layers on top of each other, taking into account that an adhesive of small thickness is bonding the two surfaces and produces an interfacial slip. We use viscoelastic damping with general assumptions on the relaxation function and establish explicit energy decay result from which we can recover the optimal exponential and polynomial rates. Our result generalizes the earlier related results in the literature.
Precision tools and models to narrow in on the 750 GeV diphoton resonance
NASA Astrophysics Data System (ADS)
Staub, Florian; Athron, Peter; Basso, Lorenzo; Goodsell, Mark D.; Harries, Dylan; Krauss, Manuel E.; Nickel, Kilian; Opferkuch, Toby; Ubaldi, Lorenzo; Vicente, Avelino; Voigt, Alexander
2016-09-01
The hints for a new resonance at 750 GeV from ATLAS and CMS have triggered a significant amount of attention. Since the simplest extensions of the standard model cannot accommodate the observation, many alternatives have been considered to explain the excess. Here we focus on several proposed renormalisable weakly-coupled models and revisit results given in the literature. We point out that physically important subtleties are often missed or neglected. To facilitate the study of the excess we have created a collection of 40 model files, selected from recent literature, for the Mathematica package SARAH. With SARAH one can generate files to perform numerical studies using the tailor-made spectrum generators FlexibleSUSY and SPheno. These have been extended to automatically include crucial higher order corrections to the diphoton and digluon decay rates for both CP-even and CP-odd scalars. Additionally, we have extended the UFO and CalcHep interfaces of SARAH, to pass the precise information about the effective vertices from the spectrum generator to a Monte-Carlo tool. Finally, as an example to demonstrate the power of the entire setup, we present a new supersymmetric model that accommodates the diphoton excess, explicitly demonstrating how a large width can be obtained. We explicitly show several steps in detail to elucidate the use of these public tools in the precision study of this model.
Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium
NASA Astrophysics Data System (ADS)
González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César
2018-01-01
This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
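The explicit, fully implicit, and Crank-Nicolson schemes compared above are all special cases of the θ-method. Its behaviour is easiest to see on the scalar decay equation u' = -λu, a stand-in for one diffusive mode of the discretized dual-porosity model (this reduction is illustrative, not the paper's full scheme):

```python
def theta_step(u, lam, dt, theta):
    """One theta-method step for u' = -lam * u:
    theta = 0 is explicit Euler, 1 fully implicit, 0.5 Crank-Nicolson.
    For linear decay the implicit solve reduces to a division."""
    return u * (1.0 - (1.0 - theta) * lam * dt) / (1.0 + theta * lam * dt)

def integrate(u0, lam, dt, steps, theta):
    u = u0
    for _ in range(steps):
        u = theta_step(u, lam, dt, theta)
    return u
```

With a large step the explicit scheme (θ = 0) blows up while the implicit one (θ = 1) stays bounded, mirroring the abstract's observation that production-scale times are only feasible with implicit schemes; Crank-Nicolson (θ = 0.5) is second-order accurate but prone to the slowly damped spurious oscillations the authors report.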
An explicit closed-form analytical solution for European options under the CGMY model
NASA Astrophysics Data System (ADS)
Chen, Wenting; Du, Meiyu; Xu, Xiang
2017-01-01
In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process, and agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point not only involve properties of the function at that particular point, but also the information of the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we have managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of four key parameters of the CGMY model on European option prices are also provided.
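The "globalness" of the one-sided spatial-fractional derivatives in the FPDE is concrete in their standard discrete form: the Grünwald-Letnikov approximation sums function values over the whole domain to the left (or right) of the evaluation point. The sketch below (a generic textbook discretization, not the paper's closed-form solution) shows the left-sided case:

```python
def gl_weights(alpha, n):
    """Binomial weights (-1)^k * C(alpha, k) via the recurrence
    w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1.0 - alpha) / k)
    return w

def gl_derivative(f, x, alpha, h=1e-3):
    """Left-sided Grunwald-Letnikov fractional derivative of order
    alpha at x with lower terminal 0: every point f(x - k*h) back to
    the terminal contributes, first-order accurate in h."""
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(x - k * h) for k in range(n + 1)) / h ** alpha
```

For alpha = 1 only the first two weights survive and the formula collapses to an ordinary backward difference; for fractional alpha the full history enters, which is exactly why FPDE solvers lose the banded structure of integer-order schemes.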
Proposed best modeling practices for assessing the effects of ecosystem restoration on fish
Rose, Kenneth A; Sable, Shaye; DeAngelis, Donald L.; Yurek, Simeon; Trexler, Joel C.; Graf, William L.; Reed, Denise J.
2015-01-01
Large-scale aquatic ecosystem restoration is increasing and is often controversial because of the economic costs involved, with the focus of the controversies gravitating to the modeling of fish responses. We present a scheme for best practices in selecting, implementing, interpreting, and reporting of fish modeling designed to assess the effects of restoration actions on fish populations and aquatic food webs. Previous best practice schemes that tended to be more general are summarized, and they form the foundation for our scheme that is specifically tailored for fish and restoration. We then present a 31-step scheme, with supporting text and narrative for each step, which goes from understanding how the results will be used through post-auditing to ensure the approach is used effectively in subsequent applications. We also describe 13 concepts that need to be considered in parallel to these best practice steps. Examples of these concepts include: life cycles and strategies; variability and uncertainty; nonequilibrium theory; biological, temporal, and spatial scaling; explicit versus implicit representation of processes; and model validation. These concepts are often not considered or not explicitly stated, and casual treatment of them leads to miscommunication and misunderstandings, which, in turn, often underlie the resulting controversies. We illustrate a subset of these steps, and their associated concepts, using the three case studies of Glen Canyon Dam on the Colorado River, the wetlands of coastal Louisiana, and the Everglades. Use of our proposed scheme will require investment of additional time and effort (and dollars) to be done effectively. We argue that such an investment is well worth it and will more than pay back in the long run in effective and efficient restoration actions and likely avoided controversies and legal proceedings.
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
NASA Astrophysics Data System (ADS)
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, which is a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show the correct behaviour of the model sample at the macroscale.
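The hand-off between scale levels amounts to drawing material properties from the Weibull distribution fitted at the finer level and passing an effective value upward. A minimal sketch of that step (inverse-transform sampling; scale/shape values are illustrative, and the paper's actual parameter-fitting procedure is not reproduced):

```python
import math
import random

def weibull_strengths(scale, shape, n, seed=0):
    """Inverse-transform sampling of Weibull-distributed strengths:
    sigma = scale * (-ln U)**(1/shape), with U uniform on (0, 1]."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def effective_strength(samples):
    """Mean strength passed upward as the effective property of the
    next, coarser scale level."""
    return sum(samples) / len(samples)
```

The sample mean converges to scale * Gamma(1 + 1/shape), so repeating the draw-and-average step per level propagates both the central value and (via the fitted shape parameter) the scatter up to the macroscale.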
Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei
2013-01-01
This paper extends the image charge solvation model (ICSM) [J. Chem. Phys. 131, 154103 (2009)], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from ICSM simulations as a reference. We find that, with improved computational efficiency due to smaller simulation cells and consequently fewer explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water, at least for the systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated. PMID:23913979
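For the original spherical-cavity ICSM that this work generalizes, the single-image construction is classical (Friedman's image approximation). A minimal sketch with illustrative charge, cavity, and dielectric values; the GICSM's spheroidal and ellipsoidal image formulas are more involved and are not reproduced here:

```python
def friedman_image(q, s, a, eps_i=1.0, eps_o=80.0):
    # Image strength and position for a charge q at distance s (< a) from
    # the centre of a spherical cavity of radius a (eps_i inside, eps_o outside):
    #   q_im = -q * (eps_o - eps_i)/(eps_o + eps_i) * a/s   at   r_im = a**2/s
    gamma = (eps_o - eps_i) / (eps_o + eps_i)
    return -q * gamma * a / s, a * a / s

# Illustrative numbers: unit charge 5 A from the centre of a 10 A cavity.
q_im, r_im = friedman_image(q=1.0, s=5.0, a=10.0)
print(q_im, r_im)   # image lies outside the cavity (r_im > a)
```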
Graph theory as a proxy for spatially explicit population models in conservation planning.
Minor, Emily S; Urban, Dean L
2007-09-01
Spatially explicit population models (SEPMs) are often considered the best way to predict and manage species distributions in spatially heterogeneous landscapes. However, they are computationally intensive and require extensive knowledge of species' biology and behavior, limiting their application in many cases. An alternative to SEPMs is graph theory, which has minimal data requirements and efficient algorithms. Although only recently introduced to landscape ecology, graph theory is well suited to ecological applications concerned with connectivity or movement. This paper compares the performance of graph theory to a SEPM in selecting important habitat patches for Wood Thrush (Hylocichla mustelina) conservation. We use both models to identify habitat patches that act as population sources and persistent patches and also use graph theory to identify patches that act as stepping stones for dispersal. Correlations of patch rankings were very high between the two models. In addition, graph theory offers the ability to identify patches that are very important to habitat connectivity and thus long-term population persistence across the landscape. We show that graph theory makes very similar predictions in most cases and in other cases offers insight not available from the SEPM, and we conclude that graph theory is a suitable and possibly preferable alternative to SEPMs for species conservation in heterogeneous landscapes.
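The graph-theoretic side of the comparison is straightforward to reproduce in outline: patches become nodes, links between patches within dispersal range become edges, and betweenness centrality flags stepping-stone patches. A minimal sketch with a hypothetical five-patch landscape (networkx assumed):

```python
import networkx as nx

# Hypothetical habitat-patch graph: nodes are patches (with an area
# attribute), edges connect patches within the species' dispersal range.
G = nx.Graph()
G.add_nodes_from([(1, {"area": 12.0}), (2, {"area": 3.5}), (3, {"area": 8.0}),
                  (4, {"area": 1.2}), (5, {"area": 9.5})])
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5)])

# Stepping stones: patches that carry many shortest paths between others.
# High betweenness combined with small area flags connectivity-critical
# patches that a purely demographic (SEPM) ranking could miss.
bc = nx.betweenness_centrality(G)
stepping_stones = sorted(bc, key=bc.get, reverse=True)
print(stepping_stones[:2])
```

This is the kind of low-data-requirement analysis the abstract contrasts with a full SEPM: only patch locations and a dispersal threshold are needed.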
van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H
2014-03-01
A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiate between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first and second year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. The implications of these findings suggest support of an explicit self-esteem intervention to prevent increases in MDD and SAD symptomatology in non-clinical adolescents. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh
2009-05-01
Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical / molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, bothmore » COSMO and QM/MM-FEP reproduce Delta Gobs within an error of about 2kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.« less
A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty
NASA Astrophysics Data System (ADS)
Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl
2012-05-01
The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and comparisons of experimental results with simulations performed in ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
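The efficiency claim can be illustrated with a one-parameter toy problem: a polynomial chaos expansion in probabilists' Hermite polynomials recovers the response statistics from about 20 model evaluations, where Monte Carlo needs tens of thousands. The response function below is an illustrative stand-in, not the paper's vehicle model:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy uncertain response: y = exp(0.3*xi) with terrain parameter xi ~ N(0, 1).
f = lambda xi: np.exp(0.3 * xi)

# Gauss-Hermite quadrature for the probabilists' weight exp(-x^2/2);
# normalize the weights so they sum to 1 (they sum to sqrt(2*pi) raw).
x, w = He.hermegauss(20)
w = w / np.sqrt(2 * np.pi)

# Spectral (polynomial chaos) coefficients c_k = E[f * He_k] / k!
order = 6
c = [np.sum(w * f(x) * He.hermeval(x, [0] * k + [1])) / math.factorial(k)
     for k in range(order + 1)]

pc_mean = c[0]                                              # E[y]
pc_var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, order + 1))

# Monte Carlo needs far more model evaluations for comparable accuracy.
rng = np.random.default_rng(0)
mc = f(rng.standard_normal(100_000))
print(pc_mean, pc_var, mc.mean(), mc.var())
```

For this response the exact statistics are known (E[y] = exp(0.045)), so the 20-evaluation chaos expansion can be checked directly against the 100 000-sample Monte Carlo estimate.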
Mitcham, Carl
2007-12-01
Qualitative research struggles against a tide of quantitative methods. To assist in this struggle, it is useful to consider the historical and philosophical origins of quantitative methods as well as criticisms that have been raised against them. Although these criticisms have often been restricted to discussions in the philosophy of science, they have become increasingly prominent in debates regarding science policy. This article thus reviews current science policy debates concerning scientific autonomy and the linear model of science-society relationships. Then, having considered the multiple meanings of quality, it argues for a science policy reassessment of quantitative research, for deeper engagements between science policy and the social sciences, and finally, for a more explicit alliance between science policy and qualitative methods.
Exact solution of a ratchet with switching sawtooth potential
NASA Astrophysics Data System (ADS)
Saakian, David B.; Klümper, Andreas
2018-01-01
We consider the flashing potential ratchet model with a general asymmetric potential. Using Bloch functions, we derive equations which allow for the calculation of both the ratchet's flux and higher moments of the distribution for rather general potentials. We indicate how to derive the optimal transition rates for maximal velocity of the ratchet. We calculate explicitly the exact velocity of a ratchet with a simple sawtooth potential from the solution of a system of 8 linear algebraic equations. Using Bloch functions, we derive the equations for the ratchet with potentials changing periodically with time. We also consider the case of a ratchet evolving under two different potentials, each acting for random periods of time.
A review of underwater bio-mimetic propulsion: cruise and fast-start
NASA Astrophysics Data System (ADS)
Chao, Li-Ming; Cao, Yong-Hui; Pan, Guang
2017-08-01
This paper reviews recent developments in the understanding of underwater bio-mimetic propulsion. Two important modes of underwater propulsion are considered: cruise and fast-start. First, we introduce the progression of bio-mimetic propulsion, especially underwater propulsion, and touch upon some basic concepts. Second, we introduce the understanding of flapping foils, considered one of the most efficient cruise styles among aquatic animals, and elucidate the effects of kinematics and of the shape and flexibility of foils on thrust generation. Fast-start propulsion is typically exhibited in predator-prey encounters, and we provide an explicit introduction to the corresponding zoological experiments and numerical simulations. We also offer some predictions about underwater bio-mimetic propulsion.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS System. The analysis of the two tradeoffs can help establish new, efficient simplified LOLP formulations and new SR optimization models.
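The LOLP computation that makes such models expensive is typically built from a capacity outage probability table, assembled by convolving unit outage distributions. A minimal sketch with three hypothetical units; the capacities and forced outage rates are illustrative:

```python
import numpy as np

# Capacity outage probability table by convolution: each committed unit i
# has capacity caps[i] (MW) and forced outage rate fors[i].
caps = [100, 100, 50]            # MW, illustrative
fors = [0.05, 0.05, 0.08]        # forced outage rates, illustrative

total = sum(caps)
p = np.zeros(total + 1)          # p[k] = Pr(exactly k MW on outage)
p[0] = 1.0
for cap, q in zip(caps, fors):
    nxt = (1 - q) * p                          # unit available
    nxt[cap:] += q * p[:total + 1 - cap]       # unit on forced outage
    p = nxt

def lolp(load):
    # Loss of load occurs if available capacity (total - outage) < load;
    # assumes 0 < load <= total.
    return p[total - load + 1:].sum() if load > 0 else 0.0

print(lolp(200))
```

A maximum-LOLP constraint in the UC model would compare `lolp(load)` (recomputed for each candidate commitment) against the reliability target, which is exactly the repeated computation the simplified formulations try to avoid.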
Quantum thermodynamics of the resonant-level model with driven system-bath coupling
NASA Astrophysics Data System (ADS)
Haughian, Patrick; Esposito, Massimiliano; Schmidt, Thomas L.
2018-02-01
We study nonequilibrium thermodynamics in a fermionic resonant-level model with arbitrary coupling strength to a fermionic bath, taking the wide-band limit. In contrast to previous theories, we consider a system where both the level energy and the coupling strength depend explicitly on time. We find that, even in this generalized model, consistent thermodynamic laws can be obtained, up to the second order in the drive speed, by splitting the coupling energy symmetrically between system and bath. We define observables for the system energy, work, heat, and entropy, and calculate them using nonequilibrium Green's functions. We find that the observables fulfill the laws of thermodynamics, and connect smoothly to the known equilibrium results.
The circulation of a baroclinic ocean around planetary scale islands with topography
NASA Astrophysics Data System (ADS)
Pedlosky, J.
2010-12-01
The circulation around planetary-scale islands is considered for an island with a topographic skirt in a stratified ocean. The simplest model of the ocean is a two-layer ocean in a circular domain with the island in the center. When the girdling topography is steep, closed geostrophic contours guide the flow in each of the two layers, although that guiding occurs at different horizontal locations in each layer. For flows with weak dissipation, modeled as bottom and interfacial friction, explicit formulae are given for the dependence of the streamfunction in each layer on the ambient potential vorticity, f/(layer depth). Numerical model calculations will be presented to supplement the analytical results.
An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1992-01-01
Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration because of the hotspot effect, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical tests show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.
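The inversion step can be sketched with SciPy's derivative-free Powell method, the same algorithm family named in the abstract; the two-parameter reflectance model below is an illustrative stand-in for the parametric canopy model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the parametric canopy model: reflectance as a function
# of view zenith angle theta and two "biospheric" parameters (a, b).
def model(theta, a, b):
    return a * np.cos(theta) + b * np.cos(theta) ** 2

theta = np.radians(np.arange(0, 60, 5))      # view angles up to 55 deg
truth = model(theta, 0.30, 0.12)             # synthetic "measured" reflectances

# Powell's method is derivative-free, which suits canopy models that are
# only available as numerical procedures.
cost = lambda p: np.sum((model(theta, *p) - truth) ** 2)
fit = minimize(cost, x0=[0.1, 0.1], method="Powell")
print(fit.x)                                 # recovered (a, b)
```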
Explicit filtering in large eddy simulation using a discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Brazell, Matthew J.
The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, in which the solution is approximated using a polynomial basis and the higher modes of the solution correspond to a higher-order polynomial basis. By removing high-order modes, the filtered solution contains low-order frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove energy accumulated in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant-coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant-coefficient Smagorinsky model is overly dissipative; this is generally undesirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected.
The dynamic Heinz model, which is based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
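The modal filtering idea, transforming nodal values to a hierarchical polynomial basis, discarding modes above a cutoff, and transforming back, can be sketched in one dimension with a Legendre basis. The node placement and basis choice here are illustrative:

```python
import numpy as np

# Modal low-pass filter in a DG element: expand nodal values in Legendre
# modes, zero the modes above a cutoff, and transform back to nodal values.
def modal_filter(u_nodal, x, cutoff):
    p = len(x) - 1                                  # polynomial order
    V = np.polynomial.legendre.legvander(x, p)      # modal -> nodal map
    u_modal = np.linalg.solve(V, u_nodal)           # nodal -> modal
    u_modal[cutoff + 1:] = 0.0                      # discard high-order modes
    return V @ u_modal

# Element nodes and a field with deliberate high-order content:
x = np.cos(np.pi * np.arange(8) / 7)                # Chebyshev points on [-1, 1]
coeffs = [1.0, 0.5, 0.0, 0.0, 0.2, 0.0, 0.0, 0.3]   # Legendre modes 0..7
u = np.polynomial.legendre.legval(x, coeffs)

uf = modal_filter(u, x, cutoff=3)                   # only modes 0..3 survive
```

After filtering, `uf` coincides with the field rebuilt from modes 0 and 1 alone (the only nonzero retained modes), which is the low-pass behaviour described in the abstract.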
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
ERIC Educational Resources Information Center
Gokgoz Kurt, Burcu; Medlin, Julie; Tessarolo, Ashley
2014-01-01
Considering the contradictory research on explicit teaching of suprasegmentals, the present study aims to investigate the effects of explicit instruction on L2 English learners' perception of prosodically ambiguous intonation patterns, as well as the possible effects of reported musical familiarity on intonation acquisition. A control group and a…
TTLEM - an implicit-explicit (IMEX) scheme for modelling landscape evolution in MATLAB
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang
2016-04-01
Landscape evolution models (LEMs) are essential to unravel interdependent earth surface processes. They have proven very useful for bridging several temporal and spatial scales and have been successfully used to integrate existing empirical datasets. There is a growing consensus that landscapes evolve at least as much in the horizontal as in the vertical direction, calling for an efficient implementation of dynamic drainage networks. Here we present a spatially explicit LEM, which is based on the object-oriented function library TopoToolbox 2 (Schwanghart and Scherler, 2014). Similar to other LEMs, rivers are considered to be the main drivers of simulated landscape evolution, as they transmit pulses of tectonic perturbations and set the base level of the surrounding hillslopes. Highly performant graph algorithms facilitate efficient updates of the flow directions to account for planform changes in the river network and the calculation of flow-related terrain attributes. We implement the model using an implicit-explicit (IMEX) scheme, i.e. different integrators are used for different terms in the diffusion-incision equation. While linear diffusion is solved using an implicit scheme, we calculate incision explicitly. Contrary to previously published LEMs, however, river incision is solved using a total volume method which is total variation diminishing, in order to prevent numerical diffusion when solving the stream power law (Campforts and Govers, 2015). We show that the use of this updated numerical scheme alters both landscape topography and catchment-wide erosion rates at a geological time scale. Finally, the availability of a graphical user interface facilitates user interaction, making the tool very useful for both research and didactical purposes. References Campforts, B., Govers, G., 2015. Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law. J. Geophys. Res. Earth Surf. 120, 1189-1205.
doi:10.1002/2014JF003376 Schwanghart, W., Scherler, D., 2014. TopoToolbox 2 - MATLAB-based software for topographic analysis and modeling in Earth surface sciences. Earth Surf. Dyn. 2, 1-7. doi:10.5194/esurf-2-1-2014
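The IMEX split, explicit stream-power incision and implicit linear diffusion, can be sketched for a single 1-D river profile. All parameter values and the drainage-area field below are illustrative, and the incision term here is a plain upwind discretization, not TTLEM's TVD total-volume method:

```python
import numpy as np
from scipy.linalg import solve_banded

# One IMEX step for a 1-D profile z(x): explicit stream-power incision
# K*A**m*S**n, then implicit linear hillslope diffusion D*z_xx.
def imex_step(z, A, dx, dt, K=1e-5, m=0.5, n=1.0, D=0.01):
    # explicit incision (upwind slope, baselevel fixed at index 0)
    S = np.maximum(np.diff(z), 0.0) / dx
    z = z.copy()
    z[1:] -= dt * K * A[1:] ** m * S ** n
    # implicit diffusion: solve (I - dt*D*Lap) z_new = z, fixed-value ends
    N = len(z)
    r = dt * D / dx ** 2
    ab = np.zeros((3, N))
    ab[0, 1:] = -r                    # super-diagonal
    ab[1, :] = 1 + 2 * r              # diagonal
    ab[2, :-1] = -r                   # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0        # Dirichlet boundary rows
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, z)

x = np.linspace(0, 1000, 101)         # m
z = 0.001 * x                         # uniform initial slope, outlet at x=0
A = 1e6 - 1e3 * x                     # drainage area shrinking upstream (m^2)
z1 = imex_step(z, A, dx=10.0, dt=100.0)
print(z1[0], z1[50])
```

The implicit treatment of diffusion removes its stability limit on the time step, so the step size is constrained only by the explicit incision term, which is the motivation for the IMEX splitting.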
NASA Astrophysics Data System (ADS)
Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios
2016-12-01
The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
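The core statistical idea, marginalizing the likelihood over the model's own variability (simulation frames) rather than comparing to a single time-averaged image, can be sketched in one dimension with Gaussian stand-ins for frames and noise. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy version: for parameter theta the "model" is an ensemble of simulation
# frames (intrinsic variability); the datum d is one noisy snapshot.
def frames(theta, n=200):
    return theta + rng.standard_normal(n)     # frame-to-frame variability

# Marginal likelihood: L(theta) = mean over frames of N(d | frame, sigma).
def log_like(d, theta, sigma=0.5):
    f = frames(theta)
    return np.log(np.mean(np.exp(-0.5 * ((d - f) / sigma) ** 2)))

d = 3.2                                       # mock observation
thetas = np.linspace(0.0, 6.0, 61)
post = np.array([log_like(d, t) for t in thetas])
best = thetas[np.argmax(post)]
print(best)
```

A time-independent comparison would instead use only the mean frame, shrinking the effective error bar and biasing the inferred parameter, which is the failure mode the abstract reports.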
Embedded-explicit emergent literacy intervention I: Background and description of approach.
Justice, Laura M; Kaderavek, Joan N
2004-07-01
This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.
Neal D. Niemuth; Michael E. Estey; Charles R. Loesch
2005-01-01
Conservation planning for birds is increasingly focused on landscapes. However, little spatially explicit information is available to guide landscape-level conservation planning for many species of birds. We used georeferenced 1995 Breeding Bird Survey (BBS) data in conjunction with land-cover information to develop a spatially explicit habitat model predicting the...
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special-purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
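The symbolic pipeline, differentiating a strain-energy potential in principal stretches and emitting Fortran, can be sketched with SymPy in place of the special-purpose symbolic tools used in the paper. The polynomial potential below and the unconstrained-stress formula sigma_i = (l_i/J) dW/dl_i are illustrative, not the paper's specific material model:

```python
import sympy as sp

# Principal stretches and an illustrative nonseparable polynomial potential
# W(l1, l2, l3); J = l1*l2*l3 is the volume ratio.
l1, l2, l3 = sp.symbols("l1 l2 l3", positive=True)
W = (l1**2 + l2**2 + l3**2 - 3) + sp.Rational(1, 10) * (l1*l2*l3 - 1)**2

J = l1 * l2 * l3
# Principal Cauchy stresses for an unconstrained hyperelastic solid.
sigma = [sp.simplify(li / J * sp.diff(W, li)) for li in (l1, l2, l3)]

# Automatic Fortran generation, mirroring the paper's code-generation step.
f_code = sp.fcode(sigma[0], standard=95)
print(sigma[0])
print(f_code)
```

The same symbolic differentiation applied once more yields the material tangent stiffness terms, which is what makes the generated code valid over the entire deformation range.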
Deep Learning with Hierarchical Convolutional Factor Analysis
Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence
2013-01-01
Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342
Silvia, Paul J
2012-07-01
Using motivational intensity theory as a framework, three experiments examined how implicit self-focus (manipulated with masked first-name priming) and explicit self-focus (manipulated with a large mirror) influence effort-related cardiovascular activity, particularly systolic blood pressure reactivity. Theories of self-focused attention suggest that both implicit and explicit self-focus bring about self-evaluation and thus make meeting a goal more important. For a "do your best" task of unfixed difficulty, implicit and explicit self-focus both increased effort (Experiment 1) compared to a control condition. For a task that varied in difficulty, implicit and explicit self-focus promoted more effort as the task became increasingly hard (Experiments 2 and 3). Taken together, the findings suggest that implicit and explicit self-processes share a similar motivational architecture. The discussion explores the value of integrating motivational intensity theory with self-awareness theory and considers the emerging interest in implicit aspects of effort regulation. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Diaz, Manuel A.; Solovchuk, Maxim A.; Sheu, Tony W. H.
2018-06-01
A nonlinear system of partial differential equations capable of describing the nonlinear propagation and attenuation of finite amplitude perturbations in thermoviscous media is presented. This system constitutes a full nonlinear wave model that has been formulated in conservation form. Initially, the model is investigated analytically in the inviscid limit, where it is found that the resulting flux function fulfills the conditions of the Lax-Wendroff theorem, and the scheme can match the solutions of the Westervelt and Burgers equations numerically. Here, high-order numerical descriptions of strongly nonlinear wave propagation become of great interest. To that end, we consider finite difference formulations of the weighted essentially non-oscillatory (WENO) schemes associated with explicit strong-stability-preserving Runge-Kutta (SSP-RK) time integration methods. Although this strategy is known to be computationally demanding, it is found to be effective when implemented on graphics processing units (GPUs). As we consider wave propagation in unbounded domains, perfectly matched layers (PML) have also been considered in this work. The proposed system model is validated and illustrated using one- and two-dimensional benchmark test cases proposed in the literature for nonlinear acoustic propagation in homogeneous thermoviscous media.
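The explicit SSP-RK integrator most commonly paired with WENO space operators is the third-order Shu-Osher scheme, sketched below and checked on a scalar decay equation; the WENO flux-divergence operator itself is not reproduced here:

```python
import numpy as np

# Third-order strong-stability-preserving Runge-Kutta (Shu-Osher form):
#   u1     = u + dt*L(u)
#   u2     = 3/4*u + 1/4*(u1 + dt*L(u1))
#   u_next = 1/3*u + 2/3*(u2 + dt*L(u2))
def ssp_rk3(u, L, dt):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Order check on u' = -u (exact solution exp(-t)); in the paper's setting
# L(u) would instead be the WENO discretization of the flux divergence.
u, dt = 1.0, 0.1
for _ in range(10):
    u = ssp_rk3(u, lambda v: -v, dt)
print(u, np.exp(-1.0))
```

The scheme is a convex combination of forward-Euler steps, which is what preserves the non-oscillatory property of the WENO spatial operator under the usual CFL restriction.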
Using travel times to simulate multi-dimensional bioreactive transport in time-periodic flows.
Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A
2016-04-01
In travel-time models, the spatially explicit description of reactive transport is replaced by associating reactive-species concentrations with the travel time or groundwater age at all locations. These models have been shown to be adequate for reactive transport in river-bank filtration under steady-state flow conditions. Dynamic hydrological conditions, however, can lead to fluctuations of infiltration velocities, putting the validity of travel-time models into question. In transient flow, the local travel-time distributions change with time. We show that a modified version of travel-time based reactive transport models is valid if only the magnitude of the velocity fluctuates, whereas its spatial orientation remains constant. We simulate nonlinear, one-dimensional, bioreactive transport involving oxygen, nitrate, dissolved organic carbon, and aerobic and denitrifying bacteria, considering periodic fluctuations of velocity. These fluctuations make the bioreactive system pulsate: the aerobic zone shrinks at times of low velocity and expands at times of high velocity. For the case of diurnal fluctuations, the biomass concentrations cannot follow the hydrological fluctuations, and a transition zone containing both aerobic and obligatory denitrifying bacteria is established, whereas a clear separation of the two types of bacteria prevails in the case of seasonal velocity fluctuations. We map the 1-D results to a heterogeneous, two-dimensional domain by means of the mean groundwater age for steady-state flow in both domains. The mapped results are compared to simulation results of spatially explicit, two-dimensional, advective-dispersive-bioreactive transport subject to the same relative fluctuations of velocity as in the one-dimensional model. The agreement between the mapped 1-D and the explicit 2-D results is excellent.
We conclude that travel-time models of nonlinear bioreactive transport are adequate in systems of time-periodic flow if the flow direction does not change. Copyright © 2016 Elsevier B.V. All rights reserved.
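The mapping step, solving once along the travel-time coordinate and then looking results up through a mean-age field, can be sketched with first-order decay standing in for the nonlinear bioreactive chemistry. All fields and rates below are illustrative:

```python
import numpy as np

# 1-D solution along the travel-time (groundwater-age) coordinate.
tau = np.linspace(0.0, 50.0, 501)       # travel time / age (days)
k = 0.12                                # decay rate (1/day), illustrative
c_1d = np.exp(-k * tau)                 # c(tau) for first-order decay

# Hypothetical 2-D mean groundwater-age field (days) for a heterogeneous
# aquifer; in practice this comes from a steady-state flow/age simulation.
ny, nx = 20, 40
xx = np.linspace(0.0, 1.0, nx)
rowfac = 1.0 + 0.2 * np.sin(np.linspace(0.0, 3.0, ny))
age_2d = 40.0 * xx[None, :] * rowfac[:, None]

# Map: concentration in a cell = 1-D solution evaluated at that cell's age.
c_2d = np.interp(age_2d, tau, c_1d)
print(c_2d.shape)
```

The single 1-D solve replaces a full multi-dimensional reactive-transport simulation; only the age field, not the chemistry, has to be computed in the spatially explicit domain.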
Welch, Vivian A; Akl, Elie A; Pottie, Kevin; Ansari, Mohammed T; Briel, Matthias; Christensen, Robin; Dans, Antonio; Dans, Leonila; Eslava-Schmalbach, Javier; Guyatt, Gordon; Hultcrantz, Monica; Jull, Janet; Katikireddi, Srinivasa Vittal; Lang, Eddy; Matovinovic, Elizabeth; Meerpohl, Joerg J; Morton, Rachael L; Mosdol, Annhild; Murad, M Hassan; Petkovic, Jennifer; Schünemann, Holger; Sharaf, Ravi; Shea, Bev; Singh, Jasvinder A; Solà, Ivan; Stanev, Roger; Stein, Airton; Thabaneii, Lehana; Tonia, Thomy; Tristan, Mario; Vitols, Sigurd; Watine, Joseph; Tugwell, Peter
2017-10-01
The aim of this paper is to describe a conceptual framework for how to consider health equity in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) guideline development process. Consensus-based guidance developed by the GRADE working group members and other methodologists. We developed consensus-based guidance to help address health equity when rating the certainty of synthesized evidence (i.e., quality of evidence). When health inequity is determined to be a concern by stakeholders, we propose five methods for explicitly assessing health equity: (1) include health equity as an outcome; (2) consider patient-important outcomes relevant to health equity; (3) assess differences in the relative effect size of the treatment; (4) assess differences in baseline risk and the differing impacts on absolute effects; and (5) assess indirectness of evidence to disadvantaged populations and/or settings. The most important priority for research on health inequity and guidelines is to identify and document examples where health equity has been considered explicitly in guidelines. Although there is a weak scientific evidence base for assessing health equity, this should not discourage the explicit consideration of how guidelines and recommendations affect the most vulnerable members of society. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
A hybrid model for traffic flow and crowd dynamics with random individual properties.
Schleper, Veronika
2015-04-01
Based on an established mathematical model for the behavior of large crowds, a new model is derived that is able to take into account the statistical variation of individual maximum walking speeds. The same model is shown to be valid also in traffic flow situations, where, for instance, the statistical variation of preferred maximum speeds can be considered. The model involves explicit bounds on the state variables, so a special Riemann solver is derived that is proved to respect the state constraints. Some care is devoted to a valid construction of random initial data, necessary for the use of the new model. The article also includes a numerical method that is shown to respect the bounds on the state variables, together with illustrative numerical examples explaining the properties of the new model in comparison with established models.
Effective Reading and Writing Instruction: A Focus on Modeling
ERIC Educational Resources Information Center
Regan, Kelley; Berkeley, Sheri
2012-01-01
When providing effective reading and writing instruction, teachers need to provide explicit modeling. Modeling is particularly important when teaching students to use cognitive learning strategies. Examples of how teachers can provide specific, explicit, and flexible instructional modeling are presented in the context of two evidence-based…
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors and, in particular, allow us to explicitly incorporate the duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular, we provide conditions under which it is appropriate to use the standard mass-action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the mass-action model gives clear, rigorous conditions under which the mass-action model is accurate. PMID:22911242
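The mass-action SIR model referred to above can be sketched in a few lines; this is a generic illustration with hypothetical parameter values, not the formulation of [11].

```python
# Minimal sketch of the standard mass-action SIR model discussed above.
# beta and gamma are hypothetical illustration values, not taken from [11].

def sir(beta, gamma, s0, i0, r0, dt=0.001, t_max=100.0):
    """Forward-Euler integration of dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I (population fractions)."""
    s, i, r = s0, i0, r0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i * dt   # mass-action incidence
        rec = gamma * i * dt          # recovery
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
    return s, i, r

# With R0 = beta/gamma = 3 > 1, most of the population is eventually infected.
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, r0=0.0)
print(f"final susceptible fraction: {s:.3f}")
```

The conditions discussed in the abstract concern when this simple model is an adequate substitute for the detailed edge-based models.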
Blue water scarcity and the economic impacts of future agricultural trade and demand
NASA Astrophysics Data System (ADS)
Schmitz, Christoph; Lotze-Campen, Hermann; Gerten, Dieter; Dietrich, Jan Philipp; Bodirsky, Benjamin; Biewald, Anne; Popp, Alexander
2013-06-01
Increasing demand for agricultural goods will raise the pressure on global water resources over the coming decades. In order to quantify these effects, we have developed a new agroeconomic water scarcity indicator that explicitly considers economic processes in the agricultural system. The indicator is based on the water shadow price generated by an economic land use model linked to a global vegetation-hydrology model. Irrigation efficiency is implemented as a dynamic input depending on the level of economic development. We are able to simulate the heterogeneous distribution of water supply and agricultural water demand for irrigation through the spatially explicit representation of agricultural production. This allows us to identify regional hot spots of blue water scarcity and to derive explicit shadow prices for water. We generate scenarios based on moderate policies regarding future trade liberalization and the control of livestock-based consumption, dependent on different population and gross domestic product (GDP) projections. Results indicate increased water scarcity in the future, especially in South Asia, the Middle East, and North Africa. In general, water shadow prices decrease with increasing liberalization, foremost in South Asia, Southeast Asia, and the Middle East. Policies to reduce livestock consumption in developed countries not only lower the domestic pressure on water but also alleviate water scarcity to a large extent in developing countries. It is shown that either of the two policy options alone would be insufficient for most regions to keep water scarcity in 2045 at levels comparable to 2005.
Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints
NASA Astrophysics Data System (ADS)
Dongnan, L.; Wan, W.; Zhao, J.
2017-12-01
Multi-objective reservoir operation is increasingly required to consider environmental flows that support ecosystem health. The Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, making it difficult to direct reservoir releases. In an attempt to incorporate the benefit of environmental flow alongside economic achievement, a two-objective reservoir optimization model is developed, and all 33 IHA hydrologic parameters are explicitly formulated as constraints. The economic benefit is defined by hydropower production (HP), while the environmental-flow benefit is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by principal component analysis. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located upstream in the Mekong basin, China, shows: 1. A Pareto frontier is formed between maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, which uses the IHA parameters as constraints, obtains the best combined economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not on the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI reach acceptable values. Finally, to make the model easier to apply in everyday practice, a simplified operation rule curve is extracted.
NASA Astrophysics Data System (ADS)
Hamzah, Afiq; Hamid, Fatimah A.; Ismail, Razali
2016-12-01
An explicit solution for long-channel surrounding-gate (SRG) MOSFETs is presented, from intrinsic to heavily doped body, including the effects of interface traps and fixed oxide charges. The solution is based on the core SRG MOSFET model of the Unified Charge Control Model (UCCM) for heavily doped conditions. The UCCM model of highly doped SRG MOSFETs is derived to obtain the exact equivalent expression as in the undoped case. Taking advantage of the undoped explicit charge-based expression, the asymptotic limits below and above threshold have been redefined to include the effect of trap states for heavily doped cases. After solving the asymptotic limits, an explicit mobile charge expression is obtained which includes the trap state effects. The explicit mobile charge model shows very good agreement with numerical simulation over practical terminal voltages, doping concentrations, geometry effects, and trap state effects due to the fixed oxide charges and interface traps. The drain current is then obtained using the Pao-Sah dual integral, expressed as a function of the inversion charge densities at the source/drain ends. The drain current agrees well with the implicit solution and numerical simulation for all regions of operation without employing any empirical parameters. A comparison with previous explicit models has been conducted to verify the competency of the proposed model at a doping concentration of 1×10^19 cm^-3; the proposed model has advantages in simplicity and accuracy at higher doping concentrations.
What do we know about implicit false-belief tracking?
Schneider, Dana; Slaughter, Virginia P; Dux, Paul E
2015-02-01
There is now considerable evidence that neurotypical individuals track the internal cognitions of others, even in the absence of instructions to do so. This finding has prompted the suggestion that humans possess an implicit mental state tracking system (implicit Theory of Mind, ToM) that exists alongside a system that allows the deliberate and explicit analysis of the mental states of others (explicit ToM). Here we evaluate the evidence for this hypothesis and assess the extent to which implicit and explicit ToM operations are distinct. We review evidence showing that adults can indeed engage in ToM processing even without being conscious of doing so. However, at the same time, there is evidence that explicit and implicit ToM operations share some functional features, including drawing on executive resources. Based on the available evidence, we propose that implicit and explicit ToM operations overlap and should only be considered partially distinct.
Explicit and Implicit Emotion Regulation: A Dual-Process Framework
Gyurak, Anett; Gross, James J.; Etkin, Amit
2012-01-01
It is widely acknowledged that emotions can be regulated in an astonishing variety of ways. Most research to date has focused on explicit (effortful) forms of emotion regulation. However, there is growing research interest in implicit (automatic) forms of emotion regulation. To organize emerging findings, we present a dual-process framework that integrates explicit and implicit forms of emotion regulation, and argue that both forms of regulation are necessary for well-being. In the first section of this review, we provide a broad overview of the construct of emotion regulation, with an emphasis on explicit and implicit processes. In the second section, we focus on explicit emotion regulation, considering both neural mechanisms that are associated with these processes and their experiential and physiological consequences. In the third section, we turn to several forms of implicit emotion regulation, and integrate the burgeoning literature in this area. We conclude by outlining open questions and areas for future research. PMID:21432682
From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction
NASA Astrophysics Data System (ADS)
de Tilière, Béatrice
2013-04-01
Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version 𝒢 of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition only depends on the respective fundamental domains, and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain 𝒢_1. Our main result consists in explicitly constructing CRSFs of 𝒢_1 counted by the dimer characteristic polynomial, from CRSFs of G_1, where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002); thus proving a relation on the level of configurations between two well-known 2-dimensional critical models.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is the random switching of transcription on and off. Under the condition that the transition rates between on and off are constant, we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution in this setting as well. These findings in turn allow, for example, easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
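As a hypothetical illustration of the mechanism described above (random on/off switching of transcription with deterministic mRNA production and decay), the stationary Beta statistics can be checked numerically; all rate values below are assumptions, not parameters from the paper.

```python
import numpy as np

# Illustration of the model sketched above: transcription switches on/off
# at random (rates k_on, k_off), mRNA is produced at rate s while "on" and
# degrades at rate d. For constant rates the scaled stationary amount
# follows a Beta(k_on/d, k_off/d) distribution; all values are hypothetical.

def simulate_mrna(k_on, k_off, s, d, t_max=2000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    gene_on, m, samples = False, 0.0, []
    for step in range(int(t_max / dt)):
        rate = k_off if gene_on else k_on
        if rng.random() < rate * dt:      # random on/off switching
            gene_on = not gene_on
        m += (s * gene_on - d * m) * dt   # dm/dt = s*g(t) - d*m
        if step * dt > 0.1 * t_max:       # discard burn-in
            samples.append(m * d / s)     # scale to [0, 1]
    return np.array(samples)

x = simulate_mrna(k_on=2.0, k_off=1.0, s=5.0, d=1.0)
# The stationary mean of the scaled amount is k_on/(k_on + k_off) = 2/3.
print(f"empirical mean: {x.mean():.3f}")
```

The empirical mean of the scaled amount should approach k_on/(k_on + k_off), the mean of the Beta distribution.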
An explicit microphysics thunderstorm model.
R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor
2005-01-01
The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...
NASA Astrophysics Data System (ADS)
Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2015-08-01
We report a new computational model for simulations of electromagnetic interactions with semiconductor quantum well(s) (SQW) in complex electromagnetic geometries using the finite-difference time-domain method. The presented model is based on an approach of spanning a large number of electron transverse momentum states in each SQW sub-band (multi-band) with a small number of discrete multi-electron states (multi-level, multi-electron). This enables accurate and efficient two-dimensional (2-D) and three-dimensional (3-D) simulations of nanophotonic devices with SQW active media. The model includes the following features: (1) Optically induced interband transitions between various SQW conduction and heavy-hole or light-hole sub-bands are considered. (2) Novel intra sub-band and inter sub-band transition terms are derived to thermalize the electron and hole occupational distributions to the correct Fermi-Dirac distributions. (3) The terms in (2) result in an explicit update scheme which circumvents numerically cumbersome iterative procedures. This significantly augments computational efficiency. (4) Explicit update terms to account for carrier leakage to unconfined states are derived, which thermalize the bulk and SQW populations to a common quasi-equilibrium Fermi-Dirac distribution. (5) Auger recombination and intervalence band absorption are included. The model is validated by comparisons to analytic band-filling calculations, simulations of SQW optical gain spectra, and photonic crystal lasers.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.
2018-06-01
In this work, we investigate numerically a model governed by a multidimensional nonlinear wave equation with damping and fractional diffusion. The governing partial differential equation considers the presence of Riesz space-fractional derivatives of orders in (1, 2], and homogeneous Dirichlet boundary data are imposed on a closed and bounded spatial domain. The model under investigation possesses an energy function which is preserved in the undamped regime. In the damped case, we establish the property of energy dissipation of the model using arguments from functional analysis. Motivated by these results, we propose an explicit finite-difference discretization of our fractional model based on the use of fractional centered differences. Associated to our discrete model, we also propose discretizations of the energy quantities. We establish that the discrete energy is conserved in the undamped regime, and that it dissipates in the damped scenario. Among the most important numerical features of our scheme, we show that the method has a consistency of second order, that it is stable and that it has a quadratic order of convergence. Some one- and two-dimensional simulations are shown in this work to illustrate the fact that the technique is capable of preserving the discrete energy in the undamped regime. For the sake of convenience, we provide a Matlab implementation of our method for the one-dimensional scenario.
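The fractional centered differences mentioned above have coefficients expressible through Gamma functions. The sketch below (Python rather than the paper's Matlab, and assuming Ortigueira-style coefficients, which is our reading of "fractional centered differences") computes them via a stable recurrence, since evaluating the Gamma function directly at non-positive integers would fail.

```python
# Sketch of fractional centered-difference coefficients commonly used to
# discretize Riesz derivatives of order alpha in (1, 2]; whether this
# matches the paper's exact scheme is an assumption.
# g_k = (-1)^k * Gamma(alpha+1) / (Gamma(alpha/2-k+1) * Gamma(alpha/2+k+1)),
# computed with the recurrence g_{k+1} = g_k * (k - alpha/2) / (alpha/2+k+1)
# to avoid Gamma-function poles at non-positive integers.

from math import gamma

def centered_coeffs(alpha, n):
    """Return [g_0, g_1, ..., g_n]; the stencil is symmetric: g_{-k} = g_k."""
    g = [gamma(alpha + 1.0) / gamma(alpha / 2.0 + 1.0) ** 2]
    for k in range(n):
        g.append(g[-1] * (k - alpha / 2.0) / (alpha / 2.0 + k + 1.0))
    return g

# For alpha = 2 the stencil reduces to the classical second difference
# (with the Riesz sign convention): g_0 = 2, g_1 = -1, g_k = 0 for k >= 2.
print(centered_coeffs(2.0, 2))
```

The α = 2 limit is a convenient sanity check that the scheme degenerates to the usual Laplacian discretization.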
Chapter 4. New model systems for the study of developmental evolution in plants.
Kramer, Elena M
2009-01-01
The number of genetically tractable plant model systems is rapidly increasing, thanks to the decreasing cost of sequencing and the wide amenability of plants to stable transformation and other functional approaches. In this chapter, I discuss emerging model systems from throughout the land plant phylogeny and consider how their unique attributes are contributing to our understanding of development, evolution, and ecology. These new models are being developed using two distinct strategies: in some cases, they are selected because of their close relationship to the established models, while in others, they are chosen with the explicit intention of exploring distantly related plant lineages. Such complementary approaches are yielding exciting new results that shed light on both micro- and macroevolutionary processes in the context of developmental evolution.
Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems
Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng
2012-01-01
Variable structure strategies are widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective, and the implicit form of the control law is analytically obtained by using the principle of optimality. The control law and the optimal cost function are explicitly solved iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633
Study of the stability of a SEIRS model for computer worm propagation
NASA Astrophysics Data System (ADS)
Hernández Guillén, J. D.; Martín del Rey, A.; Hernández Encinas, L.
2017-08-01
Nowadays, malware is the most important threat to information security, and several mathematical models to simulate malware spreading have accordingly appeared. They are compartmental models in which the population of devices is classified into different compartments: susceptible, exposed, infectious, recovered, etc. The main goal of this work is to propose an improved SEIRS (Susceptible-Exposed-Infectious-Recovered-Susceptible) mathematical model to simulate computer worm propagation. It is a continuous model whose dynamics are governed by a system of ordinary differential equations. It considers more realistic parameters related to the propagation; in particular, a modified incidence rate is used. Moreover, the equilibrium points are computed and their local and global stability are analyzed. From the explicit expression of the basic reproductive number, efficient control measures are also obtained.
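A minimal sketch of a standard SEIRS system illustrates the compartmental structure; the paper's modified incidence rate is not reproduced here, so the bilinear incidence and all parameter values are assumptions.

```python
# Hedged sketch of a standard SEIRS compartmental model with bilinear
# incidence; the paper uses a modified incidence rate whose exact form is
# not reproduced here, so incidence and parameters are assumptions.

def seirs_step(state, beta, sigma, gamma, omega, dt):
    s, e, i, r = state
    inf = beta * s * i   # S -> E (incidence)
    lat = sigma * e      # E -> I (end of latency)
    rec = gamma * i      # I -> R (recovery/disinfection)
    los = omega * r      # R -> S (loss of immunity)
    return (s + (los - inf) * dt,
            e + (inf - lat) * dt,
            i + (lat - rec) * dt,
            r + (rec - los) * dt)

def run(beta=0.5, sigma=0.2, gamma=0.1, omega=0.01,
        state=(0.99, 0.0, 0.01, 0.0), dt=0.01, t_max=500.0):
    for _ in range(int(t_max / dt)):
        state = seirs_step(state, beta, sigma, gamma, omega, dt)
    return state

s, e, i, r = run()
print(f"infectious fraction at t=500: {i:.4f}")
```

Because the flows cancel pairwise, the total population fraction is conserved at every step, mirroring the compartmental structure described in the abstract.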
Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...
Lindgren, Kristen P.; Ramirez, Jason J.; Olin, Cecilia C.; Neighbors, Clayton
2016-01-01
Drinking identity – how much individuals view themselves as drinkers – is a promising cognitive factor that predicts problem drinking. Implicit and explicit measures of drinking identity have been developed (the former assesses more reflexive/automatic cognitive processes; the latter more reflective/controlled cognitive processes): each predicts unique variance in alcohol consumption and problems. However, implicit and explicit identity's utility and uniqueness as predictors relative to cognitive factors important for problem drinking screening and intervention has not been evaluated. Thus, the current study evaluated implicit and explicit drinking identity as predictors of consumption and problems over time. Baseline measures of drinking identity, social norms, alcohol expectancies, and drinking motives were evaluated as predictors of consumption and problems (evaluated every three months over two academic years) in a sample of 506 students (57% female) in their first or second year of college. Results found that baseline identity measures predicted unique variance in consumption and problems over time. Further, when compared to each set of cognitive factors, the identity measures predicted unique variance in consumption and problems over time. Findings were more robust for explicit versus implicit identity and in models that did not control for baseline drinking. Drinking identity appears to be a unique predictor of problem drinking relative to social norms, alcohol expectancies, and drinking motives. Intervention and theory could benefit from including and considering drinking identity. PMID:27428756
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
Modelling virus- and host-limitation in vectored plant disease epidemics.
Jeger, M J; van den Bosch, F; Madden, L V
2011-08-01
Models of plant virus epidemics have received less attention than models of epidemics caused by fungal pathogens. Intuitively, the fact that virus diseases are systemic means that the individual diseased plant can be considered as the population unit, which simplifies modelling. However, because a vector is required for virus transmission in the vast majority of cases, the vector must either be considered explicitly, or its involvement in the transmission process must be considered implicitly. In the latter case it is also important that within-plant processes, such as virus multiplication and systemic movement, are taken into account. In this paper we propose an approach based on linking transmission at the population level with virus multiplication within plants. The resulting models are parameter-sparse and hence simple. However, the range of model outcomes is representative of field observations relating to the apparent limitation of epidemic development in populations of healthy susceptible plants. We propose that epidemic development can be constrained by virus limitation in the early stages of an epidemic, when the availability of healthy susceptible hosts is not limiting. There is an inverse relationship between levels of transmission in the population and the mean virus titre per infected plant. In the case of competition between viruses, both virus and host limitation are likely to be important in determining whether one virus can displace another or whether both viruses can co-exist in a plant population. Lotka-Volterra type equations are derived to describe density-dependent competition between two viruses multiplying within plants, embedded within a population-level epidemiological model. Explicit expressions determining displacement or co-existence of the viruses are obtained. Unlike the classical Lotka-Volterra competition equations, the co-existence requirement that both competition coefficients be less than 1 can be relaxed.
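As a generic illustration of the competition dynamics discussed above (the classical Lotka-Volterra competition system, not the paper's embedded epidemiological model), the textbook coexistence behavior when both competition coefficients are below 1 can be checked numerically; all parameter values are hypothetical.

```python
# Generic illustration of Lotka-Volterra competition between two viruses
# within a host; these are the classical equations, not the paper's
# embedded model, and all parameter values are hypothetical.

def compete(a12, a21, r1=1.0, r2=1.0, x=0.1, y=0.1, dt=0.01, t_max=200.0):
    """Forward-Euler integration of dx/dt = r1*x*(1 - x - a12*y),
    dy/dt = r2*y*(1 - y - a21*x); titres scaled so each carrying
    capacity is 1, with a12, a21 the competition coefficients."""
    for _ in range(int(t_max / dt)):
        dx = r1 * x * (1.0 - x - a12 * y)
        dy = r2 * y * (1.0 - y - a21 * x)
        x, y = x + dx * dt, y + dy * dt
    return x, y

# Classical condition: both competition coefficients < 1 gives coexistence,
# here at x* = y* = (1 - 0.5)/(1 - 0.25) = 2/3.
x, y = compete(a12=0.5, a21=0.5)
print(f"coexistence equilibrium: x = {x:.3f}, y = {y:.3f}")
```

With a21 > 1 > a12 the same system instead drives the second virus to extinction, which is the displacement outcome contrasted with coexistence in the abstract.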
Gurney, Georgina G.; Melbourne-Thomas, Jessica; Geronimo, Rollan C.; Aliño, Perry M.; Johnson, Craig R.
2013-01-01
Climate change has emerged as a principal threat to coral reefs, and is expected to exacerbate coral reef degradation caused by more localised stressors. Management of local stressors is widely advocated to bolster coral reef resilience, but the extent to which management of local stressors might affect future trajectories of reef state remains unclear. This is in part because of limited understanding of the cumulative impact of multiple stressors. Models are ideal tools to aid understanding of future reef state under alternative management and climatic scenarios, but to date few have been sufficiently developed to be useful as decision support tools for local management of coral reefs subject to multiple stressors. We used a simulation model of coral reefs to investigate the extent to which the management of local stressors (namely poor water quality and fishing) might influence future reef state under varying climatic scenarios relating to coral bleaching. We parameterised the model for Bolinao, the Philippines, and explored how simulation modelling can be used to provide decision support for local management. We found that management of water quality, and to a lesser extent fishing, can have a significant impact on future reef state, including coral recovery following bleaching-induced mortality. The stressors we examined interacted antagonistically to affect reef state, highlighting the importance of considering the combined impact of multiple stressors rather than considering them individually. Further, by providing explicit guidance for management of Bolinao's reef system, such as which course of management action is most likely to be effective, over what time scales, and at which sites, we demonstrated the utility of simulation models for supporting management. Aside from providing explicit guidance for management of Bolinao's reef system, our study offers insights which could inform reef management more broadly, as well as general understanding of reef systems. PMID:24260347
ERIC Educational Resources Information Center
Roehr-Brackin, Karen
2014-01-01
This article considers explicit knowledge and processes in second language (L2) learning from a usage-based theoretical perspective. It reports on the long-term development of a single instructed adult learner's use of two L2 constructions, the German Perfekt of "gehen" ("go," "walk") and "fahren"…
Schwartz, Jennifer A T; Pearson, Steven D
2013-06-24
Despite increasing concerns regarding the cost of health care, the consideration of costs in the development of clinical guidance documents by physician specialty societies has received little analysis. This study evaluated the approach to consideration of cost in publicly available clinical guidance documents and methodological statements produced between 2008 and 2012 by the 30 largest US physician specialty societies, using qualitative document review. The outcomes assessed were whether costs are considered in clinical guidance development, the mechanism of cost consideration, and the way cost issues were used in support of specific clinical practice recommendations. Methodological statements for clinical guidance documents indicated that 17 of 30 physician societies (57%) explicitly integrated costs, 4 (13%) implicitly considered costs, 3 (10%) intentionally excluded costs, and 6 (20%) made no mention. Of the 17 societies that explicitly integrated costs, 9 (53%) consistently used a formal system in which the strength of recommendation was influenced in part by costs, whereas 8 (47%) were inconsistent in their approach or failed to mention the exact mechanism for considering costs. Among the 138 specific recommendations in these guidance documents that included cost as part of the rationale, the most common form of recommendation (50 [36%]) encouraged the use of a specific medical service because of equal effectiveness and lower cost. Slightly more than half of the largest US physician societies explicitly consider costs in developing their clinical guidance documents; among these, approximately half use an explicit mechanism for integrating costs into the strength of recommendations. Many societies remain vague in their approach. Physician specialty societies should demonstrate greater transparency and rigor in their approach to cost consideration in documents meant to influence care decisions.
Thermoviscoplastic model with application to copper
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1988-01-01
A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.
Image model: new perspective for image processing and computer vision
NASA Astrophysics Data System (ADS)
Ziou, Djemel; Allili, Madjid
2004-05-01
We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension, with the boundary operators linking different dimensions. Image quantities are encoded using the notion of a cochain, which associates with pixels of a given dimension values that can be scalar, vector, or tensor, depending on the problem considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, allow the classical differential operators to be formulated as they apply to field functions and differential forms, in both global and local form. This image model makes the association between the image support and the image quantities explicit, which yields several advantages: it allows the derivation of efficient algorithms that operate in any dimension, and it unifies the mathematics and physics used to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.
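The discrete form of this machinery is compact enough to sketch: on a 2-D pixel grid, the coboundary operator is the incidence matrix d of the grid graph (acting on 0-cochains it is the discrete gradient), and isotropic diffusion becomes the explicit iteration u ← u − τ dᵀd u, where L = dᵀd is the graph Laplacian. The following sketch illustrates the general idea under those assumptions; it is not the authors' implementation:

```python
import numpy as np

def grid_coboundary(h, w):
    """Incidence matrix d of the h-by-w pixel grid: rows are edges (1-cells),
    columns are pixels (0-cells); applying d to a 0-cochain u evaluates the
    discrete gradient of u along every grid edge."""
    idx = np.arange(h * w).reshape(h, w)
    edges = []
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                edges.append((idx[i, j], idx[i, j + 1]))
            if i + 1 < h:
                edges.append((idx[i, j], idx[i + 1, j]))
    d = np.zeros((len(edges), h * w))
    for e, (a, b) in enumerate(edges):
        d[e, a], d[e, b] = -1.0, 1.0
    return d

def isotropic_diffusion(u, steps=10, tau=0.1):
    """Explicit heat diffusion u <- u - tau * L u with L = d^T d (graph Laplacian)."""
    h, w = u.shape
    d = grid_coboundary(h, w)
    L = d.T @ d
    v = u.reshape(-1).astype(float)
    for _ in range(steps):
        v = v - tau * (L @ v)
    return v.reshape(h, w)
```

Because the rows of d sum to zero, the iteration conserves the image mean while damping variations, which is the signature behavior of isotropic diffusion.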
CONSTRUCTING, PERTURBATION ANALYSIS AND TESTING OF A MULTI-HABITAT PERIODIC MATRIX POPULATION MODEL
We present a matrix model that explicitly incorporates spatial habitat structure and seasonality and discuss preliminary results from a landscape level experimental test. Ecological risk to populations is often modeled without explicit treatment of spatially or temporally distri...
ERIC Educational Resources Information Center
Dang, Trang Thi Doan; Nguyen, Huong Thu
2013-01-01
Two approaches to grammar instruction are often discussed in the ESL literature: direct explicit grammar instruction (DEGI) (deduction) and indirect explicit grammar instruction (IEGI) (induction). This study aims to explore the effects of indirect explicit grammar instruction on EFL learners' mastery of English tenses. Ninety-four…
Thickness-shear mode quartz crystal resonators in viscoelastic fluid media
NASA Astrophysics Data System (ADS)
Arnau, A.; Jiménez, Y.; Sogorb, T.
2000-10-01
An extended Butterworth-Van Dyke (EBVD) model to characterize a thickness-shear mode quartz crystal resonator in a semi-infinite viscoelastic medium is derived by means of analysis of the lumped elements model described by Cernosek et al. [R. W. Cernosek, S. J. Martin, A. R. Hillman, and H. L. Bandey, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 45, 1399 (1998)]. The EBVD model parameters are related to the viscoelastic properties of the medium. A capacitance added to the motional branch of the EBVD model has to be included when the elastic properties of the fluid are considered. From this model, an explicit expression for the frequency shift of a quartz crystal sensor in viscoelastic media is obtained. By combining the expressions for the shifts in the motional series resonant frequency and in the motional resistance, a simple equation that relates only one unknown (the loss factor of the fluid) to those measurable quantities, and two simple explicit expressions for determining the viscoelastic properties of semi-infinite fluid media, have been derived. The proposed expression for the parameter Δf/ΔR is compared with the corresponding ratio obtained with data computed from the complete admittance model. Relative errors below 4.5%, 3%, and 1.2% (for ratios of the load surface mechanical impedance to the quartz shear characteristic impedance of 0.3, 0.25, and 0.1, respectively) are obtained in the range of the cases analyzed. Experimental data from the literature are used to validate the model.
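The paper's explicit viscoelastic expressions are not reproduced in the abstract. As a point of reference, the purely viscous (Newtonian) limiting case of such models is the well-known Kanazawa-Gordon relation, sketched below with typical AT-cut quartz constants (assumed illustrative values, not taken from the paper):

```python
import math

# Typical AT-cut quartz constants (assumed illustrative values, not from the paper)
MU_Q = 2.947e10   # effective shear modulus of quartz, Pa
RHO_Q = 2648.0    # density of quartz, kg/m^3

def kanazawa_shift(f0, rho_l, eta_l):
    """Kanazawa-Gordon frequency shift (Hz) of a thickness-shear mode
    resonator loaded by a semi-infinite *Newtonian* fluid -- the purely
    viscous limit of the viscoelastic case treated in the paper."""
    return -f0 ** 1.5 * math.sqrt(rho_l * eta_l / (math.pi * MU_Q * RHO_Q))

# 5 MHz crystal immersed in water (rho = 1000 kg/m^3, eta = 1 mPa s)
df_water = kanazawa_shift(5.0e6, 1000.0, 1.0e-3)  # on the order of -0.7 kHz
```

A viscoelastic model of the kind discussed in the text reduces to this form when the fluid's elastic (storage) contribution vanishes.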
Persistence of Rift Valley fever virus in East Africa
NASA Astrophysics Data System (ADS)
Gachohi, J.; Hansen, F.; Bett, B.; Kitala, P.
2012-04-01
Rift Valley fever virus (RVFv) is a mosquito-borne pathogen of livestock, wildlife and humans that causes severe outbreaks at intervals of several years. One of the open questions is how the virus persists between outbreaks. We developed a spatially explicit, individual-based simulation model of the RVFv transmission dynamics to investigate this question. The model is based on livestock and mosquito population dynamics. Spatial aspects are explicitly represented by a set of grid cells that represent mosquito breeding sites. A grid cell measures 500 m by 500 m and the model considers a grid of 100 by 100 grid cells; the model thus operates on the regional scale of 2500 km². Livestock herds move between grid cells and provide connectivity between the cells. The model is used to explore the spatio-temporal dynamics of RVFv persistence in the absence of a wildlife reservoir in an East African semi-arid context. Specifically, the model assesses the importance of local virus persistence in mosquito breeding sites relative to global virus persistence mediated by the movement of hosts. Local persistence is determined by the length of time the virus remains in a mosquito breeding site once introduced. In the model, this is a function of the number of mosquitoes that emerge infected and their lifespan. Global persistence is determined by the level of connectivity between isolated grid cells. Our work gives insights into the ecological and epidemiological conditions under which RVFv persists. The implications for disease surveillance and management are discussed.
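The local-versus-global persistence mechanism can be caricatured in a few lines: local persistence is a per-site extinction clock, and global persistence is reintroduction through herd movement. The toy simulation below is illustrative only (all probabilities are hypothetical, and it is far simpler than the authors' individual-based model), but it shows how connectivity extends persistence:

```python
import random

def simulate(n=20, steps=200, p_local_die=0.2, p_move=0.1, seed=1):
    """Toy sketch of local vs global persistence: n breeding sites, each
    infected site clears with probability p_local_die per step (local
    persistence), while a moving herd carries infection from an infected
    site to a random site with probability p_move per step (global
    persistence via connectivity). Illustrative only, not the authors'
    model. Returns the number of steps the virus persisted somewhere."""
    random.seed(seed)
    infected = [False] * n
    infected[0] = True                      # virus introduced at one site
    alive = 0
    for _ in range(steps):
        for i in range(n):
            if infected[i] and random.random() < p_local_die:
                infected[i] = False         # local extinction at site i
        if any(infected) and random.random() < p_move:
            infected[random.randrange(n)] = True   # herd reintroduces virus
        if any(infected):
            alive += 1
    return alive
```

Averaged over seeds, persistence without movement is limited to the local extinction timescale, while even modest connectivity sustains the virus far longer.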
The spatial pattern of suicide in the US in relation to deprivation, fragmentation and rurality.
Congdon, Peter
2011-01-01
Analysis of geographical patterns of suicide and psychiatric morbidity has demonstrated the impact of latent ecological variables (such as deprivation, rurality). Such latent variables may be derived by conventional multivariate techniques from sets of observed indices (for example, by principal components), by composite variable methods or by methods which explicitly consider the spatial framework of areas and, in particular, the spatial clustering of latent risks and outcomes. This article considers a latent random variable approach to explaining geographical contrasts in suicide in the US; and it develops a spatial structural equation model incorporating deprivation, social fragmentation and rurality. The approach allows for such latent spatial constructs to be correlated both within and between areas. Potential effects of area ethnic mix are also included. The model is applied to male and female suicide deaths over 2002–06 in 3142 US counties.
Eddy-resolving 1/10° model of the World Ocean
NASA Astrophysics Data System (ADS)
Ibrayev, R. A.; Khabeev, R. N.; Ushakov, K. V.
2012-02-01
The first results of simulating the intra-annual variability of the World Ocean circulation with an eddy-resolving model are considered. For this purpose, a model of the World Ocean with a 1/10° horizontal resolution and 49 vertical levels was developed (a 1/10 × 1/10 × 49 model of the World Ocean). This model is based on the traditional system of three-dimensional equations of the large-scale dynamics of the ocean and boundary conditions with an explicit allowance for water fluxes on the free surface of the ocean. The equations are written in a tripolar coordinate system. The numerical method is based on the separation of the barotropic and baroclinic components of the solution. Discretization in time is implemented using explicit schemes allowing effective parallelization for a large number of processors. The model uses sub-models of the boundary layer of the atmosphere and of sea-ice thermodynamics. The model of the World Ocean was developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) and the P.P. Shirshov Institute of Oceanology (IO RAS). The formulation of the problem of simulating the intra-annual variability of thermohydrodynamic processes of the World Ocean and the parameterizations that were used are considered. In the numerical experiment, the temporal evolution of the atmospheric forcing is determined by the normal annual cycle according to the conditions of the international Coordinated Ocean-Ice Reference Experiment (CORE-I). The calculation was carried out on a multiprocessor computer with distributed memory; 1601 computational cores were used. The presented analysis demonstrates that the obtained results compare quite satisfactorily with those obtained by other eddy-resolving models of the global ocean. The analysis of the model solution is, to a large extent, of a descriptive character. A detailed analysis of the results is to be presented in subsequent works.
This experiment is a significant first step in developing the eddy-resolving model of the World Ocean.
Baryonic matter perturbations in decaying vacuum cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marttens, R.F. vom; Zimdahl, W.; Hipólito-Ricaldi, W.S., E-mail: rodrigovonmarttens@gmail.com, E-mail: wiliam.ricaldi@ufes.br, E-mail: winfried.zimdahl@pq.cnpq.br
2014-08-01
We consider the perturbation dynamics for the cosmic baryon fluid and determine the corresponding power spectrum for a Λ(t)CDM model in which a cosmological term decays into dark matter linearly with the Hubble rate. The model is tested by a joint analysis of data from supernovae of type Ia (SNIa) (Constitution and Union 2.1), baryonic acoustic oscillations (BAO), the position of the first peak of the anisotropy spectrum of the cosmic microwave background (CMB) and large-scale-structure (LSS) data (SDSS DR7). While the homogeneous and isotropic background dynamics is only marginally influenced by the baryons, there are modifications on the perturbative level if a separately conserved baryon fluid is included. Considering the present baryon fraction as a free parameter, we reproduce the observed abundance of the order of 5% independently of the dark-matter abundance which is of the order of 32% for this model. Generally, the concordance between background and perturbation dynamics is improved if baryons are explicitly taken into account.
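The abstract does not spell out the decay law beyond "linearly with the Hubble rate". Assuming it is the commonly studied form Λ(t) = σH, the background equations with a separately conserved baryon fluid read:

```latex
\begin{aligned}
3H^{2} &= 8\pi G\,\bigl(\rho_{\mathrm{dm}} + \rho_{b}\bigr) + \Lambda(t),
\qquad \Lambda(t) = \sigma H,\\[2pt]
\dot{\rho}_{b} + 3H\rho_{b} &= 0,\\[2pt]
\dot{\rho}_{\mathrm{dm}} + 3H\rho_{\mathrm{dm}} &= -\frac{\dot{\Lambda}(t)}{8\pi G},
\end{aligned}
```

so the vacuum decays into dark matter only, consistent with the marginal influence of baryons on the background noted above. Neglecting the baryon contribution to the background, the expansion rate integrates to $H(a) = H_{0}\bigl[\,1-\Omega_{m0} + \Omega_{m0}\,a^{-3/2}\bigr]$.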
Aoyagi, Miki; Nagata, Kenji
2012-06-01
The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal, which defines singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values with certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.
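For context (from the Watanabe framework cited above, not derived in the abstract): the learning coefficient λ is the log-canonical threshold of the Kullback-Leibler divergence between the true distribution and the model, and it controls the leading asymptotics of the Bayesian stochastic complexity $F_n$ and the expected generalization error:

```latex
F_{n} = nS_{n} + \lambda \log n - (m-1)\log\log n + O_{p}(1),
\qquad
\mathbb{E}\!\left[G_{n}\right] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
```

where $S_n$ is the empirical entropy and $m$ the multiplicity of the threshold; the bounds reported in this letter for Vandermonde matrix-type singularities are bounds on this λ.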
A climate model projection weighting scheme accounting for performance and interdependence
NASA Astrophysics Data System (ADS)
Knutti, Reto; Sedláček, Jan; Sanderson, Benjamin M.; Lorenz, Ruth; Fischer, Erich M.; Eyring, Veronika
2017-02-01
Uncertainties of climate projections are routinely assessed by considering simulations from different models. Observations are used to evaluate models, yet there is a debate about whether and how to explicitly weight model projections by agreement with observations. Here we present a straightforward weighting scheme that accounts both for the large differences in model performance and for model interdependencies, and we test reliability in a perfect model setup. We provide weighted multimodel projections of Arctic sea ice and temperature as a case study to demonstrate that, for some questions at least, it is meaningless to treat all models equally. The constrained ensemble shows reduced spread and a more rapid sea ice decline than the unweighted ensemble. We argue that the growing number of models with different characteristics and considerable interdependence finally justifies abandoning strict model democracy, and we provide guidance on when and how this can be achieved robustly.
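A minimal sketch of a performance-and-interdependence weighting of this kind follows. The functional form below is the one commonly used with such schemes (an assumption; the abstract does not spell it out), with user-calibrated shape parameters σ_D and σ_S:

```python
import numpy as np

def climate_weights(D, S, sigma_d, sigma_s):
    """Performance-and-independence weights for an ensemble of models.
    D[i] is model i's distance to observations, S[i, j] the inter-model
    distance, and sigma_d, sigma_s shape parameters the user calibrates.
    w_i ~ exp(-(D_i/sigma_d)^2) / (1 + sum_{j != i} exp(-(S_ij/sigma_s)^2))."""
    D, S = np.asarray(D, float), np.asarray(S, float)
    perf = np.exp(-(D / sigma_d) ** 2)          # skill term
    sim = np.exp(-(S / sigma_s) ** 2)
    np.fill_diagonal(sim, 0.0)                  # a model is not its own duplicate
    w = perf / (1.0 + sim.sum(axis=1))          # down-weight near-duplicates
    return w / w.sum()                          # normalized ensemble weights
```

Two near-identical models split the weight a single copy would receive, which is precisely the correction to strict "model democracy" argued for above.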
We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...
NASA Astrophysics Data System (ADS)
Isaac, Aboagye Adjaye; Yongsheng, Cao; Fushen, Chen
2018-05-01
We present and compare the outcome of implicit and explicit labels using intensity modulation (IM), differential quadrature phase shift keying (DQPSK), and polarization division multiplexed DQPSK (PDM-DQPSK). A payload bit rate of 1, 2, and 5 Gb/s is considered for IM implicit labels, while payloads of 40, 80, and 112 Gb/s are considered in DQPSK and PDM-DQPSK explicit labels by simulating a 4-code 156-Mb/s SAC label. The generated label and payloads are observed by assessing the eye diagram, received optical power (ROP), and optical signal-to-noise ratio (OSNR).
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Bertuzzo, E.; Mari, L.; Righetto, L.; Gatto, M.; Casagrandi, R.; Rodriguez-Iturbe, I.
2010-12-01
A recently proposed model for cholera epidemics is examined. The model accounts for local communities of susceptibles and infectives in a spatially explicit arrangement of nodes linked by networks having different topologies. The vehicle of infection (Vibrio cholerae) is transported through the network links, which are thought of as hydrological connections among susceptible communities. The mathematical tools used are borrowed from general schemes of reactive transport on river networks acting as the environmental matrix for the circulation and mixing of water-borne pathogens. The results of a large-scale application to the KwaZulu-Natal epidemics of 2001-2002 will be discussed. Useful theoretical results derived in the spatially explicit context will also be reviewed (e.g., the exact derivation of the speed of propagation for traveling fronts of epidemics on regular lattices endowed with uniform population density). Network effects will be discussed. The analysis of the limit case of uniformly distributed population density proves instrumental in establishing the overall conditions for the relevance of spatially explicit models. It is shown that the ratio between the spreading and disease-outbreak timescales is the crucial parameter. The relevance of our results lies in the major differences potentially arising between the predictions of spatially explicit models and traditional compartmental models of the SIR-like type. Our results suggest that in many cases of real-life epidemiological interest, the timescales of disease dynamics may trigger outbreaks that significantly depart from the predictions of compartmental models. Finally, a view on further developments includes: hydrologically improved aquatic reservoir models for pathogens; human mobility patterns affecting disease propagation; and double-peak emergence and seasonality in the spatially explicit epidemic context.
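The departure of spatially explicit predictions from compartmental ones can be illustrated with a toy chain of SIR communities coupled by a one-way "hydrological" term: the epidemic then reaches each node with a delay (a traveling front), which a well-mixed compartmental model cannot produce. All parameters below are hypothetical; this is an illustration, not the model examined in the text:

```python
import numpy as np

def chain_sir(n=10, beta=0.6, gamma=0.2, eps=0.05, dt=0.1, T=400.0):
    """Toy spatially explicit epidemic: n SIR communities on a chain, where
    node i also feels a fraction eps of the infectives at upstream node i-1
    (a crude stand-in for hydrological pathogen transport). Returns the
    epidemic peak time at each node."""
    steps = int(T / dt)
    S, I = np.ones(n), np.zeros(n)
    I[0], S[0] = 1e-3, 1.0 - 1e-3          # seed the epidemic upstream
    peak_t, peak_I = np.zeros(n), np.zeros(n)
    for k in range(steps):
        force = beta * I.copy()
        force[1:] += eps * I[:-1]          # upstream-to-downstream coupling
        new_inf = S * force
        S = S - dt * new_inf
        I = I + dt * (new_inf - gamma * I)
        higher = I > peak_I
        peak_I[higher], peak_t[higher] = I[higher], k * dt
    return peak_t
```

The peak times increase monotonically along the chain: the per-node delay is the spreading timescale that, per the discussion above, compartmental models collapse to zero.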
Environmental decision-making and the influences of various stressors, such as landscape and climate changes on water quantity and quality, requires the application of environmental modeling. Spatially explicit environmental and watershed-scale models using GIS as a base framewor...
HexSim - A general purpose framework for spatially-explicit, individual-based modeling
HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications. This talk will focus on a subset of those ap...
Studies of implicit and explicit solution techniques in transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-01-01
Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
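The stability tradeoff discussed above can be seen on the scalar model problem dT/dt = -λ(T - T_env): forward (explicit) Euler is stable only for Δt < 2/λ, while backward (implicit) Euler is unconditionally stable, which is why implicit schemes win on stiff insulated-structure problems. A minimal sketch:

```python
def explicit_euler(lam, t_env, t0, dt, steps):
    """Forward Euler for dT/dt = -lam*(T - t_env); stable only for dt < 2/lam."""
    temp = t0
    for _ in range(steps):
        temp = temp - dt * lam * (temp - t_env)
    return temp

def implicit_euler(lam, t_env, t0, dt, steps):
    """Backward Euler for the same equation: unconditionally stable, so large
    time steps remain usable on stiff (e.g. insulated-metal) problems."""
    temp = t0
    for _ in range(steps):
        temp = (temp + dt * lam * t_env) / (1.0 + dt * lam)
    return temp

# Stiff case: lam = 100 (fast thermal mode), so the explicit limit is dt < 0.02.
T_imp = implicit_euler(100.0, 300.0, 1000.0, 0.05, 50)   # relaxes to 300
T_exp = explicit_euler(100.0, 300.0, 1000.0, 0.05, 50)   # oscillates and diverges
```

With Δt = 0.05 the explicit amplification factor is |1 - Δt·λ| = 4, so the error grows fourfold per step, while the implicit factor 1/(1 + Δt·λ) = 1/6 contracts it.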
Implicit associations with popularity in early adolescence: an approach-avoidance analysis.
Lansu, Tessa A M; Cillessen, Antonius H N; Karremans, Johan C
2012-01-01
This study examined 241 early adolescents' implicit and explicit associations with popularity. The peer status and gender of both the targets and the perceivers were considered. Explicit associations with popularity were assessed with sociometric methods. Implicit associations with popularity were assessed with an approach-avoidance task (AAT). Explicit evaluations of popularity were positive, but implicit associations were negative: Avoidance reactions to popular peers were faster than approach reactions. Interactions with the status of the perceiver indicated that unpopular participants had stronger negative implicit reactions to popular girls than did popular participants. This study demonstrated a negative reaction to popularity that cannot be revealed with explicit methods. The study of implicit processes with methods such as the AAT is a new and important direction for peer relations research.
Chassin, Laurie; Presson, Clark C.; Sherman, Steven J.; Seo, Dong-Chul; Macy, Jon
2010-01-01
The current study tested implicit and explicit attitudes as prospective predictors of smoking cessation in a Midwestern community sample of smokers. Results showed that the effects of attitudes significantly varied with levels of experienced failure to control smoking and plans to quit. Explicit attitudes significantly predicted later cessation among those with low (but not high or average) levels of experienced failure to control smoking. Conversely, however, implicit attitudes significantly predicted later cessation among those with high levels of experienced failure to control smoking, but only if they had a plan to quit. Because smoking cessation involves both controlled and automatic processes, interventions may need to consider attitude change interventions that focus on both implicit and explicit attitudes. PMID:21198227
Explicit chiral symmetry breaking in the Nambu-Jona-Lasinio model
NASA Astrophysics Data System (ADS)
Schüren, C.; Arriola, E. Ruiz; Goeke, K.
1992-09-01
We consider a chirally symmetric bosonization of the SU(2) Nambu-Jona-Lasinio model within the Pauli-Villars regularization scheme. Special attention is paid to the way in which chiral symmetry is broken explicitly. The parameters of the model are fixed in the light of chiral perturbation theory by performing a covariant derivative expansion in the presence of external fields. As a by-product we obtain the corresponding low-energy parameters and pion radii as well as some threshold parameters for pion-pion scattering. The nucleon is obtained in terms of the solitonic solutions of the action in the sector with baryon number equal to one. It is found that for a constituent quark mass M ≈ 350 MeV most of the calculated vacuum and pion properties agree reasonably well with the experimental ones and coincide with the region where localized solitons with the right size exist. For this value, however, the scalar and vector pion radii turn out to be very small. A unique determination of the sigma term is proposed, obtaining a value of σ(0) = 41.3 MeV. The scalar nucleon form factor is evaluated in the Breit frame. The extrapolation to the Cheng-Dashen point leads to σ(2m_π²) − σ(0) = 7.4 MeV.
Tchetgen Tchetgen, Eric
2011-03-01
This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.
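A minimal sketch of the constrained-retrospective-regression idea: regressing the genotype on an intercept, the disease indicator, and their interaction with exposure, while omitting the exposure main effect, encodes the null G-E association among controls. The code below is illustrative (a plain Newton-Raphson fitter rather than a packaged routine; any effect sizes used with it are hypothetical):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (y in {0, 1})."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])          # Fisher information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b

def retrospective_fit(G, D, E):
    """Sketch of the constrained retrospective regression: regress genotype G
    on [1, D, D*E]. Omitting a main effect for E forces the G-E association
    among controls (D = 0) to be null, i.e. it encodes the gene-environment
    independence assumption described above."""
    X = np.column_stack([np.ones(len(D)), D, D * E]).astype(float)
    return fit_logistic(X, np.asarray(G, float))
```

With case-control data generated under gene-environment independence, the coefficient on D recovers (approximately; exactly so for a rare disease) the genetic log-odds ratio, which is the efficiency gain the article describes.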
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Wang, Tao; Chen, Liang; Huang, Hai-Jun
2018-01-01
In this paper, we introduce the fuel cost into each commuter's trip cost, define a new trip cost without late arrival and its corresponding equilibrium state, and use a car-following model to explore the impacts of the fuel cost on each commuter's departure time, departure interval, arrival time, arrival interval, traveling time, early arrival time and trip cost at this equilibrium state. The numerical results show that accounting for the fuel cost in each commuter's trip cost has positive impacts on the trip cost, the fuel cost, and the traffic situation in the system without late arrival; that is, each commuter should explicitly consider the fuel cost in his trip cost.
Measurement of the branching fraction of B → X_sγ and A_CP in B → X_(s+d)γ from Belle
NASA Astrophysics Data System (ADS)
Pesántez, L.; Belle Collaboration
2016-04-01
The transitions b → dγ and b → sγ are flavor-changing neutral currents, forbidden at tree level in the Standard Model (SM). These decays proceed via electroweak penguin loop diagrams and can be used to test the SM and probe new-physics effects. The SM gives very precise predictions when the decays are considered inclusively; for this reason it is important to perform studies where as many final states as possible are reconstructed or where the decay is treated fully inclusively, without explicitly reconstructing the B meson. The large Belle data set of 711 fb⁻¹ recorded at the ϒ(4S) resonance allows for precise measurements of radiative B decays.
Harmsen, Stephen
2011-01-01
In addition, this report shows how incorporating geologic site-condition information alters the values of the dominating magnitudes and distances in deaggregation: 5-Hz values for a site near San Quentin, Calif., and 5-Hz and 1-Hz values for Harbor Island near Seattle, Wash. These deaggregations show that the modal event can shift from a larger, closer source to a more distant, perhaps smaller source when nonlinear soil behavior is explicitly included in the hazard integral. The potential shift in the mode when considering the soil column's effect ought to be carefully considered by engineers who select scenario events based in part on the distribution in magnitude, distance, and epsilon space.
Computational methods for structural load and resistance modeling
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a materially nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
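The Monte Carlo verification step mentioned above is easy to sketch for the simplest limit state g = R - L with independent normal resistance and load, where the failure probability has the closed form Φ(-β) with β = (μ_R - μ_L)/√(σ_R² + σ_L²). The values below are illustrative, not taken from the paper:

```python
import math
import numpy as np

def mc_failure_prob(mu_r, sig_r, mu_l, sig_l, n=200_000, seed=0):
    """Monte Carlo estimate of the failure probability P(g < 0) for the
    limit state g = R - L with independent normal resistance R and load L --
    the kind of brute-force check used to verify fast reliability methods
    such as AMV+."""
    rng = np.random.default_rng(seed)
    g = rng.normal(mu_r, sig_r, n) - rng.normal(mu_l, sig_l, n)
    return float((g < 0).mean())

# Closed-form benchmark: P_f = Phi(-beta), beta = (mu_R - mu_L)/sqrt(sig_R^2 + sig_L^2)
beta = (5.0 - 2.0) / math.sqrt(1.0 ** 2 + 1.0 ** 2)
p_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
p_mc = mc_failure_prob(5.0, 1.0, 2.0, 1.0)
```

Agreement between the sampled and closed-form probabilities, within the Monte Carlo standard error, is exactly what such a verification run checks.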
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge-invariant BRST formulation of a particle constrained to move on a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier, leading to a consistent second-class system that generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian; the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space, introducing fermionic ghost variables, exhibiting the BRST symmetry transformations, and writing the Green's function generating functional for the BRST-quantized model.
Seismic Response of 3D Steel Buildings considering the Effect of PR Connections and Gravity Frames
Haldar, Achintya; López-Barraza, Arturo; Rivera-Salas, J. Luz
2014-01-01
The nonlinear seismic responses of 3D steel buildings with perimeter moment-resisting frames (PMRF) and interior gravity frames (IGF) are studied, explicitly considering the contribution of the IGF. The effect on the structural response of the stiffness of the beam-to-column connections of the IGF, which is usually neglected, is also studied. It is commonly believed that the flexibility of shear connections is negligible and that 2D models can properly represent real 3D structures. The results of the study indicate, however, that the moments developed in the columns of the IGF can be considerable and that modeling buildings as plane frames may result in very conservative designs. The contribution of the IGF to the lateral structural resistance may be significant, and it increases when their connections are assumed to be partially restrained (PR). The increased participation of the IGF when the stiffness of their connections is considered helps to counteract the nonconservative effect that results in practice when lateral seismic loads are not considered for the IGF while designing steel buildings with PMRF. Thus, if the structural system under consideration is used, a three-dimensional model should be used in seismic analysis, and the IGF and the stiffness of their connections should be considered as part of the lateral resistance system. PMID:24995357
ERIC Educational Resources Information Center
Morgan-Short, Kara; Deng, ZhiZhou; Brill-Schuetz, Katherine A.; Faretta-Stutenberg, Mandy; Wong, Patrick C. M.; Wong, Francis C. K.
2015-01-01
The current study aims to make an initial neuroimaging contribution to central implicit-explicit issues in second language (L2) acquisition by considering how implicit and explicit contexts mediate the neural representation of L2. Focusing on implicit contexts, the study employs a longitudinal design to examine the neural representation of L2…
Batch-mode Reinforcement Learning for improved hydro-environmental systems management
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.
2010-12-01
Despite the great progress made in recent decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, strong nonlinearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto-)optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimensions (Bellman's curse of dimensionality), so that SDP cannot be used for water systems whose state vector includes more than a few (2-3) units. (ii) An explicit model of each system component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations.
The curse of dimensionality is averted by using a functional approximation of the SDP value function based on suitable nonlinear regressors. DMR reduces the complexity and the associated computational requirements of nonlinear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
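The batch-mode idea described above (learn a value function from a batch of simulated or historical transitions, approximated by a nonlinear regressor) can be sketched as fitted Q-iteration on a toy one-dimensional reservoir. Everything below — the dynamics, the reward, the release actions, and the k-nearest-neighbour regressor — is an illustrative assumption, not the authors' actual system or regressor.

```python
# Batch-mode RL (fitted Q-iteration) sketch: state = storage fraction,
# action = release fraction; reward tracks a target storage of 0.5.
import random

random.seed(0)
ACTIONS = [0.0, 0.1, 0.2]
GAMMA = 0.9

def step(s, a):
    inflow = random.uniform(0.0, 0.3)            # stochastic inflow
    s_next = min(1.0, max(0.0, s - a + inflow))  # bounded mass balance
    return s_next, -abs(s_next - 0.5)            # penalize distance to target

# 1) Collect a batch of transitions; no explicit model is needed afterwards.
batch = []
for _ in range(500):
    s = random.random()
    a = random.choice(ACTIONS)
    s_next, r = step(s, a)
    batch.append((s, a, r, s_next))

def knn_q(data, s, a, k=5):
    """Nonlinear regressor (k-nearest neighbours) for Q(s, a)."""
    pts = sorted((abs(s - si), q) for si, ai, q in data if ai == a)[:k]
    return sum(q for _, q in pts) / len(pts) if pts else 0.0

# 2) Fitted Q-iteration: repeatedly regress Q on Bellman targets.
q_data = [(s, a, r) for s, a, r, _ in batch]     # Q0 = immediate reward
for _ in range(5):
    q_data = [(s, a, r + GAMMA * max(knn_q(q_data, sn, b) for b in ACTIONS))
              for s, a, r, sn in batch]

def policy(s):
    """Greedy release decision from the fitted Q-function."""
    return max(ACTIONS, key=lambda a: knn_q(q_data, s, a))
```

The point of the sketch is that the simulator `step` is queried only while assembling the batch; the policy itself is derived purely from the stored transitions, which is what removes the curse of modelling.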
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models, as well as anisotropic eddy viscosity models, is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Does Planck really rule out monomial inflation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Kari; Karčiauskas, Mindaugas, E-mail: kari.enqvist@helsinki.fi, E-mail: mindaugas.karciauskas@helsinki.fi
2014-02-01
We consider the modifications of monomial chaotic inflation models due to radiative corrections induced by inflaton couplings to bosons and/or fermions necessary for reheating. To lowest order, ignoring gravitational corrections and treating the inflaton as a classical background field, they are of the Coleman-Weinberg type and parametrized by the renormalization scale μ. In cosmology, there are not enough measurements to fix μ, so we end up with a family of models, each having a slightly different slope of the potential. We demonstrate by explicit calculation that within the family of chaotic φ² models, some may be ruled out by Planck whereas some remain perfectly viable. In contrast, radiative corrections do not seem to help chaotic φ⁴ models to meet the Planck constraints.
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for a performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life-cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
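A non-homogeneous Markov chain of the kind described, with transition probabilities that drift over time as the ground consolidates, can be sketched in a few lines. The three condition states and all numeric rates below are hypothetical illustrations, not the paper's calibrated values.

```python
# Non-homogeneous Markov deterioration sketch: three pavement condition
# states (0 = good, 1 = fair, 2 = poor). The time dependence of the
# transition probability stands in for the ground-consolidation effect.
def transition_matrix(t):
    p = min(0.05 + 0.01 * t, 0.5)     # deterioration accelerates over time
    return [[1 - p, p,     0.0],
            [0.0,   1 - p, p],
            [0.0,   0.0,   1.0]]      # 'poor' is absorbing until repaired

def propagate(dist, periods):
    """Push a condition-state distribution through time-varying transitions."""
    for t in range(periods):
        P = transition_matrix(t)
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist

dist10 = propagate([1.0, 0.0, 0.0], 10)   # start in 'good', evolve 10 periods
```

The key difference from a homogeneous chain is only that `transition_matrix` is re-evaluated at every period, which is exactly what makes the deterioration process conditional on an external (consolidation) process.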
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections, while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items, however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualize both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
Coupled stochastic soil moisture simulation-optimization model of deficit irrigation
NASA Astrophysics Data System (ADS)
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-07-01
This study presents an explicit stochastic optimization-simulation model of short-term deficit-irrigation management for large-scale irrigation districts. The model, a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics in the crop-root zone, considering the interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root-zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with significantly better computational performance than a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it to single-crop irrigation problems, through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. The model has then been applied to the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwestern Iran. While the maximum net benefit was obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability resulted when only about 60% of the water used in the SA policy was applied. The DAID, with 33% of the total cultivated area and 37% of the total applied water, produced only 14% of the total net benefit owing to low-valued crops and adverse soil and shallow water table conditions.
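Particle Swarm Optimization of the kind linked to such a simulator can be sketched on a one-variable toy problem. The objective below is an arbitrary nonconvex stand-in for the (negated) net-benefit surface over, say, an irrigation-depth fraction; the swarm parameters are conventional textbook choices, not the study's settings.

```python
# PSO sketch: minimize a nonconvex function of one decision variable.
import math
import random

random.seed(1)

def objective(x):
    # toy nonconvex surface; global minimum lies near x = 0.6
    return (x - 0.6) ** 2 + 0.05 * math.sin(15 * x)

LO, HI, N, ITERS = 0.0, 1.0, 20, 60
W, C1, C2 = 0.7, 1.5, 1.5              # inertia and attraction weights

pos = [random.uniform(LO, HI) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]                          # each particle's best-seen position
gbest = min(pos, key=objective)         # swarm's best-seen position

for _ in range(ITERS):
    for i in range(N):
        vel[i] = (W * vel[i]
                  + C1 * random.random() * (pbest[i] - pos[i])
                  + C2 * random.random() * (gbest - pos[i]))
        pos[i] = min(HI, max(LO, pos[i] + vel[i]))   # clamp to bounds
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
        if objective(pos[i]) < objective(gbest):
            gbest = pos[i]
```

In the study's setting, `objective` would be replaced by a call into the agrohydrological simulation component, which is why a derivative-free population method such as PSO is attractive for this nonconvex program.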
2012-01-01
Background We explore the benefits of applying a new proportional hazard model to analyze survival of breast cancer patients. As a parametric model, the hypertabastic survival model offers a closer fit to experimental data than Cox regression, and furthermore provides explicit survival and hazard functions which can be used as additional tools in the survival analysis. In addition, one of our main concerns is utilization of multiple gene expression variables. Our analysis treats the important issue of interaction of different gene signatures in the survival analysis. Methods The hypertabastic proportional hazards model was applied in survival analysis of breast cancer patients. This model was compared, using statistical measures of goodness of fit, with models based on the semi-parametric Cox proportional hazards model and the parametric log-logistic and Weibull models. The explicit functions for hazard and survival were then used to analyze the dynamic behavior of hazard and survival functions. Results The hypertabastic model provided the best fit among all the models considered. Use of multiple gene expression variables also provided a considerable improvement in the goodness of fit of the model, as compared to use of only one. By utilizing the explicit survival and hazard functions provided by the model, we were able to determine the magnitude of the maximum rate of increase in hazard, and the maximum rate of decrease in survival, as well as the times when these occurred. We explore the influence of each gene expression variable on these extrema. Furthermore, in the cases of continuous gene expression variables, represented by a measure of correlation, we were able to investigate the dynamics with respect to changes in gene expression. Conclusions We observed that use of three different gene signatures in the model provided a greater combined effect and allowed us to assess the relative importance of each in determination of outcome in this data set. 
These results point to the potential to combine gene signatures to a greater effect in cases where each gene signature represents some distinct aspect of the cancer biology. Furthermore we conclude that the hypertabastic survival models can be an effective survival analysis tool for breast cancer patients. PMID:23241496
Texture-induced anisotropy and high-strain rate deformation in metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.; Maudlin, P.J.
1990-01-01
We have used crystallographic texture calculations to model anisotropic yielding behavior for polycrystalline materials with strong preferred orientations and strong plastic anisotropy. Fitted yield surfaces were incorporated into an explicit Lagrangian finite-element code. We consider different anisotropic orientations, as well as different yield-surface forms, for Taylor cylinder impacts of hcp metals such as titanium and zirconium. Some deformed shapes are intrinsic to anisotropic response. Also, yield surface curvature, as distinct from strength anisotropy, has a strong influence on plastic flow. 13 refs., 5 figs.
Electromagnetic fields in curved spacetimes
NASA Astrophysics Data System (ADS)
Tsagas, Christos G.
2005-01-01
We consider the evolution of electromagnetic fields in curved spacetimes and calculate the exact wave equations for the associated electric and magnetic components. Our analysis is fully covariant, applies to a general spacetime and isolates all the sources that affect the propagation of these waves. Among others, we explicitly show how the different components of the gravitational field act as driving sources of electromagnetic disturbances. When applied to perturbed Friedmann-Robertson-Walker cosmologies, our results argue for a superadiabatic-type amplification of large-scale cosmological magnetic fields in Friedmann models with open spatial curvature.
Distributed Finite-Time Cooperative Control of Multiple High-Order Nonholonomic Mobile Robots.
Du, Haibo; Wen, Guanghui; Cheng, Yingying; He, Yigang; Jia, Ruting
2017-12-01
The consensus problem for multiple nonholonomic mobile robots in the form of a high-order chained structure is considered in this paper. Based on the model features and the finite-time control technique, a finite-time cooperative controller is explicitly constructed which guarantees that state consensus is achieved in finite time. As an application of the proposed results, finite-time formation control of multiple wheeled mobile robots is studied and a finite-time formation control algorithm is proposed. To show the effectiveness of the proposed approach, a simulation example is given.
Vortex creep and the internal temperature of neutron stars. I - General theory
NASA Technical Reports Server (NTRS)
Alpar, M. A.; Pines, D.; Anderson, P. W.; Shaham, J.
1984-01-01
The theory of a neutron star superfluid coupled to normal matter via thermal creep against pinning forces is developed in some detail. General equations of motion for a pinned rotating superfluid and their form for vortex creep are given. Steady state creep and the way in which the system approaches the steady state are discussed. The developed formalism is applied to the postglitch relaxation of a pulsar, and detailed models are developed which permit explicit calculation of the postglitch response. The energy dissipation associated with creep and glitches is considered.
The synthesis paradigm in genetics.
Rice, William R
2014-02-01
Experimental genetics with model organisms and mathematically explicit genetic theory are generally considered to be the major paradigms by which progress in genetics is achieved. Here I argue that this view is incomplete and that pivotal advances in genetics, and in other fields of biology, are also made by synthesizing disparate threads of extant information rather than by generating new information from experiments or formal theory. Because of the explosive expansion of information in numerous "-omics" data banks, and the fragmentation of genetics into numerous subdisciplines, the importance of the synthesis paradigm will likely expand with time.
On a comparison of two schemes in sequential data assimilation
NASA Astrophysics Data System (ADS)
Grishina, Anastasiia A.; Penenko, Alexey V.
2017-11-01
This paper focuses on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires solving a sequence of connected inverse problems with different sets of observational data. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the nonlinear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.
Non relativistic limit of integrable QFT and Lieb-Liniger models
NASA Astrophysics Data System (ADS)
Bastianello, Alvise; De Luca, Andrea; Mussardo, Giuseppe
2016-12-01
In this paper we study a suitable limit of integrable QFT with the aim to identify continuous non-relativistic integrable models with local interactions. This limit amounts to sending to infinity the speed of light c but simultaneously adjusting the coupling constant g of the quantum field theories in such a way to keep finite the energies of the various excitations. The QFT considered here are Toda field theories and the O(N) non-linear sigma model. In both cases the resulting non-relativistic integrable models consist only of Lieb-Liniger models, which are fully decoupled for the Toda theories while symmetrically coupled for the O(N) model. These examples provide explicit evidence of the universality and ubiquity of the Lieb-Liniger models and, at the same time, suggest that these models may exhaust the list of possible non-relativistic integrable theories of bosonic particles with local interactions.
NASA Astrophysics Data System (ADS)
Stewart, S.; Liu, Y.; Hartmann, H.; Mahmoud, M.; Gupta, H.; Dominguez, F.; Thorsten, W.
2007-12-01
Although there has been much written about the use of scenario analysis for long-term planning, particularly with respect to the decisions facing firms, the extant literature has few examples of scenarios explicitly applied to water resource issues. Fewer still have considered short-fuse events such as floods and failure of water retention and conveyance structures in the context of longer-term scenarios for water resources planning. We report progress on an effort to develop a unified framework for constructing scenarios for water resource management. We place particular emphasis on semi-arid environments and on forces external to the traditional water management process, such as high-impact weather and climate events or unforeseen changes in government institutions, that may drive unanticipated change in environmental systems. Most water resource scenarios are based on high, medium and low projections of demographics (gpcd), climate (precipitation, temperature), and perhaps institutional variables (conveyance infrastructure, legal issues). We discuss the relative merits of this approach against other approaches, including: probabilistic scenarios, which explicitly weight the likelihood of different outcomes; anticipatory scenarios, which consider how to achieve or avoid some subjective future state; and strategic scenarios, which seek to identify the inconsistencies between disciplines in the way the environmental models are constructed.
Incorporating lower grade toxicity information into dose finding designs
Iasonos, Alexia; Zohar, Sarah; O’Quigley, John
2012-01-01
Background Toxicity grades underlie the definition of a dose-limiting toxicity (DLT), but in the majority of phase I designs the information contained in the individual grades is not used. Some authors have argued that it may be more appropriate to consider a polytomous rather than a dichotomous response. Purpose We investigate whether the added information on individual grades can improve the operating characteristics of the Continual Reassessment Method (CRM). Methods We compare, via simulations, the original CRM design for a binary response with two-stage CRM designs that make different use of lower-grade toxicity information. Specifically, we study a two-stage design that utilizes lower-grade toxicities in the first stage only, during the initial non-model-based escalation, and two-stage designs where lower grades are used throughout the trial via explicit models. We either postulate a model relating the rates of lower-grade toxicities to the rate of DLTs, or assume the relative rates of low- to high-grade toxicities are unknown. The designs were compared in terms of accuracy, patient allocation, and precision. Results Significant gains can be achieved when using grades in the first stage of a two-stage design. Otherwise, only modest improvements are seen when the information on grades is exploited via the use of explicit models whose parameters are known precisely. CRM with some use of grade information increases the number of patients treated at the MTD by approximately 5%. The additional information from lower grades can lead to a small increase in the precision of our estimate of the MTD. Limitations Our comparisons are not exhaustive, and it would be worth studying other models and situations. Conclusions Although the gains in performance were not as great as we had hoped, we observed no cases where the performance of CRM was poorer. Our recommendation is that investigators might consider using graded toxicities at the design stage. PMID:21835856
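For reference, the binary (DLT-only) CRM that serves as the comparison baseline can be sketched with a one-parameter power model and grid integration over the posterior. The skeleton, target rate, prior standard deviation, and the plug-in-at-posterior-mean estimator below are conventional illustrative choices, not the specific designs compared in the paper.

```python
# One-parameter power-model CRM sketch for binary DLT responses:
# p_d(a) = SKELETON[d] ** exp(a), with a ~ N(0, 1.34**2) a priori.
# Posterior quantities are computed by simple grid integration.
import math

SKELETON = [0.05, 0.12, 0.25, 0.40]   # prior guesses of DLT rate per dose
TARGET = 0.25                          # target DLT probability

def crm_recommend(data, prior_sd=1.34):
    """data: list of (dose_index, had_dlt); returns recommended dose index."""
    grid = [-4.0 + 0.02 * k for k in range(401)]           # grid over a
    weights = []
    for a in grid:
        w = math.exp(-a * a / (2 * prior_sd ** 2))         # unnormalized prior
        for d, y in data:
            p = SKELETON[d] ** math.exp(a)                 # power model
            w *= p if y else 1.0 - p                       # binary likelihood
        weights.append(w)
    z = sum(weights)
    a_hat = sum(w * a for w, a in zip(weights, grid)) / z  # posterior mean of a
    post_p = [s ** math.exp(a_hat) for s in SKELETON]      # plug-in estimates
    return min(range(len(SKELETON)), key=lambda d: abs(post_p[d] - TARGET))

next_dose = crm_recommend([(2, 0), (2, 0), (2, 1)])  # 1 DLT in 3 at dose 2
```

The graded-toxicity designs discussed in the abstract extend this scheme by letting lower-grade events enter the likelihood as well, rather than collapsing every patient to the single DLT indicator used here.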
NASA Astrophysics Data System (ADS)
Fellner, Klemens; Tang, Bao Quoc
2018-06-01
The convergence to equilibrium for renormalised solutions to nonlinear reaction-diffusion systems is studied. The considered reaction-diffusion systems arise from chemical reaction networks with mass action kinetics and satisfy the complex balanced condition. By applying the so-called entropy method, we show that if the system does not have boundary equilibria, i.e. equilibrium states lying on the boundary of R_+^N, then any renormalised solution converges exponentially to the complex balanced equilibrium with a rate that can be computed explicitly up to a finite-dimensional inequality. This inequality is proven via a contradiction argument and is therefore not explicit. An explicit method of proof, however, is provided for a specific application modelling a reversible enzyme reaction, by exploiting the specific structure of the conservation laws. Our approach is also useful to study the trend to equilibrium for systems possessing boundary equilibria. More precisely, to show the convergence to equilibrium for systems with boundary equilibria, we establish a sufficient condition in terms of a modified finite-dimensional inequality along trajectories of the system. By assuming this condition, which roughly means that the system produces too much entropy to stay close to a boundary equilibrium for infinite time, the entropy method shows exponential convergence to equilibrium for renormalised solutions to complex balanced systems with boundary equilibria.
Jet Noise Physics and Modeling Using First-principles Simulations
NASA Technical Reports Server (NTRS)
Freund, Jonathan B.
2003-01-01
An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice, (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source, and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
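As a toy illustration of the idea, consider the first-order model y' = -a·y + b·u: integrating the equation error over finite windows eliminates the derivative, since each window contributes the exactly integrable relation y(t₁) - y(t₀) = -a·∫y dt + b·∫u dt, which is linear in the unknown parameters (a, b) and involves only measured data. The system, input, and numbers below are hypothetical, not the paper's examples.

```python
# Explicit least-squares identification of y' = -a*y + b*u from data only.
import math

a_true, b_true = 2.0, 3.0
dt = 0.001
ts = [k * dt for k in range(5001)]
y = [1.0]
for k in range(5000):                       # simulate the "measured" output
    u = math.sin(ts[k])
    y.append(y[-1] + dt * (-a_true * y[-1] + b_true * u))

def trapz(vals):
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Build one linear equation per window: y(t1)-y(t0) = -a*Int(y) + b*Int(u).
rows, rhs = [], []
W = 500
for start in range(0, 5000, W):
    seg = slice(start, start + W + 1)
    Iy = trapz(y[seg])
    Iu = trapz([math.sin(t) for t in ts[seg]])
    rows.append((-Iy, Iu))
    rhs.append(y[start + W] - y[start])

# Solve the 2x2 least-squares problem via the normal equations.
s11 = sum(r[0] * r[0] for r in rows); s12 = sum(r[0] * r[1] for r in rows)
s22 = sum(r[1] * r[1] for r in rows)
t1 = sum(r[0] * v for r, v in zip(rows, rhs))
t2 = sum(r[1] * v for r, v in zip(rows, rhs))
det = s11 * s22 - s12 * s12
a_est = (s22 * t1 - s12 * t2) / det
b_est = (s11 * t2 - s12 * t1) / det
```

Note that no derivative of y is ever computed and the initial condition of each window enters only through measured endpoint values, which is the practical appeal of the integrated equation-error formulation.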
Distributed Energy Resources Customer Adoption Model Plus (DER-CAM+), Version 1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stadler, Michael; Cardorso, Goncalo; Mashayekh, Salman
DER-CAM+ v1.0.0 is internally referred to as DER-CAM v5.0.0. Due to fundamental changes from previous versions, the new name DER-CAM+ will be used for DER-CAM version 5.0.0 and above. DER-CAM+ is a decision support tool for decentralized energy systems that has been tailored for microgrid applications, and now explicitly considers electrical and thermal networks within a microgrid, ancillary services, and operating reserve. DER-CAM was initially created as an exclusively economic energy model, able to find the cost-minimizing combination and operation profile of a set of DER technologies that meet the energy loads of a building or microgrid for a typical test year. Previous versions of DER-CAM were formulated without modeling the electrical/thermal networks within the microgrid and hence used aggregate single-node approaches. Furthermore, they were not able to consider operating reserve constraints or microgrid revenue streams from participating in ancillary services markets. The new version, DER-CAM+, addresses these issues by including electrical power flow and thermal flow equations and constraints in the microgrid, revenues from various ancillary services markets, and operating reserve constraints.
Atmospheric stability has a major effect in determining the wind energy doing work in the atmospheric boundary layer (ABL); however, it is seldom considered in determining this value in emergy analyses. One reason that atmospheric stability is not usually considered is that a sui...
NASA Astrophysics Data System (ADS)
Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.
2002-05-01
Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.
Work engagement in health professions education.
van den Berg, Joost W; Mastenbroek, Nicole J J M; Scheepers, Renée A; Jaarsma, A Debbie C
2017-11-01
Work engagement deserves more attention in health professions education because of its positive relations with personal well-being and performance at work. For health professions education, these outcomes have been studied on various levels. Consider engaged clinical teachers, who are seen as better clinical teachers; consider engaged residents, who report committing fewer medical errors than less engaged peers. Many topics in health professions education can benefit from explicitly including work engagement as an intended outcome such as faculty development programs, feedback provision and teacher recognition. In addition, interventions aimed at strengthening resources could provide teachers with a solid foundation for well-being and performance in all their work roles. Work engagement is conceptually linked to burnout. An important model that underlies both burnout and work engagement literature is the job demands-resources (JD-R) model. This model can be used to describe relationships between work characteristics, personal characteristics and well-being and performance at work. We explain how using this model helps identifying aspects of teaching that foster well-being and how it paves the way for interventions which aim to increase teacher's well-being and performance.
Flexible explicit but rigid implicit learning in a visuomotor adaptation task
Bond, Krista M.
2015-01-01
There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690
An interactive Bayesian geostatistical inverse protocol for hydraulic tomography
Fienen, Michael N.; Clemo, Tom; Kitanidis, Peter K.
2008-01-01
Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.
NASA Astrophysics Data System (ADS)
Shizgal, Bernie D.; Chikhaoui, Aziz
2006-06-01
The present paper considers a detailed analysis of the nonequilibrium effects for a model reactive system with the Chapman-Enskog (CE) solution of the Boltzmann equation as well as an explicit time dependent solution. The elastic cross sections employed are a hard sphere cross section and the Maxwell molecule cross section. Reactive cross sections which model reactions with and without activation energy are used. A detailed comparison is carried out between these solutions of the Boltzmann equation and the approximation introduced by Cukrowski and coworkers [J. Chem. Phys. 97 (1992) 9086; Chem. Phys. 89 (1992) 159; Physica A 188 (1992) 344; Chem. Phys. Lett. A 297 (1998) 402; Physica A 275 (2000) 134; Chem. Phys. Lett. 341 (2001) 585; Acta Phys. Polonica B 334 (2003) 3607] based on the temperature of the reactive particles. We show that the Cukrowski approximation has limited applicability for the large class of reactive systems studied in this paper. The explicit time dependent solutions of the Boltzmann equation demonstrate that the CE approach is valid only for very slow reactions for which the corrections to the equilibrium rate coefficient are very small.
GroPBS: Fast Solver for Implicit Electrostatics of Biomolecules
Bertelshofer, Franziska; Sun, Liping; Greiner, Günther; Böckmann, Rainer A.
2015-01-01
Knowledge about the electrostatic potential on the surface of biomolecules or biomembranes under physiological conditions is an important step in the attempt to characterize the physico-chemical properties of these molecules and, in particular, also their interactions with each other. Additionally, knowledge about solution electrostatics may also guide the design of molecules with specified properties. However, explicit water models come at a high computational cost, rendering them unsuitable for large design studies or for docking purposes. Implicit models with the water phase treated as a continuum require the numerical solution of the Poisson–Boltzmann equation (PBE). Here, we present a new flexible program for the numerical solution of the PBE, allowing for different geometries, and the explicit and implicit inclusion of membranes. It involves a discretization of space and the computation of the molecular surface. The PBE is solved using finite differences, the resulting set of equations is solved using a Gauss–Seidel method. It is shown for the example of the sucrose transporter ScrY that the implicit inclusion of a surrounding membrane has a strong effect also on the electrostatics within the pore region and, thus, needs to be carefully considered, e.g., in design studies on membrane proteins. PMID:26636074
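The finite-difference/Gauss-Seidel core of such a solver can be sketched in a few lines. The sketch below solves only the simplest uniform-dielectric, zero-salt limit (a plain Poisson problem) on a small 2-D grid; the function name and grid size are illustrative, and GroPBS itself handles 3-D grids, dielectric boundaries, and membranes:

```python
import numpy as np

def gauss_seidel_poisson(rho, h=1.0, n_iter=500):
    """Finite-difference Gauss-Seidel sketch: solve -lap(phi) = rho on a
    2-D grid with zero-potential boundaries.  A full PBE solver would add
    a position-dependent dielectric and the Boltzmann ion term."""
    phi = np.zeros_like(rho, dtype=float)
    for _ in range(n_iter):
        # sweep interior points, reusing already-updated neighbours
        for i in range(1, rho.shape[0] - 1):
            for j in range(1, rho.shape[1] - 1):
                phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j] +
                                    phi[i, j - 1] + phi[i, j + 1] +
                                    h * h * rho[i, j])
    return phi

rho = np.zeros((17, 17))
rho[8, 8] = 1.0               # point charge in the centre of the grid
phi = gauss_seidel_poisson(rho)
```

Because each sweep reuses freshly updated neighbours, Gauss-Seidel converges roughly twice as fast as Jacobi iteration on the same stencil, which is why it is a common default for this class of solver.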
NASA Technical Reports Server (NTRS)
Sotiropoulou, Rafaella-Eleni P.; Nenes, Athanasios; Adams, Peter J.; Seinfeld, John H.
2007-01-01
In situ observations of aerosol and cloud condensation nuclei (CCN) and the GISS GCM Model II' with an online aerosol simulation and explicit aerosol-cloud interactions are used to quantify the uncertainty in radiative forcing and autoconversion rate from application of Köhler theory. Simulations suggest that application of Köhler theory introduces a 10-20% uncertainty in global average indirect forcing and 2-11% uncertainty in autoconversion. Regionally, the uncertainty in indirect forcing ranges between 10-20%, and 5-50% for autoconversion. These results are insensitive to the range of updraft velocity and water vapor uptake coefficient considered. This study suggests that Köhler theory (as implemented in climate models) is not a significant source of uncertainty for aerosol indirect forcing but can be substantial for assessments of aerosol effects on the hydrological cycle in climatically sensitive regions of the globe. This implies that improvements in the representation of GCM subgrid processes and aerosol size distribution will mostly benefit indirect forcing assessments. Predictions of autoconversion, by nature, will be subject to considerable uncertainty; its reduction may require explicit representation of size-resolved aerosol composition and mixing state.
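For reference, the Köhler calculation at issue relates the equilibrium supersaturation over a droplet to a Kelvin (curvature) term and a Raoult (solute) term. A minimal numerical sketch, in which the solute coefficient B and the diameter range are illustrative assumptions:

```python
import numpy as np

# Kelvin coefficient A from standard constants (SI units)
sigma, Mw, R, T, rho_w = 0.072, 0.018, 8.314, 298.15, 1000.0
A = 4 * sigma * Mw / (R * T * rho_w)        # ~2.1e-9 m

def kohler_s(D, B):
    """Fractional equilibrium supersaturation s(D) ~ A/D - B/D**3."""
    return A / D - B / D**3

B = 1e-24                                   # Raoult term; illustrative
D = np.logspace(-8, -6, 4000)               # wet diameters, m
s = kohler_s(D, B)
i = int(np.argmax(s))
D_crit, s_crit = D[i], s[i]                 # activation threshold
# closed-form check: s_c = sqrt(4 A^3 / (27 B))
s_crit_analytic = np.sqrt(4 * A**3 / (27 * B))
```

The grid maximum should match the closed-form critical supersaturation; it is approximations to exactly this maximization, made inside GCM activation parameterizations, that the uncertainty estimates above address.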
Emerging from the bottleneck: benefits of the comparative approach to modern neuroscience.
Brenowitz, Eliot A; Zakon, Harold H
2015-05-01
Neuroscience has historically exploited a wide diversity of animal taxa. Recently, however, research has focused increasingly on a few model species. This trend has accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs that are often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Z3 topological order in the face-centered-cubic quantum plaquette model
NASA Astrophysics Data System (ADS)
Devakul, Trithep
2018-04-01
We examine the topological order in the resonating singlet valence plaquette (RSVP) phase of the hard-core quantum plaquette model (QPM) on the face-centered cubic (FCC) lattice. To do this, we construct a Rokhsar-Kivelson type Hamiltonian of local plaquette resonances. This model is shown to exhibit a Z3 topological order, which we show by identifying a Z3 topological constant (which leads to a 3^3-fold topological ground state degeneracy on the 3-torus) and topological pointlike charge and looplike magnetic excitations which obey Z3 statistics. We also consider an exactly solvable generalization of this model, which makes the geometrical origin of the Z3 order explicitly clear. For other models and lattices, such generalizations produce a wide variety of topological phases, some of which are novel fracton phases.
Deformed coset models from gauged WZW actions
NASA Astrophysics Data System (ADS)
Park, Q.-Han
1994-06-01
A general Lagrangian formulation of integrably deformed G/H-coset models is given. We consider the G/H-coset model in terms of the gauged Wess-Zumino-Witten action and obtain an integrable deformation by adding a potential energy term Tr(gTg^{-1}T̄), where the algebra elements T, T̄ belong to the center of the algebra h associated with the subgroup H. We show that the classical equation of motion of the deformed coset model can be identified with the integrability condition of certain linear equations which makes the use of the inverse scattering method possible. Using the linear equation, we give a systematic way to construct infinitely many conserved currents as well as soliton solutions. In the case of the parafermionic SU(2)/U(1)-coset model, we derive n-solitons and conserved currents explicitly.
Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion
NASA Astrophysics Data System (ADS)
Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel
2011-08-01
The paper deals with constitutive modelling of highly anisotropic sheet metals. It presents FEM based earing predictions in cup drawing simulation of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, which is a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of given experimental data. In order to use the model in sheet metal forming simulations we have implemented it in the general purpose finite element code ABAQUS/Explicit via the VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Due to the scheme's effectiveness the CPU time consumption for a simulation is comparable to the time consumption of built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a developed numerical procedure based on minimization of a dedicated cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and the accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
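The Monte Carlo benchmark that the paper compares against can be sketched for a toy two-cell system. Everything below (the lognormal recharge distribution, the leakage and exchange coefficients) is an illustrative assumption rather than the authors' formulation, which derives the head moments analytically via the constrained-state method instead of sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_heads(n_mc=2000, n_steps=120, S=0.1, k=0.05,
                   a=0.02, q=0.3):
    """Monte Carlo on a two-cell lumped aquifer with uncertain recharge:
    per step,  S * dh_i = R_i - q - a*h_i + k*(h_j - h_i),
    with R_i lognormal recharge, q a fixed abstraction, a a leakage
    coefficient keeping heads stationary, k the inter-cell exchange."""
    h = np.zeros((n_mc, 2))
    for _ in range(n_steps):
        R = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=(n_mc, 2))
        exchange = k * (h[:, ::-1] - h)          # flow between the cells
        h = h + (R - q - a * h + exchange) / S
    return h

h = simulate_heads()
mean = h.mean(axis=0)     # compare against analytic moment expressions
cov = np.cov(h.T)         # off-diagonal: covariance of adjacent cells
```

The positive off-diagonal covariance is exactly the adjacent-cell term whose explicit inclusion, per the abstract, yields more accurate head-variance estimates than treating cells independently.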
NASA Astrophysics Data System (ADS)
Ulfah, S.; Awalludin, S. A.; Wahidin
2018-01-01
The advection-diffusion model is one of the mathematical models which can be used to understand the distribution of air pollutants in the atmosphere. This study uses the time-dependent 2D advection-diffusion model to simulate the air pollution distribution, in order to find out whether the pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered in the model as parameters. The model is solved using the explicit finite difference method and visualized by a computer program developed in the Lazarus programming environment. The results show that atmospheric conditions alone do not conclusively determine the concentration level of pollutants, as each parameter in the model has its own effect under each atmospheric condition.
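An explicit finite-difference scheme for such a model can be sketched as follows. The grid size, wind components, eddy diffusivity and source position are illustrative assumptions, with first-order upwind advection standing in for whatever differencing the authors chose:

```python
import numpy as np

def advect_diffuse(n=41, steps=200, dx=1.0, dt=0.1, u=1.0, v=0.5, K=0.5):
    """Explicit (FTCS + upwind) solution of the 2-D advection-diffusion
    equation  dC/dt + u dC/dx + v dC/dy = K (d2C/dx2 + d2C/dy2)
    with a continuous point source; axis 0 plays the role of x.
    Stability requires K*dt/dx**2 <= 1/4 and |u|*dt/dx <= 1 (both hold)."""
    C = np.zeros((n, n))
    src = (5, n // 2)                       # emission point (assumed)
    for _ in range(steps):
        C[src] += 1.0 * dt                  # constant emission rate
        lap = (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
               np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4.0 * C) / dx**2
        adv = (u * (C - np.roll(C, 1, 0)) / dx +   # upwind, u > 0
               v * (C - np.roll(C, 1, 1)) / dx)    # upwind, v > 0
        C = C + dt * (K * lap - adv)
        C[0, :] = C[-1, :] = C[:, 0] = C[:, -1] = 0.0  # open boundaries
    return C

C = advect_diffuse()
```

With these settings the plume stretches downwind of the source, and the listed stability bounds are the explicit-scheme constraints that force the small time steps mentioned in such studies.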
High Order Numerical Methods for LES of Turbulent Flows with Shocks
NASA Technical Reports Server (NTRS)
Kotov, D. V.; Yee, H. C.; Hadjadj, A.; Wray, A.; Sjögreen, B.
2014-01-01
Simulation of turbulent flows with shocks employing explicit subgrid-scale (SGS) filtering may encounter a loss of accuracy in the vicinity of a shock. In this work we perform a comparative study of different approaches to reduce this loss of accuracy within the framework of the dynamic Germano SGS model. One of the possible approaches is to apply Harten's subcell resolution procedure to locate and sharpen the shock, and to use a one-sided test filter at the grid points adjacent to the exact shock location. The other considered approach is local disabling of the SGS terms in the vicinity of the shock location. In this study we use a canonical shock-turbulence interaction problem for comparison of the considered modifications of the SGS filtering procedure. For the considered test case both approaches show a similar improvement in the accuracy near the shock.
NASA Astrophysics Data System (ADS)
Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.
2015-09-01
Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data; and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. 
Consequently, the use of low and high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than constrained multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic (alkane, aromatic) and sesquiterpene precursors and decreased SOA contributions from isoprene and monoterpene precursors relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model in which "ageing" reactions are added on top of the existing two-product parameterization is considered. This hybrid scheme formed at least three times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these "hybrid" multi-generational schemes should be used with great caution in regional models.
NASA Astrophysics Data System (ADS)
Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.
2016-02-01
Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data.
Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the use of low and high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic (alkane, aromatic) and sesquiterpene precursors and decreased SOA contributions from isoprene and monoterpene precursors relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model in which ageing reactions are added on top of the existing two-product parameterization is considered. This hybrid scheme formed at least 3 times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these hybrid multi-generational schemes should be used with great caution in regional models.
Finite Element Analysis of the Maximum Stress at the Joints of the Transmission Tower
NASA Astrophysics Data System (ADS)
Itam, Zarina; Beddu, Salmia; Liyana Mohd Kamal, Nur; Bamashmos, Khaled H.
2016-03-01
Transmission towers are tall structures, usually steel lattice towers, used to support an overhead power line. Typically, transmission towers are analyzed as frame-truss systems and the members are assumed to be pin-connected, without explicitly considering the effects of joints on the tower behavior. In this research, an engineering example of a joint is analyzed with consideration of the joint detailing to investigate how it affects the tower analysis. A static analysis using STAAD Pro was conducted to identify the joint with the maximum stress. This joint was then explicitly analyzed in ANSYS using the finite element method. Three approaches were used in the software: a simple plate model, bonded contact with no bolts, and beam element bolts. Results from the joint analysis show that stress values increased when joint details were considered. This proves that joints and connections play an important role in the distribution of stress within the transmission tower.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
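The Neumann-series step of such an algorithm has a simple generic skeleton: if the linearized forward operator can be written as I − K with a contraction K, its inverse is applied by iterating x ← b + Kx. The sketch below uses a random contraction matrix in place of the actual SPECT source/attenuation operator:

```python
import numpy as np

def neumann_solve(K, b, n_iter=200):
    """Invert (I - K) by the Neumann series: for ||K|| < 1,
    (I - K)^-1 b = sum_n K^n b, accumulated via x <- b + K x.
    In the SPECT setting K would couple source and attenuation
    perturbations; here it is just a generic contraction."""
    x = b.copy()
    for _ in range(n_iter):
        x = b + K @ x
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((6, 6))
K *= 0.4 / np.linalg.norm(K, 2)       # enforce spectral norm < 1
b = rng.standard_normal(6)
x = neumann_solve(K, b)
x_direct = np.linalg.solve(np.eye(6) - K, b)   # agrees with direct solve
```

The geometric convergence rate is set by the spectral norm of K, which is why the rigorous invertibility result for small attenuations is what licenses the iteration in the paper's regime.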
Test Input Generation for Red-Black Trees using Abstraction
NASA Technical Reports Server (NTRS)
Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek
2005-01-01
We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
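The state-matching idea is easy to sketch: breadth-first exploration of method-call sequences, discarding any sequence whose resulting (abstract) state has been seen before. Here a plain set with a sorted-tuple abstraction stands in for the red-black tree and its abstraction mapping, and the operation names are invented for the sketch:

```python
from collections import deque

def generate_tests(values=(1, 2, 3), max_len=4):
    """Generate non-redundant call sequences using state matching."""
    seen = {()}                      # abstract states already visited
    tests = []                       # the emitted test sequences
    queue = deque([((), ())])        # (abstract state, call sequence)
    while queue:
        state, seq = queue.popleft()
        if len(seq) == max_len:
            continue
        for v in values:
            for op in ("put", "remove"):
                s = set(state)
                if op == "put":
                    s.add(v)
                else:
                    s.discard(v)
                abstract = tuple(sorted(s))   # abstraction mapping
                if abstract not in seen:      # state matching prunes here
                    seen.add(abstract)
                    new_seq = seq + ((op, v),)
                    tests.append(new_seq)
                    queue.append((abstract, new_seq))
    return tests

tests = generate_tests()
```

With three keys this emits exactly one sequence per reachable non-empty key set (seven of them). Swapping the abstraction for the identity map recovers exhaustive exploration, while coarser mappings (e.g. dropping node colors in a red-black tree) give the lossy under-approximations described above.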
Cell communities and robustness in development.
Monk, N A
1997-11-01
The robustness of patterning events in development is a key feature that must be accounted for in proposed models of these events. When considering explicitly cellular systems, robustness can be exhibited at different levels of organization. Consideration of two widespread patterning mechanisms suggests that robustness at the level of cell communities can result from variable development at the level of individual cells; models of these mechanisms show how interactions between participating cells guarantee community-level robustness. Cooperative interactions enhance homogeneity within communities of like cells and the sharpness of boundaries between communities of distinct cells, while competitive interactions amplify small inhomogeneities within communities of initially equivalent cells, resulting in fine-grained patterns of cell specialization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitsyn, N. A.
We consider nonadiabatic transitions in explicitly time-dependent systems with Hamiltonians of the form Ĥ(t) = Â + B̂t + Ĉ/t, where t is time and Â, B̂, Ĉ are Hermitian N × N matrices. We show that in any model of this type, scattering matrix elements satisfy nontrivial exact constraints that follow from the absence of the Stokes phenomenon for solutions with specific conditions at t → −∞. This allows one to continue such solutions analytically to t → +∞, and connect their asymptotic behavior at t → −∞ and t → +∞. This property becomes particularly useful when a model shows additional discrete symmetries. Specifically, we derive a number of simple exact constraints and explicit expressions for scattering probabilities in such systems.
Sign reversals of the output autocorrelation function for the stochastic Bernoulli-Verhulst equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumi, N., E-mail: Neeme.Lumi@tlu.ee; Mankin, R., E-mail: Romi.Mankin@tlu.ee
2015-10-28
We consider a stochastic Bernoulli-Verhulst equation as a model for population growth processes. The effect of a fluctuating environment on the carrying capacity of a population is modeled as colored dichotomous noise. Relying on the composite master equation, an explicit expression for the stationary autocorrelation function (ACF) of population sizes is found. On the basis of this expression a nonmonotonic decay of the ACF with increasing lag-time is shown. Moreover, in a certain regime of the noise parameters the ACF demonstrates anticorrelation as well as related sign reversals at some values of the lag-time. The conditions for the appearance of this highly unexpected effect are also discussed.
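The setup can be sketched numerically: simulate growth with a carrying capacity that flips between two levels as telegraph noise, then estimate the stationary ACF from a long trajectory. The sketch uses the plain logistic (Bernoulli exponent 1) nonlinearity and illustrative parameter values, so it shows the machinery rather than the sign-reversal regime identified analytically in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def stationary_acf(r=1.0, k_lo=0.5, k_hi=1.5, rate=0.5,
                   dt=0.01, n_steps=200_000, max_lag=300):
    """Estimate the stationary ACF of X for dX/dt = r X (1 - X/K(t)),
    where K(t) is dichotomous (telegraph) noise flipping between k_lo
    and k_hi with switching rate `rate`.  Values are illustrative."""
    x, k = 1.0, k_hi
    xs = np.empty(n_steps)
    for i in range(n_steps):
        if rng.random() < rate * dt:        # telegraph-noise switch
            k = k_lo if k == k_hi else k_hi
        x += dt * r * x * (1.0 - x / k)     # Euler step of the growth law
        xs[i] = x
    d = xs[n_steps // 4:]                   # drop the transient
    d = d - d.mean()
    acf = np.array([np.dot(d[:d.size - l], d[l:]) / (d.size - l)
                    for l in range(max_lag)])
    return acf / acf[0]

acf = stationary_acf()
```

Scanning the noise parameters of such a simulation is the natural numerical check on the analytically predicted anticorrelation and sign reversals.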
Baker, Nathan A.; McCammon, J. Andrew
2008-01-01
The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kBT ec^-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217
NASA Astrophysics Data System (ADS)
Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew
2007-10-01
The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kBT ec^-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å^3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.
Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo
2017-01-01
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
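The parcellation-plus-predictors setup can be sketched without the mixed-model machinery: divide the image into grid cells, attach a mean-saliency predictor and a central-bias (distance-to-center) predictor to each cell, and fit a logistic model for whether the cell was fixated. Everything below is synthetic, and a real analysis would add the by-subject and by-item random effects (e.g. via GridFix plus a GLMM package):

```python
import numpy as np

rng = np.random.default_rng(3)

G = 8                                          # parcellation: G x G cells
sal = rng.random((G, G))                       # toy mean saliency per cell
yy, xx = np.mgrid[0:G, 0:G]
dist = np.hypot(yy - (G - 1) / 2, xx - (G - 1) / 2)  # central-bias term

# synthetic fixation rates: central bias plus a saliency effect
p_true = 1.0 / (1.0 + np.exp(-(1.0 + 1.5 * sal - 0.8 * dist)))

# design matrix: intercept, saliency predictor, central-bias predictor
X = np.column_stack([np.ones(G * G), sal.ravel(), dist.ravel()])
y = p_true.ravel()     # expected fixation rates keep the sketch exact

beta = np.zeros(3)
for _ in range(5000):  # plain logistic regression by gradient ascent
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / y.size
```

Recovering a positive saliency coefficient after the negative central-bias coefficient is partialled out is the sketch analogue of the paper's "above and beyond the central bias" predictor.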
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
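The tradeoff described above can be reproduced on a toy problem: a single linear reservoir has an analytic solution against which a fixed-step explicit Euler scheme and an adaptive Heun scheme (with an embedded Euler/Heun error estimate) can be compared. The forcing, rate constant and tolerance are illustrative, and real conceptual models add thresholds and multiple stores:

```python
import numpy as np

def rhs(S, P=2.0, k=3.0):
    """Toy linear reservoir dS/dt = P - k*S (analytically solvable)."""
    return P - k * S

def euler_fixed(S0, T, dt):
    """Fixed-step explicit Euler, as commonly hard-wired in models."""
    S, t = S0, 0.0
    while t < T - 1e-12:
        S += dt * rhs(S)
        t += dt
    return S

def heun_adaptive(S0, T, tol=1e-5):
    """Adaptive Heun with an embedded Euler/Heun local error estimate."""
    S, t, dt = S0, 0.0, 0.05
    while t < T - 1e-12:
        dt = min(dt, T - t)
        f0 = rhs(S)
        S_euler = S + dt * f0                        # 1st-order predictor
        S_heun = S + 0.5 * dt * (f0 + rhs(S_euler))  # 2nd-order corrector
        err = abs(S_heun - S_euler)                  # local error estimate
        if err <= tol:                               # accept the step
            S, t = S_heun, t + dt
        dt *= min(5.0, 0.9 * (tol / max(err, 1e-15)) ** 0.5)
    return S

S0, T = 1.0, 1.0
S_exact = 2.0 / 3.0 + (S0 - 2.0 / 3.0) * np.exp(-3.0 * T)
err_euler = abs(euler_fixed(S0, T, dt=0.1) - S_exact)
err_heun = abs(heun_adaptive(S0, T) - S_exact)
```

Even on this smooth problem the adaptive scheme beats the coarse fixed-step Euler solution by orders of magnitude at modest cost, which is the accuracy-versus-efficiency balance the study quantifies for full conceptual models.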
Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne
2016-11-03
Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. 
We present results from the Framingham Heart Study in which lipid measurements and myocardial infarction data events were collected over a period of 26 years.
Empirical methods for modeling landscape change, ecosystem services, and biodiversity
David Lewis; Ralph Alig
2009-01-01
The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...
SPATIALLY EXPLICIT MICRO-LEVEL MODELLING OF LAND USE CHANGE AT THE RURAL-URBAN INTERFACE. (R828012)
This paper describes micro-economic models of land use change applicable to the rural–urban interface in the US. Use of a spatially explicit micro-level modelling approach permits the analysis of regional patterns of land use as the aggregate outcomes of many, disparate...
A Galilean Invariant Explicit Algebraic Reynolds Stress Model for Curved Flows
NASA Technical Reports Server (NTRS)
Girimaji, Sharath
1996-01-01
A Galilean invariant weak-equilibrium hypothesis that is sensitive to streamline curvature is proposed. The hypothesis leads to an algebraic Reynolds stress model for curved flows that is fully explicit and self-consistent. The model is tested in curved homogeneous shear flow: agreement with the Reynolds stress closure model is excellent, and agreement with available experimental data is adequate.
A functional-dynamic reflection on participatory processes in modeling projects.
Seidl, Roman
2015-12-01
The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it is not well investigated if and how explicitly these functions and the dynamics of a participatory process are reflected by modeling projects in particular. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation (most often, more than one per project can be identified) and to the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling, covering diverse approaches, and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and perfectly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus quality management.
Chassin, Laurie; Presson, Clark C; Sherman, Steven J; Seo, Dong-Chul; Macy, Jonathan T
2010-12-01
The current study tested implicit and explicit attitudes as prospective predictors of smoking cessation in a Midwestern community sample of smokers. Results showed that the effects of attitudes significantly varied with levels of experienced failure to control smoking and plans to quit. Explicit attitudes significantly predicted later cessation among those with low (but not high or average) levels of experienced failure to control smoking. Conversely, implicit attitudes significantly predicted later cessation among those with high levels of experienced failure to control smoking, but only if they had a plan to quit. Because smoking cessation involves both controlled and automatic processes, interventions may need to target attitude change at both implicit and explicit attitudes.
A Dynamic Hydrology-Critical Zone Framework for Rainfall-triggered Landslide Hazard Prediction
NASA Astrophysics Data System (ADS)
Dialynas, Y. G.; Foufoula-Georgiou, E.; Dietrich, W. E.; Bras, R. L.
2017-12-01
Watershed-scale coupled hydrologic-stability models are still in their early stages, and are characterized by important limitations: (a) either they assume steady-state or quasi-dynamic watershed hydrology, or (b) they simulate landslide occurrence based on a simple one-dimensional stability criterion. Here we develop a three-dimensional landslide prediction framework, based on a coupled hydrologic-slope stability model and incorporation of the influence of deep critical zone processes (i.e., flow through weathered bedrock and exfiltration to the colluvium) for more accurate prediction of the timing, location, and extent of landslides. Specifically, a watershed-scale slope stability model that systematically accounts for the contribution of driving and resisting forces in three-dimensional hillslope segments was coupled with a spatially-explicit and physically-based hydrologic model. The landslide prediction framework considers critical zone processes and structure, and explicitly accounts for the spatial heterogeneity of surface and subsurface properties that control slope stability, including soil and weathered bedrock hydrological and mechanical characteristics, vegetation, and slope morphology. To test performance, the model was applied in landslide-prone sites in the US, the hydrology of which has been extensively studied. Results showed that both rainfall infiltration in the soil and groundwater exfiltration exert a strong control on the timing and magnitude of landslide occurrence. We demonstrate the extent to which three-dimensional slope destabilizing factors, which are modulated by dynamic hydrologic conditions in the soil-bedrock column, control landslide initiation at the watershed scale.
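For contrast with the three-dimensional treatment, the simple one-dimensional criterion that such frameworks improve upon is the classic infinite-slope factor of safety. A minimal sketch with hypothetical soil parameters shows how rising saturation pushes a slope from stable to unstable:

```python
import numpy as np

def factor_of_safety(z, theta, m, c=5e3, phi=np.deg2rad(30),
                     gamma=18e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety (1D criterion).
    z: soil depth (m); theta: slope angle (rad); m: saturated fraction of the
    soil column [0, 1]; c: cohesion (Pa); phi: friction angle; gamma, gamma_w:
    unit weights of soil and water (N/m^3). All values here are illustrative."""
    u = gamma_w * m * z * np.cos(theta) ** 2          # pore-water pressure
    tau = gamma * z * np.sin(theta) * np.cos(theta)   # driving shear stress
    resist = c + (gamma * z * np.cos(theta) ** 2 - u) * np.tan(phi)
    return resist / tau

theta = np.deg2rad(35)
fs_dry = factor_of_safety(2.0, theta, m=0.0)   # dry slope: FS > 1 (stable)
fs_wet = factor_of_safety(2.0, theta, m=1.0)   # saturated: FS < 1 (failure)
print(fs_dry, fs_wet)
```

The coupled framework in the abstract replaces the static saturation fraction `m` with dynamically simulated soil and weathered-bedrock moisture, and the 1D force balance with a three-dimensional one over hillslope segments.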
A spatial model of white sturgeon rearing habitat in the lower Columbia River, USA
Hatten, J.R.; Parsley, M.J.
2009-01-01
Concerns over the potential effects of in-water placement of dredged materials prompted us to develop a GIS-based model that characterizes in a spatially explicit manner white sturgeon Acipenser transmontanus rearing habitat in the lower Columbia River, USA. The spatial model was developed using water depth, riverbed slope and roughness, fish positions collected in 2002, and Mahalanobis distance (D2). We created a habitat suitability map by identifying a Mahalanobis distance under which >50% of white sturgeon locations occurred in 2002 (i.e., high-probability habitat). White sturgeon preferred relatively moderate to high water depths, and low to moderate riverbed slope and roughness values. The eigenvectors indicated that riverbed slope and roughness were slightly more important than water depth, but all three variables were important. We estimated the impacts that fill might have on sturgeon habitat by simulating the addition of fill to the thalweg, in 3-m increments, and recomputing Mahalanobis distances. Channel filling simulations revealed that up to 9 m of fill would have little impact on high-probability habitat, but 12 and 15 m of fill resulted in habitat declines of ~12% and ~45%, respectively. This is the first spatially explicit predictive model of white sturgeon rearing habitat in the lower Columbia River, and the first to quantitatively predict the impacts of dredging operations on sturgeon habitat. Future research should consider whether water velocity improves the accuracy and specificity of the model and should assess its applicability to other areas in the Columbia River.
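The habitat-suitability logic can be sketched with synthetic numbers (the variables mirror the paper's depth/slope/roughness triplet, but every value below is invented): compute the Mahalanobis distance of each raster cell from the multivariate mean of used locations, then threshold at the D2 value containing >50% of fish locations.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(2)
# hypothetical habitat variables at sturgeon positions: depth, slope, roughness
fish = rng.multivariate_normal([15.0, 2.0, 0.5],
                               [[9, 0.5, 0.1], [0.5, 1, 0.05], [0.1, 0.05, 0.04]],
                               size=200)
mean = fish.mean(axis=0)
VI = np.linalg.inv(np.cov(fish, rowvar=False))   # inverse covariance for D2

# D2 for candidate raster cells (two hypothetical cells shown)
cells = np.array([[14.0, 2.1, 0.45],   # similar to used habitat
                  [2.0, 8.0, 1.9]])    # dissimilar: shallow, steep, rough
d2 = np.array([mahalanobis(c, mean, VI) ** 2 for c in cells])

# suitability cutoff: the D2 value containing >50% of fish locations (the median)
cutoff = np.median([mahalanobis(f, mean, VI) ** 2 for f in fish])
print(d2 < cutoff)   # first cell classified high-probability habitat
```

The fill simulations in the abstract amount to perturbing the depth column of `cells` and recomputing `d2` against the same cutoff.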
High Performance Programming Using Explicit Shared Memory Model on Cray T3D
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where applications using the native message-passing library CMMD also run about 4 to 5 times slower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc., is presented.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
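The role the likelihood function plays in Bayesian parameter estimation can be illustrated with a toy one-parameter model and a random-walk Metropolis sampler (a minimal stand-in for DREAM(ZS); the decay model, noise level, and tuning constants below are all invented):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 50)
k_true = 0.7
data = np.exp(-k_true * t) + rng.normal(0, 0.05, t.size)  # synthetic observations

def log_lik(k, sigma=0.05):
    """Gaussian i.i.d. residual log-likelihood -- the assumption under scrutiny;
    a formal generalized likelihood would relax normality, homoscedasticity,
    and independence of the residuals r."""
    r = data - np.exp(-k * t)
    return -0.5 * np.sum((r / sigma) ** 2)

# random-walk Metropolis with a flat prior on k
k, chain = 0.5, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(k):
        k = prop
    chain.append(k)
print(np.mean(chain[1000:]))   # posterior mean near k_true after burn-in
```

Swapping `log_lik` for a heteroscedastic, autocorrelated residual model changes only this one function; the sampler is untouched, which is why the choice of likelihood can be audited separately from the MCMC machinery.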
Covariance Between Genotypic Effects and its Use for Genomic Inference in Half-Sib Families
Wittenburg, Dörte; Teuscher, Friedrich; Klosa, Jan; Reinsch, Norbert
2016-01-01
In livestock, current statistical approaches utilize extensive molecular data, e.g., single nucleotide polymorphisms (SNPs), to improve the genetic evaluation of individuals. The number of model parameters increases with the number of SNPs, so the multicollinearity between covariates can affect the results obtained using whole genome regression methods. In this study, dependencies between SNPs due to linkage and linkage disequilibrium among the chromosome segments were explicitly considered in methods used to estimate the effects of SNPs. The population structure affects the extent of such dependencies, so the covariance among SNP genotypes was derived for half-sib families, which are typical in livestock populations. Conditional on the SNP haplotypes of the common parent (sire), the theoretical covariance was determined using the haplotype frequencies of the population from which the individual parent (dam) was derived. The resulting covariance matrix was included in a statistical model for a trait of interest, and this covariance matrix was then used to specify prior assumptions for SNP effects in a Bayesian framework. The approach was applied to one family in simulated scenarios (few and many quantitative trait loci) and using semireal data obtained from dairy cattle to identify genome segments that affect performance traits, as well as to investigate the impact on predictive ability. Compared with a method that does not explicitly consider any of the relationship among predictor variables, the accuracy of genetic value prediction was improved by 10–22%. The results show that the inclusion of dependence is particularly important for genomic inference based on small sample sizes. PMID:27402363
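The idea of folding a known genotype covariance into the prior can be sketched as a generalized ridge regression (an illustration of the principle, not the authors' half-sib derivation; the AR(1) linkage-disequilibrium structure and all numbers are invented). With prior beta ~ N(0, tau^2 * Sigma), the posterior mode solves a Sigma-weighted penalized least squares problem:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 50
# correlated SNP genotypes: neighbouring markers in LD (AR(1) correlation)
Sigma = 0.8 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta = np.zeros(p)
beta[[5, 20]] = 1.0                       # two causal markers
y = X @ beta + rng.normal(0, 1, n)

lam = 1.0
# standard ridge: independent prior on SNP effects
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
# covariance-aware prior: beta ~ N(0, tau^2 * Sigma)  ->  generalized ridge
b_cov = np.linalg.solve(X.T @ X + lam * np.linalg.inv(Sigma), X.T @ y)
print(np.round(b_cov[[5, 20]], 2))        # causal effects clearly recovered
```

In the paper, `Sigma` is not estimated from the genotypes but derived theoretically from the sire haplotypes and population haplotype frequencies, which is what makes the approach useful at small sample sizes.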
An image-based skeletal dosimetry model for the ICRP reference adult male—internal electron sources
NASA Astrophysics Data System (ADS)
Hough, Matthew; Johnson, Perry; Rajon, Didier; Jokisch, Derek; Lee, Choonsik; Bolch, Wesley
2011-04-01
In this study, a comprehensive electron dosimetry model of the adult male skeletal tissues is presented. The model is constructed using the University of Florida adult male hybrid phantom of Lee et al (2010 Phys. Med. Biol. 55 339-63) and the EGSnrc-based Paired Image Radiation Transport code of Shah et al (2005 J. Nucl. Med. 46 344-53). Target tissues include the active bone marrow, associated with radiogenic leukemia, and total shallow marrow, associated with radiogenic bone cancer. Monoenergetic electron emissions are considered over the energy range 1 keV to 10 MeV for the following sources: bone marrow (active and inactive), trabecular bone (surfaces and volumes), and cortical bone (surfaces and volumes). Specific absorbed fractions are computed according to the MIRD schema, and are given as skeletal-averaged values in the paper with site-specific values reported in both tabular and graphical format in an electronic annex available from http://stacks.iop.org/0031-9155/56/2309/mmedia. The distribution of cortical bone and spongiosa at the macroscopic dimensions of the phantom, as well as the distribution of trabecular bone and marrow tissues at the microscopic dimensions of the phantom, is imposed through detailed analyses of whole-body ex vivo CT images (1 mm resolution) and spongiosa-specific ex vivo microCT images (30 µm resolution), respectively, taken from a 40-year-old male cadaver. The method utilized in this work includes: (1) explicit accounting for changes in marrow self-dose with variations in marrow cellularity, (2) explicit accounting for electron escape from spongiosa, (3) explicit consideration of spongiosa cross-fire from cortical bone, and (4) explicit consideration of the ICRP's change in the surrogate tissue region defining the location of the osteoprogenitor cells (from a 10 µm endosteal layer covering the trabecular and cortical surfaces to a 50 µm shallow marrow layer covering trabecular and medullary cavity surfaces).
Skeletal-averaged values of absorbed fraction in the present model are noted to be very compatible with those weighted by the skeletal tissue distributions found in the ICRP Publication 110 adult male and female voxel phantoms, but are in many cases incompatible with values used in current and widely implemented internal dosimetry software.
Wall modeled LES of wind turbine wakes with geometrical effects
NASA Astrophysics Data System (ADS)
Bricteux, Laurent; Benard, Pierre; Zeoli, Stephanie; Moureau, Vincent; Lartigue, Ghislain; Vire, Axelle
2017-11-01
This study focuses on the prediction of wind turbine wakes when geometrical effects such as the nacelle, tower, and built environment are taken into account. The aim is to demonstrate the ability of a high order unstructured solver called YALES2 to perform wall modeled LES of wind turbine wake turbulence. The wind turbine rotor is modeled using an Actuator Line Model (ALM), while the geometrical details are explicitly meshed thanks to the use of an unstructured grid. As high Reynolds number flows are considered, sub-grid scale models as well as wall modeling are required. The first test case concerns a wind turbine operating in a wind tunnel, which allows the proposed methodology to be validated against experimental data. The second test case concerns the simulation of a wind turbine wake in a complex environment (e.g., a building) using realistic turbulent inflow conditions.
The importance of explicitly mapping instructional analogies in science education
NASA Astrophysics Data System (ADS)
Asay, Loretta Johnson
Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.
Jeff Jenness; J. Judson Wynne
2005-01-01
In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
Moderators of the Relationship between Implicit and Explicit Evaluation
Nosek, Brian A.
2005-01-01
Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship: self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292
Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions
Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.
2015-01-01
Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is typically combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a consequence of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Alternative Stable States, Coral Reefs, and Smooth Dynamics with a Kick.
Ippolito, Stephen; Naudot, Vincent; Noonburg, Erik G
2016-03-01
We consider a computer simulation, which was found to be faithful to time series data for Caribbean coral reefs, and an analytical model to help understand the dynamics of the simulation. The analytical model is a system of ordinary differential equations (ODEs), and the authors claim this model demonstrates the existence of alternative stable states. Evidence for an alternative stable state would be a sudden shift in coral and macroalgae populations while the grazing rate remains constant. The results of such shifts, however, are often confounded by changes in grazing rate. Although the ODEs suggest alternative stable states, they need modification to explicitly account for shifts or discrete events such as hurricanes. The goal of this paper is to study the simulation dynamics through a simplified analytical representation. We proceed by modifying the original analytical model through incorporating discrete changes into the ODEs. We then analyze the resulting dynamics and their bifurcations with respect to changes in grazing rate and hurricane frequency. In particular, a "kick" enabling the ODEs to represent impulse events is added. Beyond adding a "kick," we employ the grazing function suggested by the simulation. The extended model was fit to the simulation data to support its use and predicts the existence of cycles depending nonlinearly on grazing rates and hurricane frequency. These cycles may bring new insights into reef health, restoration, and dynamics.
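The "smooth dynamics with a kick" construction can be sketched with scipy: integrate a smooth macroalgae/coral ODE between hurricanes, then apply a discrete multiplicative kick to the coral state. The toy vector field below is Mumby-style but with invented coefficients; it is not the authors' fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reef(t, y, g=0.3):
    """Simplified macroalgae (M) / coral (C) competition with grazing rate g.
    All coefficients are illustrative placeholders."""
    M, C = y
    T = max(1.0 - M - C, 0.0)                     # free space (algal turf)
    dM = 0.1 * M * C - g * M / (M + T + 1e-9) + 0.05 * M * T
    dC = 0.2 * T * C - 0.02 * C - 0.1 * M * C
    return [dM, dC]

y = np.array([0.1, 0.6])                 # initial macroalgae and coral cover
t, t_end, period = 0.0, 60.0, 15.0       # a hurricane "kick" every 15 years
while t < t_end:
    sol = solve_ivp(reef, (t, min(t + period, t_end)), y, rtol=1e-8)
    y = sol.y[:, -1]                     # smooth flow between disturbances
    y[1] *= 0.5                          # kick: hurricane removes half the coral
    t += period
print(y)    # state after repeated impulsive disturbances
```

Sweeping `g` and `period` over a grid and recording the long-run state is the numerical analogue of the bifurcation analysis in grazing rate and hurricane frequency described in the abstract.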
Playing relativistic billiards beyond graphene
NASA Astrophysics Data System (ADS)
Sadurní, E.; Seligman, T. H.; Mortessagne, F.
2010-05-01
The possibility of using hexagonal structures in general, and graphene in particular, to emulate the Dirac equation is the topic under consideration here. We show that Dirac oscillators with or without rest mass can be emulated by distorting a tight-binding model on a hexagonal structure. In the quest to make a toy model for such relativistic equations, we show that a hexagonal lattice of attractive potential wells would be a good candidate. We first consider the corresponding one-dimensional (1D) model, giving rise to a 1D Dirac oscillator, and then explicitly construct the deformations needed in the 2D case. Finally, we discuss how such a model can be implemented as an electromagnetic billiard using arrays of dielectric resonators between two conducting plates that ensure evanescent modes outside the resonators for transverse electric modes, and we describe a feasible experimental setup.
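The starting point of such emulations is the two-band tight-binding model on a honeycomb lattice, whose dispersion vanishes linearly at the Dirac points. A minimal sketch (the standard textbook dispersion; the hopping value is a typical graphene figure, not taken from the paper):

```python
import numpy as np

t = 2.8                                   # nearest-neighbour hopping (eV), typical for graphene
a1 = np.array([1.5, np.sqrt(3) / 2])      # lattice vectors, carbon-carbon distance = 1
a2 = np.array([1.5, -np.sqrt(3) / 2])

def bands(k):
    """Two-band tight-binding dispersion E(k) = +/- t |f(k)| of a honeycomb lattice."""
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return np.array([-t * abs(f), t * abs(f)])

K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3))])   # a Dirac point
print(bands(K))            # gap closes: both bands touch zero here
print(bands(np.zeros(2)))  # at the zone centre the bands sit at -3t and +3t
```

Near `K` the dispersion is linear in the momentum deviation, which is what makes the lattice a simulator of the massless Dirac equation; the paper's distortions of the hoppings add the oscillator potential and, optionally, a mass term.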
NASA Astrophysics Data System (ADS)
Takahashi, Tatsuji; Gunji, Yukio-Pegio
2008-10-01
We pursue anticipation in the second person, or normative anticipation. As a first step, we clarify the three concepts of second person, internal measurement, and asynchroneity by introducing the velocity of logic νl and the velocity of communication νc in the context of social communication. After proving the anticipatory nature of rule-following, or language use in general, via Kripke's "rule-following paradox," we present a mathematical model expressing the internality essential to the second person, taking advantage of equivalences and differences in formal language theory. As a consequence, we show some advantages of negatively considered concepts and arguments by concretizing them into an elementary and explicit formal model. The time development of the model shows a self-organizing property that never results if we adopt a third-person stance.
Darkflation: One scalar to rule them all?
NASA Astrophysics Data System (ADS)
Lalak, Zygmunt; Nakonieczny, Łukasz
2017-03-01
The problem of explaining both inflationary and dark matter physics in the framework of a minimal extension of the Standard Model was investigated. To this end, the Standard Model extended by a real scalar singlet playing the role of the dark matter candidate was considered. We assumed both the dark matter field and the Higgs doublet to be nonminimally coupled to gravity. Using quantum field theory in curved spacetime, we derived an effective action for the inflationary period and analyzed its consequences. In this approach, after integrating out both the dark matter and Standard Model sectors, we obtained an effective action expressed purely in terms of the gravitational field. We paid special attention to determining, by explicit calculation, the form of the coefficients controlling the higher-order-in-curvature gravitational terms. Their connection to the Standard Model coupling constants is discussed.
Requeno, José Ignacio; Colom, José Manuel
2014-12-01
Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.
Quantization of the nonlinear sigma model revisited
NASA Astrophysics Data System (ADS)
Nguyen, Timothy
2016-08-01
We revisit the subject of perturbatively quantizing the nonlinear sigma model in two dimensions from a rigorous, mathematical point of view. Our main contribution is to make precise the cohomological problem of eliminating potential anomalies that may arise when trying to preserve symmetries under quantization. The symmetries we consider are twofold: (i) diffeomorphism covariance for a general target manifold; (ii) a transitive group of isometries when the target manifold is a homogeneous space. We show that there are no anomalies in case (i) and that (ii) is also anomaly-free under additional assumptions on the target homogeneous space, in agreement with the work of Friedan. We carry out some explicit computations for the O(N)-model. Finally, we show how a suitable notion of the renormalization group establishes the Ricci flow as the one loop renormalization group flow of the nonlinear sigma model.
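A minimal statement of the result mentioned at the end (normalizations and sign conventions vary between references; the factors below follow one common convention, with the flow parameter identified with the logarithm of the RG scale up to rescaling):

```latex
\mu \frac{\partial g_{ij}}{\partial \mu} \;=\; \beta_{ij}(g)
\;=\; \frac{1}{2\pi}\, R_{ij} \;+\; O(\text{two loops}),
```

so that, up to rescaling of the flow parameter, the one-loop renormalization group flow of the target-space metric toward the infrared is the Ricci flow \(\partial_{t} g_{ij} = -2 R_{ij}\).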
Heinrichs, Julie; Aldridge, Cameron L.; O'Donnell, Michael; Schumaker, Nathan
2017-01-01
Prioritizing habitats for conservation is a challenging task, particularly for species with fluctuating populations and seasonally dynamic habitat needs. Although the use of resource selection models to identify and prioritize habitat for conservation is increasingly common, their ability to characterize important long-term habitats for dynamic populations is variable. To examine how habitats might be prioritized differently if resource selection were directly and dynamically linked with population fluctuations and movement limitations among seasonal habitats, we constructed a spatially explicit individual-based model for a dramatically fluctuating population requiring temporally varying resources. Using greater sage-grouse (Centrocercus urophasianus) in Wyoming as a case study, we used resource selection function (RSF) maps to guide seasonal movement and habitat selection, but emergent population dynamics and simulated movement limitations modified long-term habitat occupancy. We compared priority habitats in RSF maps to long-term simulated habitat use, and examined the circumstances under which the explicit consideration of movement limitations, in combination with population fluctuations and trends, is likely to alter predictions of important habitats. In doing so, we assessed the future occupancy of protected areas under alternative population and habitat conditions. Habitat prioritizations based on resource selection models alone predicted high use in isolated parcels of habitat and in areas with low connectivity among seasonal habitats. In contrast, results based on more biologically informed simulations emphasized central and connected areas near high-density populations, sometimes predicted to have low selection value. Dynamic models of habitat use can provide additional biological realism that can extend, and in some cases contradict, habitat use predictions generated from short-term or static resource selection analyses.
The explicit inclusion of population dynamics and movement propensities via spatial simulation modeling frameworks may provide an informative means of predicting long-term habitat use, particularly for fluctuating populations with complex seasonal habitat needs. Importantly, our results indicate the possible need to consider habitat selection models as a starting point rather than the common end point for refining and prioritizing habitats for protection for cyclic and highly variable populations.
Linking definitions, mechanisms, and modeling of drought-induced tree death.
Anderegg, William R L; Berry, Joseph A; Field, Christopher B
2012-12-01
Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress.
Exact results for models of multichannel quantum nonadiabatic transitions
Sinitsyn, N. A.
2014-12-11
We consider nonadiabatic transitions in explicitly time-dependent systems with Hamiltonians of the form Hˆ(t)=Aˆ+Bˆt+Cˆ/t, where t is time and Aˆ, Bˆ, Cˆ are Hermitian N × N matrices. We show that in any model of this type, scattering matrix elements satisfy nontrivial exact constraints that follow from the absence of the Stokes phenomenon for solutions with specific conditions at t→–∞. This allows one to continue such solutions analytically to t→+∞, and connect their asymptotic behavior at t→–∞ and t→+∞. This property becomes particularly useful when a model shows additional discrete symmetries. Specifically, we derive a number of simple exact constraints and explicit expressions for scattering probabilities in such systems.
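As a numerical sanity check of the simplest member of this family, the sketch below integrates the two-level case with Cˆ = 0 (the Landau-Zener model) and compares the exact survival probability against direct integration. The coupling g and the time span are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 0.5  # off-diagonal coupling (assumed illustrative value)

def schrodinger(t, psi):
    # H(t) = B*t + A with B = diag(1, -1) and A the off-diagonal coupling
    H = np.array([[t, g], [g, -t]], dtype=complex)
    return -1j * (H @ psi)  # i dpsi/dt = H psi

T = 60.0
psi0 = np.array([1.0 + 0j, 0.0 + 0j])  # start in diabatic state 1 at t = -T
sol = solve_ivp(schrodinger, (-T, T), psi0, rtol=1e-10, atol=1e-12)
psiT = sol.y[:, -1]

norm = np.abs(psiT[0])**2 + np.abs(psiT[1])**2  # unitarity check
p_stay = np.abs(psiT[0])**2                     # diabatic survival probability
p_lz = np.exp(-np.pi * g**2)                    # exact Landau-Zener result
```

For this H(t) the slope of the diabatic energy gap is 2, so the exact survival probability is exp(-π g²); the numerically integrated value should agree up to small finite-time oscillations.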
2008-09-30
problematic. However, such unfinished fuels are unlikely to be commercially available. Finishing those fuels by hydrotreating to remove excess...comparison, biodiesel does not meet several explicit properties because of higher acidity, viscosity, instability, pour point, and metals and ash content...would improve the remaining limiting properties of F-T and biodiesel fuels. Lubricity additives should remove the potential liability of the low
Exact solutions of unsteady Korteweg-de Vries and time regularized long wave equations.
Islam, S M Rayhanul; Khan, Kamruzzaman; Akbar, M Ali
2015-01-01
In this paper, we implement the exp(-Φ(ξ))-expansion method to construct exact traveling wave solutions for nonlinear evolution equations (NLEEs). Here we consider two model equations, namely the Korteweg-de Vries (KdV) equation and the time regularized long wave (TRLW) equation. These equations play a significant role in nonlinear science. We obtained four types of explicit function solutions, namely hyperbolic, trigonometric, exponential and rational function solutions of the variables in the considered equations. It is shown that the applied method is quite efficient and practically well suited for the aforementioned problems, as well as for other NLEEs that arise in mathematical physics and engineering. PACS numbers: 02.30.Jr, 02.70.Wz, 05.45.Yv, 94.05.Fq.
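The hyperbolic-function solutions mentioned include the classic sech²-type soliton of the KdV equation. A quick symbolic check (not the expansion method itself, just a verification that this solution family satisfies u_t + 6 u u_x + u_xxx = 0; the wave speed c below is an arbitrary positive choice):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.Rational(3, 2)  # wave speed (arbitrary positive choice)

# One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2

# Residual of the PDE, evaluated numerically at an arbitrary point
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
val = residual.subs({x: sp.Float('0.3'), t: sp.Float('0.2')}).evalf()
```

The residual evaluates to zero (up to floating-point roundoff), confirming the solution.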
Imtiaz, Maria; Hayat, Tasawar; Alsaedi, Ahmed
2016-01-01
This paper looks at the flow of Jeffrey fluid due to a curved stretching sheet. The effect of homogeneous-heterogeneous reactions is considered for an electrically conducting fluid in the presence of an applied magnetic field. Convective boundary conditions model the heat transfer analysis. A transformation method reduces the governing nonlinear partial differential equations to ordinary differential equations. Convergence of the obtained series solutions is explicitly discussed. The effects of the relevant parameters on the velocity, temperature and concentration profiles are analyzed by plotting graphs. Computations for pressure, skin friction coefficient and surface heat transfer rate are presented and examined. It is noted that fluid velocity and temperature are enhanced with the curvature parameter, and that increasing values of the Biot number correspond to enhancement in temperature and Nusselt number. PMID:27583457
Secondary dispersal driven by overland flow in drylands: Review and mechanistic model development.
Thompson, Sally E; Assouline, Shmuel; Chen, Li; Trahktenbrot, Ana; Svoray, Tal; Katul, Gabriel G
2014-01-01
Seed dispersal alters gene flow, reproduction, migration and ultimately spatial organization of dryland ecosystems. Because many seeds in drylands lack adaptations for long-distance dispersal, seed transport by secondary processes such as tumbling in the wind or mobilization in overland flow plays a dominant role in determining where seeds ultimately germinate. Here, recent developments in modeling runoff generation in spatially complex dryland ecosystems are reviewed with the aim of proposing improvements to mechanistic modeling of seed dispersal processes. The objective is to develop a physically-based yet operational framework for determining seed dispersal due to surface runoff, a process that has gained recent experimental attention. A Buoyant OBject Coupled Eulerian - Lagrangian Closure model (BOB-CELC) is proposed to represent seed movement in shallow surface flows. The BOB-CELC is then employed to investigate the sensitivity of seed transport to landscape and storm properties and to the spatial configuration of vegetation patches interspersed within bare earth. The potential to simplify seed transport outcomes by considering the limiting behavior of multiple runoff events is briefly considered, as is the potential for developing highly mechanistic, spatially explicit models that link seed transport, vegetation structure and water movement across multiple generations of dryland plants.
Isobel, Sophie; Edwards, Clair
2017-02-01
Without agreeing on an explicit approach to care, mental health nurses may resort to problem-focused, task-oriented practice. Defining a model of care is important, but there is also a need to consider the philosophical basis of any model. The use of Trauma Informed Care as a guiding philosophy provides a robust framework from which to review nursing practice. This paper describes a nursing workforce practice development process to implement Trauma Informed Care as an inpatient model of mental health nursing care. Trauma Informed Care is an evidence-based approach to care delivery that is applicable to mental health inpatient units. While there are differing strategies for implementation, there is scope for mental health nurses to take on Trauma Informed Care as a guiding philosophy, a model of care or a practice development project within all of their roles and settings, in order to ensure that its implementation is considered, relevant and meaningful. The principles of Trauma Informed Care may also offer guidance for managing workforce stress and distress associated with practice change.
NASA Astrophysics Data System (ADS)
Sarif; Kurauchi, Shinya; Yoshii, Toshio
2017-06-01
In conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that perceptions of attributes are characterized by benchmarks called “reference points” and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. The respondents also seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare they perceive when evaluating travel cost.
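The reference-point idea can be sketched by combining a prospect-theory-style value function with a standard logit choice rule. Everything below is a hypothetical illustration (the reference fare, the costs, and the loss-aversion coefficient of 2.25 are assumed values, not estimates from the study):

```python
import numpy as np

def value(x, ref, loss_aversion=2.25):
    """Piecewise-linear reference-dependent value: outcomes are judged
    relative to a reference point, with losses weighted more heavily
    than gains (loss_aversion=2.25 is an illustrative assumption)."""
    d = x - ref
    return np.where(d >= 0, d, loss_aversion * d)

def logit_prob(v):
    """Multinomial logit choice probabilities over option values v."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

# Hypothetical two-mode example: utilities are negatives of travel cost,
# judged against an internal reference fare of 300.
ref_fare = 300.0
v = np.array([value(-350.0, -ref_fare),   # costlier than the reference: a loss
              value(-280.0, -ref_fare)])  # cheaper than the reference: a gain
p = logit_prob(v)
```

Because the 350 fare registers as a loss and is amplified by loss aversion, the cheaper mode receives almost all of the choice probability, whereas an absolute-evaluation model would penalize it less sharply.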
NASA Astrophysics Data System (ADS)
San Juan, M.; de la Iglesia, J. M.; Martín, O.; Santos, F. J.
2009-11-01
Despite the important progress achieved in the understanding of cutting processes, the study of certain aspects (temperature gradients, friction, contact, etc.) has suffered from the limitations of experimental means. The development of numerical models is therefore a valid first approach to the study of those problems. In the present work, a model is developed under the Abaqus/Explicit code to represent the orthogonal cutting of AISI 4140 steel. A two-dimensional simulation under plane-strain conditions, considered adiabatic due to the high speed of the material flow, is chosen. Chip separation is defined by means of a fracture law that allows complex simulations of tool penetration into the workpiece. The strong influence of friction on cutting is demonstrated: even with a very good definition of the material behaviour laws, an erroneous value of the friction coefficient can notably reduce reliability. Considering the difficulty of validating the friction models used in the simulation from the tests usually carried out, the most effective way to characterize friction is to combine simulation models with cutting tests.
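A flow-stress law commonly paired with this kind of orthogonal-cutting FE model is Johnson-Cook, which couples strain hardening, rate sensitivity, and thermal softening. The sketch below shows the functional form only; the parameter values are illustrative placeholders for an AISI 4140-like steel, not calibrated material data:

```python
import math

def johnson_cook(strain, strain_rate, T, A=598.0, B=768.0, n=0.209,
                 C=0.0137, eps0=1.0, m=0.807, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress [MPa]: (A + B*eps^n) hardening term,
    (1 + C ln(rate)) rate term, (1 - T*^m) thermal softening term.
    Parameter values are illustrative placeholders, not calibrated data."""
    T_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
    return (A + B * strain**n) \
        * (1.0 + C * math.log(max(strain_rate / eps0, 1e-12))) \
        * (1.0 - T_star**m)

# Flow stress rises with strain (hardening) and falls with temperature (softening)
s1 = johnson_cook(0.1, 1000.0, 400.0)
s2 = johnson_cook(0.5, 1000.0, 400.0)   # more strain -> higher stress
s3 = johnson_cook(0.5, 1000.0, 900.0)   # hotter -> lower stress
```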
Carbon sequestration and water flow regulation services in mature Mediterranean Forest
NASA Astrophysics Data System (ADS)
Beguería, S.; Ovando, P.
2015-12-01
We develop a forestland use and management model that integrates spatially explicit biophysical and economic data to estimate the expected pattern of climate regulation services through carbon dioxide (CO2) sequestration in tree and shrub biomass, and water flow regulation. We apply this model to examine the potential trade-offs and synergies in the supply of CO2 sequestration and water flow services in mature Mediterranean forest, considering two alternative forest management settings: a forest restoration scenario through investments in facilitating forest regeneration, and a forestry abandonment scenario resulting from unprofitable forest regeneration investment. The analysis is performed for different discount rates and price settings for carbon and water. The model is applied at the farm level to a group of 567 private silvopastoral farms across Andalusia (Spain), considering the main forest species in this region: Quercus ilex, Q. suber, Pinus pinea, P. halepensis, P. pinaster and Eucalyptus sp., as well as treeless shrubland and pastures. The results of this research are provided by forest land unit, vegetation, farm, and for the group of municipalities where the farms are located. Our results draw attention to the spatial variability of CO2 and water flow regulation services, and point towards a trade-off between those services. The pattern of economic benefits associated with water and carbon services fluctuates according to the assumptions regarding price levels and discount rates, as well as in connection with the expected forest management and tree growth models and spatially explicit forest attributes such as existing tree and shrub inventories, the quality of the sites for growing different tree species, soil structure, and climatic characteristics. The assumptions made regarding inter-temporal preferences and relative prices have a large effect on the estimated economic value of carbon and water services.
These results highlight the uncertainties over the provision of forest ecosystem services under changing economic conditions and social preferences.
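The sensitivity to inter-temporal preferences comes down to discounting long service streams. A minimal sketch (the annual values are invented for illustration, not figures from the study):

```python
def npv(flows, rate):
    """Net present value of a stream of annual service values (years 1..n)."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(flows, start=1))

# Hypothetical annual carbon-sequestration value for one forest unit over 30 years
flows = [100.0] * 30
low, high = npv(flows, 0.02), npv(flows, 0.06)  # two candidate discount rates
```

Moving the discount rate from 2% to 6% cuts the estimated service value by roughly 40% in this toy stream, which is the mechanism behind the large effect of inter-temporal assumptions noted in the abstract.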
Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.
Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R
2004-10-01
Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward outgroups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism.
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto
2017-03-01
This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous studies, i.e. the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the vertical pore pressure distribution, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
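The surrogate idea behind gPCE can be sketched in a few lines: expand a scalar model output in orthogonal polynomials of a standard-normal input (probabilists' Hermite polynomials for Gaussian inputs) and fit the coefficients by least squares. The forward model below is a stand-in toy function, not the compaction model of the study:

```python
import numpy as np

def hermite_design(theta, degree=3):
    """Design matrix of probabilists' Hermite polynomials He_0..He_degree,
    built from the recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)."""
    He = [np.ones_like(theta), theta]          # He0 = 1, He1 = x
    for n in range(1, degree):
        He.append(theta * He[-1] - n * He[-2])
    return np.column_stack(He)

forward = lambda theta: np.exp(0.3 * theta)    # stand-in forward model

# Fit expansion coefficients by regression on training evaluations
theta_train = np.linspace(-2.5, 2.5, 40)
coeffs, *_ = np.linalg.lstsq(hermite_design(theta_train),
                             forward(theta_train), rcond=None)

# The cheap surrogate replaces the forward model in inversion/UQ loops
theta_test = np.linspace(-2.0, 2.0, 101)
surrogate = hermite_design(theta_test) @ coeffs
err = np.max(np.abs(surrogate - forward(theta_test)))
```

Once fitted, the polynomial surrogate is evaluated thousands of times per second inside the inverse-modeling and uncertainty-propagation loops, which is what makes the calibration efficient.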
ERIC Educational Resources Information Center
Glock, Sabine; Beverborg, Arnoud Oude Groote; Müller, Barbara C. N.
2016-01-01
Obese children experience disadvantages in school and discrimination from their teachers. Teachers' implicit and explicit attitudes have been identified as contributing to these disadvantages. Drawing on dual process models, we investigated the nature of pre-service teachers' implicit and explicit attitudes, their motivation to respond without…
Stull, Laura G; McConnell, Haley; McGrew, John; Salyers, Michelle P
2017-01-01
While explicit negative stereotypes of mental illness are well established as barriers to recovery, implicit attitudes also may negatively impact outcomes. The current study is unique in its focus on both explicit and implicit stigma as predictors of recovery attitudes of mental health practitioners. Assertive Community Treatment practitioners (n = 154) from 55 teams completed online measures of stigma, recovery attitudes, and an Implicit Association Test (IAT). Three of four explicit stigma variables (perceptions of blameworthiness, helplessness, and dangerousness) and all three implicit stigma variables were associated with lower recovery attitudes. In a multivariate, hierarchical model, however, implicit stigma did not explain additional variance in recovery attitudes. In the overall model, perceptions of dangerousness and implicitly associating mental illness with "bad" were significant individual predictors of lower recovery attitudes. The current study demonstrates a need for interventions to lower explicit stigma, particularly perceptions of dangerousness, to increase mental health providers' expectations for recovery. The extent to which implicit and explicit stigma differentially predict outcomes, including recovery attitudes, needs further research.
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.
2012-12-01
Metacommunity and individual-based theoretical models are studied in the context of the spreading of infections of water-borne diseases along the ecological corridors defined by river basins and networks of human mobility. The overarching claim is that mathematical models can indeed provide predictive insight into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and by anticipating the impact of alternative interventions. To support the claim, we examine the ex-post reliability of published predictions of the 2010-2011 Haiti cholera outbreak from four independent modeling studies that appeared almost simultaneously during the unfolding epidemic. For each modeled epidemic trajectory, it is assessed how well predictions reproduced the observed spatial and temporal features of the outbreak to date. The impact of different approaches is considered to the modeling of the spatial spread of V. cholera, the mechanics of cholera transmission and in accounting for the dynamics of susceptible and infected individuals within different local human communities. A generalized model for Haitian epidemic cholera and the related uncertainty is thus constructed and applied to the year-long dataset of reported cases now available. Specific emphasis will be dedicated to models of human mobility, a fundamental infection mechanism. Lessons learned and open issues are discussed and placed in perspective, supporting the conclusion that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control. 
Although explicit spatial modeling is made routinely possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is still lacking. Here, we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0 explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
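The statement that local R0 > 1 is neither necessary nor sufficient can be illustrated with a two-settlement toy reproduction matrix (the numbers are illustrative, not estimates for Haiti or KwaZulu-Natal): both local reproduction numbers sit below 1, yet the dominant eigenvalue Λ0 of the coupled matrix exceeds 1, so the networked system can still sustain an outbreak.

```python
import numpy as np

# Toy spatial reproduction matrix G0 for two settlements coupled by
# hydrological transport / human mobility (illustrative values).
# Diagonal entries = local reproduction numbers, both below 1.
G0 = np.array([[0.8, 0.5],
               [0.5, 0.8]])

eigvals, eigvecs = np.linalg.eig(G0)
lam0 = eigvals.real.max()   # generalized reproduction number Lambda_0 = 1.3 > 1
```

The dominant eigenvector (here proportional to [1, 1]) indicates how the early epidemic distributes across settlements, mirroring the role the abstract assigns to the eigenvectors of G0.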
NASA Astrophysics Data System (ADS)
Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan
2012-10-01
Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.
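The dynamic-parameter strategy borrows the machinery of dynamic linear models: a regression coefficient is allowed to follow a random walk and is tracked with a Kalman filter. The scalar sketch below is a stand-in with synthetic data (noise variances are assumed known); the real application embeds this idea in the nonlinear SPARROW structure with Bayesian estimation.

```python
import numpy as np

# Synthetic data: y_t = x_t * b_t + v_t with a slowly drifting coefficient
rng = np.random.default_rng(0)
n = 200
b_true = np.cumsum(rng.normal(0.0, 0.05, n)) + 1.0
x = rng.normal(1.0, 0.5, n)
y = x * b_true + rng.normal(0.0, 0.1, n)

q, r = 0.05**2, 0.1**2      # state and observation noise variances (known here)
b_hat, P = 1.0, 1.0         # initial estimate and variance
est = np.empty(n)
for t in range(n):
    P += q                                  # predict: random-walk state
    K = P * x[t] / (x[t]**2 * P + r)        # Kalman gain
    b_hat += K * (y[t] - x[t] * b_hat)      # update with the new observation
    P *= (1.0 - K * x[t])
    est[t] = b_hat

mae = np.mean(np.abs(est - b_true))         # tracking error of the filter
```

The filtered trajectory tracks the drifting coefficient, which is the sense in which "source/sink processes vary at annual timescales" can be recovered from data rather than fixed at calibration.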
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
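The central identity (softmax units with log-probability biases and weights compute exact Bayesian posteriors) can be checked in a few lines. The two-hypothesis numbers below are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical two-hypothesis example (e.g. two candidate letters)
prior = np.array([0.7, 0.3])        # P(h): encoded as unit biases, log(prior)
likelihood = np.array([0.2, 0.6])   # P(d|h): encoded as weighted evidence, log(lik)

# Net input to each softmax unit = bias + weighted evidence
net_input = np.log(prior) + np.log(likelihood)
softmax = np.exp(net_input) / np.exp(net_input).sum()

# Bayes' rule computed directly
posterior = prior * likelihood / (prior * likelihood).sum()
```

Because softmax exponentiates and renormalizes, adding log-priors and log-likelihoods in the net input reproduces multiplication and normalization in probability space, so the two vectors coincide exactly.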
Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.
2012-01-01
The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data were collected from a mechanically back-pressured constant area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a 6 duct height variation in shock train length depending on the turbulence model employed.
Generally speaking, the turbulence models that did not include an explicit stress limiter more closely matched the measured surface pressures. This observation is somewhat surprising, given that stress-limiting models have generally been developed to better predict shock-separated flows. All of the models considered also failed to properly predict the shape and extent of the separated flow region caused by the shock boundary layer interactions. However, the best performing models were able to predict the isolator shock train length (an important metric for isolator operability margin) to within 1 isolator duct height.
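The grid convergence index used above has a standard closed form due to Roache; a minimal sketch with generic values (not the paper's data), where the safety factor and order of accuracy are the usual defaults:

```python
def gci(f_coarse, f_fine, r, p, fs=1.25):
    """Grid convergence index (Roache) for two grid levels.

    f_coarse, f_fine : solution values on the coarse and fine grids
    r  : grid refinement ratio (> 1)
    p  : observed or formal order of accuracy
    fs : safety factor (commonly 1.25 for three-grid studies, 3.0 for two)
    """
    eps = abs((f_coarse - f_fine) / f_fine)  # relative difference between grids
    return fs * eps / (r**p - 1.0)

# Hypothetical example: fine-grid value 1.02, coarse-grid value 1.10,
# refinement ratio 2, second-order scheme
print(gci(1.10, 1.02, r=2.0, p=2.0))
```

The result bounds the relative discretization error on the fine grid, which is how it serves as the numerical-error estimate in a validation budget.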
Effective field model of roughness in magnetic nano-structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepadatu, Serban, E-mail: SLepadatu@uclan.ac.uk
2015-12-28
An effective field model is introduced here within the micromagnetics formulation, to study roughness in magnetic structures, by considering sub-exchange length roughness levels as a perturbation on a smooth structure. This allows the roughness contribution to be separated, which is found to give rise to an effective configurational anisotropy for both edge and surface roughness, and accurately model its effects with fine control over the roughness depth without the explicit need to refine the computational cell size to accommodate the roughness profile. The model is validated by comparisons with directly roughened structures for a series of magnetization switching and domain wall velocity simulations and found to be in excellent agreement for roughness levels up to the exchange length. The model is further applied to vortex domain wall velocity simulations with surface roughness, which is shown to significantly modify domain wall movement and result in dynamic pinning and stochastic creep effects.
Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng
2015-03-01
This paper investigates the off-line synthesis approach of model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can be readily applied to a time-varying control law, is proposed to describe the NCS, where bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach of model predictive control is addressed using the stability results of the system, which explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
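The standard way to build such an augmented model for a known input delay is to stack the delayed inputs into the state as a shift register; a generic sketch (not the paper's exact augmentation, which also handles time-varying delays):

```python
import numpy as np

def augment_delay(A, B, d):
    """Augmented model for x_{k+1} = A x_k + B u_{k-d} with fixed delay d.

    The augmented state is z_k = [x_k, u_{k-1}, ..., u_{k-d}];
    returns (Az, Bz) such that z_{k+1} = Az z_k + Bz u_k.
    """
    n, m = B.shape
    nz = n + d * m
    Az = np.zeros((nz, nz))
    Bz = np.zeros((nz, m))
    Az[:n, :n] = A
    Az[:n, n + (d - 1) * m : n + d * m] = B      # oldest stored input drives the plant
    for i in range(1, d):                         # shift register for stored inputs
        Az[n + i * m : n + (i + 1) * m, n + (i - 1) * m : n + i * m] = np.eye(m)
    Bz[n : n + m, :] = np.eye(m)                  # new input enters the register
    return Az, Bz
```

With the delay absorbed into the state, standard Lyapunov or MPC machinery applies to the delay-free pair (Az, Bz).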
Communication: Role of explicit water models in the helix folding/unfolding processes
NASA Astrophysics Data System (ADS)
Palazzesi, Ferruccio; Salvalaglio, Matteo; Barducci, Alessandro; Parrinello, Michele
2016-09-01
In recent years, it has become evident that computer simulations can assume a relevant role in modelling protein dynamical motions, given their ability to provide a full atomistic image of the processes under investigation. The ability of current protein force-fields to reproduce the correct thermodynamic and kinetic behaviour of systems is thus an essential ingredient to improve our understanding of many relevant biological functionalities. In this work, employing the latest developments of the metadynamics framework, we compare the ability of state-of-the-art all-atom empirical functions and water models to consistently reproduce the folding and unfolding of a helix turn motif in a model peptide. This theoretical study shows that the choice of the water model can influence the thermodynamics and the kinetics of the system under investigation, and for this reason cannot be considered trivial.
Li, Zhengkai; Spaulding, Malcolm; French McCay, Deborah; Crowley, Deborah; Payne, James R
2017-01-15
An oil droplet size model was developed for a variety of turbulent conditions based on non-dimensional analysis of disruptive and restorative forces, which is applicable to oil droplet formation under both surface breaking-wave and subsurface-blowout conditions, with or without dispersant application. This new model was calibrated and successfully validated with droplet size data obtained from controlled laboratory studies of dispersant-treated and non-treated oil in subsea dispersant tank tests and field surveys, including the Deep Spill experimental release and the Deepwater Horizon blowout oil spill. This model is an advancement over prior models, as it explicitly addresses the effects of the dispersed phase viscosity, resulting from dispersant application and constrains the maximum stable droplet size based on Rayleigh-Taylor instability that is invoked for a release from a large aperture. Copyright © 2016 Elsevier Ltd. All rights reserved.
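For orientation, the classical baseline such models extend is the Hinze (1955) maximum stable droplet size set by the balance of turbulent disruption against interfacial tension; a sketch of that standard scaling only (the paper's model adds viscosity and Rayleigh-Taylor effects not shown here):

```python
def hinze_dmax(sigma, rho, eps, c=0.725):
    """Classical Hinze maximum stable droplet diameter [m] in homogeneous
    turbulence: d_max = c * (sigma / rho)**(3/5) * eps**(-2/5).

    sigma : oil-water interfacial tension [N/m]
    rho   : continuous-phase (water) density [kg/m^3]
    eps   : turbulent kinetic energy dissipation rate [W/kg]
    c     : empirical constant (~0.725)
    """
    return c * (sigma / rho) ** 0.6 * eps ** -0.4

# Illustrative values only: sigma ~ 0.02 N/m, seawater ~ 1025 kg/m^3, eps ~ 1 W/kg
print(hinze_dmax(0.02, 1025.0, 1.0))  # droplet scale on the order of a millimetre
```

Dispersants lower sigma, and the 3/5 power makes d_max shrink strongly with it, which is the qualitative effect the calibrated model quantifies.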
Regulatory T cell effects in antitumor laser immunotherapy: a mathematical model and analysis
NASA Astrophysics Data System (ADS)
Dawkins, Bryan A.; Laverty, Sean M.
2016-03-01
Regulatory T cells (Tregs) have tremendous influence on treatment outcomes in patients receiving immunotherapy for cancerous tumors. We present a mathematical model incorporating the primary cellular and molecular components of antitumor laser immunotherapy. We explicitly model developmental classes of dendritic cells (DCs), cytotoxic T cells (CTLs), primary and metastatic tumor cells, and tumor antigen. Regulatory T cells have been shown to kill antigen presenting cells, to influence dendritic cell maturation and migration, to kill activated killer CTLs in the tumor microenvironment, and to influence CTL proliferation. Since Tregs affect explicitly modeled cells, but we do not explicitly model dynamics of Tregs themselves, we use model parameters to analyze effects of Treg immunosuppressive activity. We outline a systematic method for assigning clinical outcomes to model simulations and use this method to associate simulated patient treatment outcome with Treg activity.
Fluctuation theorems for discrete kinetic models of molecular motors
NASA Astrophysics Data System (ADS)
Faggionato, Alessandra; Silvestri, Vittoria
2017-04-01
Motivated by discrete kinetic models for non-cooperative molecular motors on periodic tracks, we consider random walks (not necessarily Markovian) on quasi one dimensional (1d) lattices, obtained by gluing several copies of a fundamental graph in a linear fashion. We show that, for a suitable class of quasi-1d lattices, the large deviation rate function associated to the position of the walker satisfies a Gallavotti-Cohen symmetry for any choice of the dynamical parameters defining the stochastic walk. This class includes the linear model considered in Lacoste et al (2008 Phys. Rev. E 78 011915). We also derive fluctuation theorems for the time-integrated cycle currents and discuss how the matrix approach of Lacoste et al (2008 Phys. Rev. E 78 011915) can be extended to derive the above Gallavotti-Cohen symmetry for any Markov random walk on {Z} with periodic jump rates. Finally, we review in the present context some large deviation results of Faggionato and Silvestri (2017 Ann. Inst. Henri Poincaré 53 46-78) and give some specific examples with explicit computations.
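The Gallavotti-Cohen symmetry can be checked explicitly in the simplest case: a biased continuous-time walk on Z with right rate p and left rate q, whose scaled cumulant generating function obeys scgf(k) = scgf(-k - c) with affinity c = ln(p/q). A minimal numerical verification (this toy walk stands in for the quasi-1d lattices of the paper):

```python
import math

def scgf(k, p, q):
    """Scaled cumulant generating function of the position of a biased
    continuous-time random walk on Z: jump rate p to the right, q to the left."""
    return p * (math.exp(k) - 1.0) + q * (math.exp(-k) - 1.0)

p, q = 2.0, 0.5
c = math.log(p / q)  # affinity (entropy production per net step)

# Gallavotti-Cohen symmetry: scgf(k) == scgf(-k - c) for all k
for k in (-1.0, 0.3, 2.0):
    assert abs(scgf(k, p, q) - scgf(-k - c, p, q)) < 1e-12
print("Gallavotti-Cohen symmetry verified")
```

By Legendre duality the same symmetry for the rate function reads I(v) - I(-v) = -c v, the fluctuation-theorem form quoted in the abstract.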
Gothe, Emma; Sandin, Leonard; Allen, Craig R.; Angeler, David G.
2014-01-01
The distribution of functional traits within and across spatiotemporal scales has been used to quantify and infer the relative resilience across ecosystems. We use explicit spatial modeling to evaluate within- and cross-scale redundancy in headwater streams, an ecosystem type with a hierarchical and dendritic network structure. We assessed the cross-scale distribution of functional feeding groups of benthic invertebrates in Swedish headwater streams during two seasons. We evaluated functional metrics, i.e., Shannon diversity, richness, and evenness, and the degree of redundancy within and across modeled spatial scales for individual feeding groups. We also estimated the correlates of environmental versus spatial factors of both functional composition and the taxonomic composition of functional groups for each spatial scale identified. Measures of functional diversity and within-scale redundancy of functions were similar during both seasons, but both within- and cross-scale redundancy were low. This apparent low redundancy was partly attributable to a few dominant taxa explaining the spatial models. However, rare taxa with stochastic spatial distributions might provide additional information and should therefore be considered explicitly for complementing future resilience assessments. Otherwise, resilience may be underestimated. Finally, both environmental and spatial factors correlated with the scale-specific functional and taxonomic composition. This finding suggests that resilience in stream networks emerges as a function of not only local conditions but also regional factors such as habitat connectivity and invertebrate dispersal.
Superfluid transition in the attractive Hofstadter-Hubbard model
NASA Astrophysics Data System (ADS)
Umucalılar, R. O.; Iskin, M.
2016-08-01
We consider a Fermi gas that is loaded onto a square optical lattice and subjected to a perpendicular artificial magnetic field, and determine its superfluid transition boundary by adopting a BCS-like mean-field approach in momentum space. The multiband structure of the single-particle Hofstadter spectrum is taken explicitly into account while deriving a generalized pairing equation. We present the numerical solutions as functions of the artificial magnetic flux, interaction strength, Zeeman field, chemical potential, and temperature, with a special emphasis on the roles played by the density of single-particle states and center-of-mass momentum of Cooper pairs.
Axisymmetric buckling of the circular graphene sheets with the nonlocal continuum plate model
NASA Astrophysics Data System (ADS)
Farajpour, A.; Mohammadi, M.; Shahidi, A. R.; Mahzoon, M.
2011-08-01
In this article, the buckling behavior of nanoscale circular plates under uniform radial compression is studied. Small-scale effect is taken into consideration. Using nonlocal elasticity theory the governing equations are derived for the circular single-layered graphene sheets (SLGS). Explicit expressions for the buckling loads are obtained for clamped and simply supported boundary conditions. It is shown that nonlocal effects play an important role in the buckling of circular nanoplates. The effects of the small scale on the buckling loads considering various parameters such as the radius of the plate and mode numbers are investigated.
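A common closed form in nonlocal plate studies scales the classical buckling load by a factor involving the Eringen parameter mu = (e0 a)^2; a hedged sketch under that assumed form (the paper's exact expressions may differ), using the classical clamped-plate result N_cr = (alpha/R)^2 D with alpha = 3.8317, the first zero of J_1:

```python
def buckling_load_clamped(D, R, mu):
    """Sketch: axisymmetric buckling load of a clamped circular plate.

    D  : flexural rigidity, R : plate radius, mu : nonlocal parameter (e0*a)^2.
    Classical load N = (alpha/R)^2 * D; a typical nonlocal correction divides
    by (1 + mu*(alpha/R)^2), reducing the load as small-scale effects grow.
    Assumed generic form, not the paper's derived expression.
    """
    alpha = 3.8317  # first zero of the Bessel function J_1
    n_local = (alpha / R) ** 2 * D
    return n_local / (1.0 + mu * (alpha / R) ** 2)

# mu = 0 recovers the classical result; increasing mu lowers the buckling load
print(buckling_load_clamped(1.0, 1.0, 0.0), buckling_load_clamped(1.0, 1.0, 0.05))
```

The monotone decrease with mu reproduces the qualitative conclusion above: nonlocal effects soften the plate and lower the critical compression.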
Statistical turbulence theory and turbulence phenomenology
NASA Technical Reports Server (NTRS)
Herring, J. R.
1973-01-01
The application of deductive turbulence theory for validity determination of turbulence phenomenology at the level of second-order, single-point moments is considered. Particular emphasis is placed on the phenomenological formula relating the dissipation to the turbulence energy and the Rotta-type formula for the return to isotropy. Methods which deal directly with most or all the scales of motion explicitly are reviewed briefly. The statistical theory of turbulence is presented as an expansion about randomness. Two concepts are involved: (1) a modeling of the turbulence as nearly multipoint Gaussian, and (2) a simultaneous introduction of a generalized eddy viscosity operator.
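The Rotta return-to-isotropy formula mentioned above has the standard linear relaxation form db_ij/dt = -C1 (epsilon/k) b_ij for the anisotropy tensor b; a minimal sketch with generic coefficients (illustrative of the phenomenology, not tied to the paper's analysis):

```python
import numpy as np

def rotta_step(b, k, eps, c1, dt):
    """One explicit Euler step of the Rotta return-to-isotropy model:
    db_ij/dt = -C1 * (eps / k) * b_ij, relaxing the anisotropy b toward zero.

    b : traceless Reynolds-stress anisotropy tensor, k : turbulence kinetic
    energy, eps : dissipation rate, c1 : Rotta constant (~1.8 is typical)."""
    return b * (1.0 - c1 * (eps / k) * dt)

b = np.diag([0.2, -0.1, -0.1])  # traceless initial anisotropy
for _ in range(100):
    b = rotta_step(b, k=1.0, eps=0.5, c1=1.8, dt=0.01)
print(np.abs(b).max())  # decays toward isotropy
```

The trace stays zero under the relaxation, so the model returns the stress tensor to isotropy without changing the turbulence energy itself, which is handled by the separate dissipation relation.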
Conformal and projective symmetries in Newtonian cosmology
NASA Astrophysics Data System (ADS)
Duval, C.; Gibbons, G. W.; Horváthy, P. A.
2017-02-01
Definitions of non-relativistic conformal transformations are considered both in the Newton-Cartan and in the Kaluza-Klein-type Eisenhart/Bargmann geometrical frameworks. The symmetry groups that come into play are exemplified by the cosmological, and also the Newton-Hooke solutions of Newton's gravitational field equations. It is shown, in particular, that the maximal symmetry group of the standard cosmological model is isomorphic to the 13-dimensional conformal-Newton-Cartan group whose conformal-Bargmann extension is explicitly worked out. Attention is drawn to the appearance of independent space and time dilations, in contrast with the Schrödinger group or the Conformal Galilei Algebra.
Mean field treatment of heterogeneous steady state kinetics
NASA Astrophysics Data System (ADS)
Geva, Nadav; Vaissier, Valerie; Shepherd, James; Van Voorhis, Troy
2017-10-01
We propose a method to quickly compute steady state populations of species undergoing a set of chemical reactions whose rate constants are heterogeneous. Using an average environment in place of an explicit nearest neighbor configuration, we obtain a set of equations describing a single fluctuating active site in the presence of an averaged bath. We apply this Mean Field Steady State (MFSS) method to a model of H2 production on a disordered surface for which the activation energy for the reaction varies from site to site. The MFSS populations quantitatively reproduce the kinetic Monte Carlo (KMC) results across the range of rate parameters considered.
Noncommutative quantum mechanics
NASA Astrophysics Data System (ADS)
Gamboa, J.; Loewe, M.; Rojas, J. C.
2001-09-01
A general noncommutative quantum mechanical system in a central potential V=V(r) in two dimensions is considered. The spectrum is bounded from below and, for large values of the noncommutativity parameter θ, we find an explicit expression for the eigenvalues. In fact, any quantum mechanical system with these characteristics is equivalent to a commutative one in such a way that the interaction V(r) is replaced by V=V(H_HO, L_z), where H_HO is the Hamiltonian of the two-dimensional harmonic oscillator and L_z is the z component of the angular momentum. For other finite values of θ the model can be solved by using perturbation theory.
Quantum measurement incompatibility does not imply Bell nonlocality
NASA Astrophysics Data System (ADS)
Hirsch, Flavien; Quintino, Marco Túlio; Brunner, Nicolas
2018-01-01
We discuss the connection between the incompatibility of quantum measurements, as captured by the notion of joint measurability, and the violation of Bell inequalities. Specifically, we explicitly present a given set of non-jointly-measurable positive-operator-valued measures (POVMs) MA with the following property. Considering a bipartite Bell test where Alice uses MA, then for any possible shared entangled state ρ and any set of (possibly infinitely many) POVMs NB performed by Bob, the resulting statistics admits a local model and can thus never violate any Bell inequality. This shows that quantum measurement incompatibility does not imply Bell nonlocality in general.
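As background on what "non-jointly-measurable" means here: for the special case of noisy qubit X and Z measurements with sharpness parameters eta_x and eta_z, a known criterion (Busch) says they are jointly measurable iff eta_x^2 + eta_z^2 <= 1. A sketch of that standard special case, not the particular set MA constructed in the paper:

```python
import math

def jointly_measurable(eta_x, eta_z):
    """Known criterion for two unsharp qubit observables along orthogonal
    axes (noisy X and Z with sharpness eta_x, eta_z): they admit a joint
    POVM iff eta_x**2 + eta_z**2 <= 1."""
    return eta_x**2 + eta_z**2 <= 1.0

assert jointly_measurable(0.7, 0.7)        # 0.98 <= 1: compatible pair
assert not jointly_measurable(0.8, 0.8)    # 1.28 > 1: incompatible pair
print(1.0 / math.sqrt(2))                  # equal-sharpness threshold, ~0.7071
```

The paper's point is that sitting just above such an incompatibility threshold is not enough to produce Bell-inequality violation.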
NASA Astrophysics Data System (ADS)
Khan, M. Ijaz; Zia, Q. M. Zaigham; Alsaedi, A.; Hayat, T.
2018-03-01
This attempt explores stagnation point flow of second grade material towards an impermeable stretched cylinder. Non-Fourier heat flux and thermal stratification are considered. Thermal conductivity depends upon temperature. The governing non-linear differential system is solved using a homotopic procedure. The interval of convergence for the obtained series solutions is explicitly determined. Physical quantities of interest have been examined for the influential variables entering into the problem. It is found that the curvature parameter leads to an enhancement in velocity and temperature. Further, the temperature for the non-Fourier heat flux model is less than that for Fourier's heat conduction law.
A Unified Framework for Monetary Theory and Policy Analysis.
ERIC Educational Resources Information Center
Lagos, Ricardo; Wright, Randall
2005-01-01
Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…
A Naturalistic Inquiry into Praxis When Education Instructors Use Explicit Metacognitive Modeling
ERIC Educational Resources Information Center
Shannon, Nancy Gayle
2014-01-01
This naturalistic inquiry brought together six education instructors in one small teacher preparation program to explore what happens to educational instructors' praxis when the education instructors use explicit metacognitive modeling to reveal their thinking behind their pedagogical decision-making. The participants, while teaching an…
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
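A proper conditional autoregressive prior of the kind described is usually encoded through its precision matrix Q = tau * (D - rho * W) over the spatial adjacency graph; a generic sketch (illustrative parameterization, not the BBS model's exact specification):

```python
import numpy as np

def car_precision(W, tau=1.0, rho=0.99):
    """Precision matrix of a proper conditional autoregressive (CAR) prior:
    Q = tau * (D - rho * W), where W is a symmetric 0/1 spatial adjacency
    matrix and D = diag(row sums of W). |rho| < 1 keeps Q positive definite."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Four strata on a line: 1-2-3-4 (toy neighborhood structure)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W)
print(np.linalg.eigvalsh(Q).min())  # smallest eigenvalue, positive for rho < 1
```

Under this prior each stratum's trend is shrunk toward the average of its neighbors, which is what lets the spatial model borrow strength at poorly sampled range edges.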
Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?
NASA Astrophysics Data System (ADS)
Zhou, Ruhong; Berne, Bruce J.
2002-10-01
The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
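The replica-exchange sampling used here rests on a standard Metropolis swap rule between neighboring temperatures; a minimal sketch of that acceptance test (generic, not the study's simulation code):

```python
import math
import random

def swap_accept(beta_i, beta_j, E_i, E_j, rng=random.random):
    """Metropolis acceptance for exchanging configurations between two
    replicas at inverse temperatures beta_i, beta_j with current potential
    energies E_i, E_j. Accept with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j))), which preserves the joint
    Boltzmann distribution of the replica ensemble."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng() < math.exp(delta)

# A colder replica (larger beta) holding the higher energy is always swapped:
print(swap_accept(1.0, 0.5, -1.0, -2.0))
```

Swaps let low-temperature replicas escape local minima via excursions to high temperature, which is what makes exhaustive sampling of the hairpin landscape feasible.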
Estimating European soil organic carbon mitigation potential in a global integrated land use model
NASA Astrophysics Data System (ADS)
Frank, Stefan; Böttcher, Hannes; Schneider, Uwe; Schmid, Erwin; Havlík, Petr
2013-04-01
Several studies have shown the dynamic interaction between soil organic carbon (SOC) sequestration rates, soil management decisions and SOC levels. Management practices such as reduced and no-tillage, improved residue management and crop rotations as well as the conversion of marginal cropland to native vegetation or conversion of cultivated land to permanent grassland offer the potential to increase SOC content. Even though dynamic interactions are widely acknowledged in the literature, they have not been implemented in most existing land use decision models. A major obstacle is the high data and computing requirements for an explicit representation of alternative land use sequences, since a model has to be able to track all different management decision paths. To our knowledge no study has so far accounted for SOC dynamics explicitly in a global integrated land use model. To overcome the conceptual difficulties described above we apply an approach capable of accounting for SOC dynamics in GLOBIOM (Global Biosphere Management Model), a global recursive dynamic partial equilibrium bottom-up model integrating the agricultural, bioenergy and forestry sectors. GLOBIOM represents all major land based sectors and therefore is able to account for direct and indirect effects of land use change as well as leakage effects (e.g. through trade) implicitly. Together with the detailed representation of technologies (e.g. tillage and fertilizer management systems), these characteristics make the model a highly valuable tool for assessing European SOC emissions and mitigation potential. Demand and international trade are represented in this version of the model at the level of 27 EU member states and 23 aggregated world regions outside Europe. Changes in demand on the one side, and profitability of the different land based activities on the other side, are the major determinants of land use change in GLOBIOM.
In this paper we estimate SOC emissions from cropland for the EU until 2050, explicitly considering SOC dynamics due to land use and land management in a global integrated land use model. Moreover, we calculate the EU SOC mitigation potential taking into account leakage effects outside Europe as well as related feedbacks from other sectors. In a sensitivity analysis, we disaggregate the SOC mitigation potential, i.e., we quantify the impact of different management systems and crop rotations to identify the most promising mitigation strategies.
There is increasing awareness that improved environmental management can be achieved by considering more explicitly the benefits that humans receive from ecosystems. In a broad sense, the contributions of ecological systems to the health and well being of people can be considered...
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.
NASA Astrophysics Data System (ADS)
Devynck, Fabien; Iannuzzi, Marcella; Krack, Matthias
2012-05-01
The oxygen and uranium Frenkel pair (FP) recombination mechanisms are studied in UO2 using an empirical interatomic potential accounting for the polarizability of the ions, namely a dynamical core-shell model. The results are compared to a more conventional rigid-ion model. Both model types have been implemented into the cp2k program package and thoroughly validated. The overall picture indicates that the FP recombination mechanism is a complex process involving several phenomena. The FP recombination can happen instantaneously when the distance between the interstitial and the vacancy is small or can be thermally activated at larger separation distances. However, other criteria can prevail over the interstitial-vacancy distance. The surrounding environment of the FP defect, the mechanical stiffness of the matrix, and the orientation of the migration path are shown to be major factors acting on the FP lifetime. The core-shell and rigid-ion models provide a similar qualitative description of the FP recombination mechanism. However, the FP stabilities determined by both models significantly differ in the lower temperature range considered. Indeed, the recombination time of the oxygen and uranium FPs can be up to an order of magnitude lower in the core-shell model at T=600 K and T=1800 K, respectively. These differences highlight the importance of the explicit description of polarizability on some crucial properties such as the resistance to amorphization. This refined description of the interatomic interactions would certainly affect the description of the recrystallization process following a displacement cascade. In turn, the self-healing phase would be better accounted for in the core-shell model and the misestimate inherent to the lack of polarizability in the rigid-ion model corrected.
NASA Astrophysics Data System (ADS)
Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long
2016-01-01
The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, which manifest themselves in the publications on this subject over the last years. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen for capturing enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and piecewise linear approximation for component nonlinearities. Then, the on-off statuses of solenoid valves and the piecewise approximation process are described by propositional logic, and the hybrid system is transformed into the set of linear mixed-integer equalities and inequalities, denoted as the MLD model, automatically by HYSDEL. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are computed to control the vehicle height adjustment system in real time using an offline multi-parametric programming technology (MPT), thus converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.
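An explicit piecewise affine controller of the kind described reduces the online computation to finding which polyhedral region contains the current state and applying that region's affine gain; a toy sketch with hypothetical regions and gains (not the paper's computed partition):

```python
import numpy as np

def eval_pwa(x, regions):
    """Evaluate an explicit (piecewise affine) MPC law.

    regions : list of (H, h, F, g); the controller returns u = F x + g for
    the first region whose polyhedron {x : H x <= h} contains x."""
    for H, h, F, g in regions:
        if np.all(H @ x <= h + 1e-9):
            return F @ x + g
    raise ValueError("state outside the explored state-space partition")

# Hypothetical 1-D law: u = -0.5 x for x >= 0, u = -2 x for x < 0
regions = [
    (np.array([[-1.0]]), np.array([0.0]), np.array([[-0.5]]), np.array([0.0])),
    (np.array([[ 1.0]]), np.array([0.0]), np.array([[-2.0]]), np.array([0.0])),
]
print(eval_pwa(np.array([2.0]), regions))  # -> [-1.]
```

This lookup is what makes the controller implementable in real time: the expensive MIQP is solved offline, and the online step is only region membership tests and one matrix-vector product.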
ERIC Educational Resources Information Center
Schneider, Darryl W.; Logan, Gordon D.
2005-01-01
Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…
DYNA3D: A computer code for crashworthiness engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J.O.; Benson, D.J.
1986-09-01
A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for the efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of various parameters.
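For orientation, the stationary distribution of the plain M/M/c queue without vacations, which the model above extends, has a standard closed form; the rates below are illustrative values, not taken from the paper.

```python
from math import factorial

# Stationary distribution of a plain M/M/c queue (no vacations) as a baseline
# sketch; lambda, mu, and c are illustrative.
lam, mu, c = 2.0, 1.0, 3            # arrival rate, per-server service rate, servers
rho = lam / (c * mu)
assert rho < 1, "stability requires lambda < c*mu"

a = lam / mu
p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
            + a**c / (factorial(c) * (1 - rho)))

def p(n):
    """P(N = n customers in system) for the M/M/c queue."""
    if n < c:
        return p0 * a**n / factorial(n)
    return p0 * a**c * rho**(n - c) / factorial(c)

total = sum(p(n) for n in range(200))   # geometric tail, so this is ~1
print(round(total, 6))
```

A vacation policy modifies the boundary behavior at the empty state, which is exactly where the transient analysis in the abstract becomes nontrivial.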
AdS Black Disk Model for Small-x Deep Inelastic Scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornalba, Lorenzo; Costa, Miguel S.; Penedones, Joao
2010-08-13
Using the approximate conformal invariance of QCD at high energies, we consider a simple anti-de Sitter black disk model to describe saturation in deep inelastic scattering. Deep inside saturation the structure functions have the same power-law scaling, F_T ≈ F_L ≈ x^(-ω), where ω is related to the expansion rate of the black disk with energy. Furthermore, the ratio F_L/F_T is given by the universal value (1 + ω)/(3 + ω), independently of the target. For γ*-γ* scattering at high energies we obtain explicit expressions and ratios for the total cross sections of transverse and longitudinal photons in terms of the single parameter ω.
Optimal solutions for the evolution of a social obesity epidemic model
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef
2017-06-01
In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in the adult population, and the prediction of its behavior in the coming years, is analyzed using the modified algorithm. The proposed method increases the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way of choosing an optimal value of the auxiliary parameter via minimizing the total residual error is considered. The graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.
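The residual-minimization step for choosing an auxiliary parameter can be illustrated on a toy problem. The trial solution and grid search below are assumptions for illustration only; they sketch the idea of picking the parameter that minimizes the total residual error, not the paper's modified HPM or the obesity model itself.

```python
# Toy problem: y' + y = 0, y(0) = 1 (exact solution e^{-t}).
# One-parameter trial solution y_c(t) = 1 + c t + (c t)^2 / 2; the auxiliary
# parameter c is chosen by minimizing the summed squared residual on [0, 1].

def residual_sq(c, ts):
    total = 0.0
    for t in ts:
        # residual R = y_c' + y_c with y_c' = c + c^2 t
        r = (c + c * c * t) + (1 + c * t + (c * t) ** 2 / 2)
        total += r * r
    return total

ts = [i / 100 for i in range(101)]                 # grid on [0, 1]
c_opt = min((k / 1000 for k in range(-2000, 1)),   # scan c in [-2, 0]
            key=lambda c: residual_sq(c, ts))
print(round(c_opt, 3))                             # lands near -0.9; Taylor value c = -1 is nearby
```

The same principle (scan or optimize the auxiliary parameter against the total residual) is what gives the modified scheme its improved convergence in the abstract.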
NASA Astrophysics Data System (ADS)
Li, Can; Deng, Wei-Hua
2014-07-01
Following the fractional cable equation established in the letter [B.I. Henry, T.A.M. Langlands, and S.L. Wearne, Phys. Rev. Lett. 100 (2008) 128103], we present the time-space fractional cable equation, which describes the anomalous transport of electrodiffusion in nerve cells. The derivation is based on the generalized fractional Ohm's law, and temporal memory effects and spatial nonlocality are involved in the time-space fractional model. With the help of the integral transform method we derive the analytical solutions expressed by the Green's function; the corresponding fractional moments are calculated, and their asymptotic behaviors are discussed. In addition, the explicit solutions of the considered model with two different external current injections are also presented.
Epidemic spreading on random surfer networks with optimal interaction radius
NASA Astrophysics Data System (ADS)
Feng, Yun; Ding, Li; Hu, Ping
2018-03-01
In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem, and its explicit form is presented. Numerical simulations are conducted to verify the correctness of the theoretical results. It is shown that the optimal control strategy is effective in minimizing the density of infected individuals and the cost associated with the adjustment of interaction radii.
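The effect of the interaction radius can be sketched with a mean-field SIS model in which the infection rate grows with the interaction area. The quadratic scaling beta ~ r^2 and all parameter values below are illustrative assumptions, not the paper's network model.

```python
# Mean-field SIS sketch: shrinking the interaction radius r lowers the
# contact rate beta and can push R0 = beta/gamma below 1.
def simulate(r, gamma=0.2, days=400, dt=0.1, i0=0.01):
    beta = 0.5 * r ** 2            # assumed: contact rate scales with interaction area
    i = i0                          # infected fraction
    for _ in range(int(days / dt)):
        i += dt * (beta * i * (1 - i) - gamma * i)   # forward-Euler SIS dynamics
    return i

print(round(simulate(r=1.0), 3))    # endemic: settles at 1 - gamma/beta = 0.6
print(round(simulate(r=0.5), 3))    # R0 < 1: infection dies out
```

This is the intuition behind radius-adjustment control: the cost of reducing radii trades off against the endemic infection level, which is what the optimal control formulation in the abstract balances.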
The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation
Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric
2016-01-01
Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
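The demand curve indices referenced in this abstract can be computed directly from purchase task data. The price-consumption record below is invented for illustration; the index definitions (intensity, Omax, Pmax, breakpoint) follow common behavioral-economic usage, and in factor-analytic work intensity and Omax typically load on an Amplitude factor while Pmax and breakpoint load on a Persistence factor.

```python
# Observed demand indices from a hypothetical Alcohol Purchase Task record:
# prices in dollars, reported drinks purchased at each price.
prices = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
drinks = [10,  9,   8,   6,   4,   1,   0]

intensity = drinks[0]                              # consumption when drinks are free
expend = [p * q for p, q in zip(prices, drinks)]   # expenditure at each price
omax = max(expend)                                 # peak expenditure
pmax = prices[expend.index(omax)]                  # price at peak expenditure
breakpoint_ = next(p for p, q in zip(prices, drinks) if q == 0)  # first price with zero demand

print(intensity, omax, pmax, breakpoint_)          # 10 16.0 4.0 16.0
```

Elevated intensity with expenditure that keeps rising at high prices is the pattern the abstract describes as elevated demand.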
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudryashov, Nikolay A.; Shilnikov, Kirill E.
Numerical computation of the three-dimensional problem of freezing-interface propagation during cryosurgery, coupled with multi-objective optimization methods, is used to improve the efficiency and safety of cryosurgery operations. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during thermal exposure to low temperature is described by the Pennes bioheat model and is coupled with an enthalpy method for blurred phase change computations. The finite volume method combined with the control volume approximation of the heat fluxes is applied for the numerical modeling of cryosurgery on tumor tissue of fairly arbitrary shape. The flux relaxation approach is used to improve the stability of the explicit finite difference schemes. The mounting of additional heating elements is studied as an approach to control the propagation of the cellular necrosis front. Whereas the volumes of undestructed tumor tissue and destructed healthy tissue are considered as objective functions, the locations of additional heating elements in cutaneous cryosurgery and of cryotips in prostate cancer cryotreatment are considered as objective variables in the multi-objective problem. A quasi-gradient method is proposed for searching for segments of the Pareto front as solutions of the multi-objective optimization problem.
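A minimal one-dimensional sketch of the explicit scheme for the Pennes bioheat equation is shown below, with a cryoprobe boundary held at -40 C. The material constants are rounded textbook values for soft tissue, the enthalpy (phase-change) step and flux relaxation are omitted, and the geometry is a stand-in for the paper's 3D finite-volume model.

```python
# 1D explicit finite-difference Pennes bioheat equation:
#   rho*c dT/dt = k d2T/dx2 + wb*cb*(Ta - T)
k, rho, c = 0.5, 1050.0, 3600.0        # W/m/K, kg/m^3, J/kg/K (rounded tissue values)
wb, cb = 0.5, 3600.0                    # perfusion (kg/m^3/s), blood heat capacity
Ta = 37.0                               # arterial temperature, C
nx, dx, dt = 50, 1e-3, 0.5              # 50 mm domain, 1 mm grid, 0.5 s step
alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5         # explicit-scheme stability bound

T = [37.0] * nx
for _ in range(2400):                   # 20 minutes of freezing
    T[0] = -40.0                        # cryoprobe boundary
    Tn = T[:]
    for i in range(1, nx - 1):
        diff = alpha * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx**2
        perf = wb * cb * (Ta - T[i]) / (rho * c)     # perfusion warming term
        Tn[i] = T[i] + dt * (diff + perf)
    T = Tn

front = next(i for i, t in enumerate(T) if t > -0.5)  # approximate 0 C isotherm index
print(front, "mm")
```

Perfusion acts as a distributed heat source that arrests the freezing front, which is why controlling necrosis-front propagation (here via probe placement, in the paper via heating elements and cryotips) becomes an optimization problem.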
Lorenz, Marco; Fürst, Christine; Thiel, Enrico
2013-09-01
Regarding increasing pressures from global societal and climate change, the assessment of the impact of land use and land management practices on land degradation, and on the related decrease in the sustainable provision of ecosystem services, is gaining interest. Existing approaches to assessing agricultural practices focus on single crops or statistical data, because spatially explicit information on the crop rotations applied in practice is mostly not available. This provokes considerable uncertainties in crop production models, as regional specifics have to be neglected or cannot be considered in an appropriate way. In a case study in Saxony, we developed an approach to (i) derive representative regional crop rotations by combining different data sources and expert knowledge. This includes the integration of innovative crop sequences related to bio-energy production or organic farming and different soil tillage, soil management and soil protection techniques. Furthermore, (ii) we developed a regionalization approach for transferring crop rotations and related soil management strategies on the basis of statistical data and spatially explicit data taken from so-called field blocks. These field blocks are the smallest spatial entity for which agricultural practices must be reported when applying for agricultural funding within the frame of the European Agricultural Fund for Rural Development (EAFRD) program. The information was finally integrated into the spatial decision support tool GISCAME to assess and visualize, in a spatially explicit manner, the impact of alternative agricultural land use strategies on soil erosion risk and the provision of ecosystem services. The objective of this paper is to present the approach for creating spatially explicit information on agricultural management practices for a study area around Dresden, the capital of the German Federal State of Saxony.
Dutta, Priyanka; Botlani, Mohsen; Varma, Sameer
2014-12-26
The dynamical properties of water at protein-water interfaces are unlike those in the bulk. Here we utilize molecular dynamics simulations to study water dynamics in interstitial regions between two proteins. We consider two natural protein-protein complexes, one in which the Nipah virus G protein binds to cellular ephrin B2 and the other in which the same G protein binds to ephrin B3. While the two complexes are structurally similar, the two ephrins share only a modest sequence identity of ∼50%. X-ray crystallography also suggests that these interfaces are fairly extensive and contain exceptionally large amounts of water. We find that while the interstitial waters tend to occupy crystallographic sites, almost all waters exhibit residence times of less than a hundred picoseconds in the interstitial region. We also find that while the differences in the sequences of the two ephrins result in quantitative differences in the dynamics of interstitial waters, the trends in the shifts with respect to bulk values are similar. Despite the high wetness of the protein-protein interfaces, the dynamics of interstitial waters are considerably slower than in the bulk: the interstitial waters diffuse an order of magnitude more slowly and have 2-3-fold longer hydrogen bond lifetimes and 2-1000-fold slower dipole relaxation rates. To understand the role of interstitial waters, we examine how implicit solvent models compare against explicit solvent models in producing ephrin-induced shifts in the G conformational density. Ephrin-induced shifts in the G conformational density are critical to the allosteric activation of another viral protein that mediates fusion. We find that in comparison with the explicit solvent model, the implicit solvent model predicts a more compact G-B2 interface, presumably because of the absence of discrete waters at the G-B2 interface.
Simultaneously, we find that the two models yield strikingly different induced changes in the G conformational density, even for those residues whose conformational densities in the apo state are unaffected by the treatment of the bulk solvent. Together, these results show that the explicit treatment of interstitial water molecules is necessary for a proper description of allosteric transitions.
A Three-Stage Model of Housing Search,
1980-05-01
Hanushek and Quigley, 1978) that recognize housing search as a transaction cost but rarely examine search behavior; and descriptive studies of search…explicit mobility models that have recently appeared in the literature (Speare et al., 1975; Hanushek and Quigley, 1978; Brummell, 1979). Although…1978; Hanushek and Quigley, 1978; Cronin, 1978). By explicitly assigning dollar values, the economic models attempt to obtain an objective measure of
DoD Product Line Practice Workshop Report
1998-05-01
capability. The essential enterprise management practices include ensuring sound business goals, providing an appropriate funding model, performing…business. This way requires vision and explicit support at the organizational level. There must be an explicit funding model to support the development…the same group seems to work best in smaller organizations. A funding model for core asset development also needs to be developed because the core
Xiao, Li; Luo, Ray
2017-12-07
We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm so as to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass this challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed a relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium, with detailed surface features resembling those found on the solvent-excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries sampled in the explicit water simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieder, William R.; Allison, Steven D.; Davidson, Eric A.
Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.
Self-Love or Other-Love? Explicit Other-Preference but Implicit Self-Preference
Gebauer, Jochen E.; Göritz, Anja S.; Hofmann, Wilhelm; Sedikides, Constantine
2012-01-01
Do humans prefer the self even over their favorite other person? This question has pervaded philosophy and the social-behavioral sciences. Psychology's distinction between explicit and implicit preferences calls for a two-tiered solution. Our evolutionarily based Dissociative Self-Preference Model offers two hypotheses. Other-preferences prevail at an explicit level, because they convey caring for others, which strengthens interpersonal bonds–a major evolutionary advantage. Self-preferences, however, prevail at an implicit level, because they facilitate self-serving automatic behavior, which favors the self in life-or-death situations–also a major evolutionary advantage. We examined the data of 1,519 participants, who completed an explicit measure and one of five implicit measures of preferences for self versus favorite other. The results were consistent with the Dissociative Self-Preference Model. Explicitly, participants preferred their favorite other over the self. Implicitly, however, they preferred the self over their favorite other (be it their child, romantic partner, or best friend). Results are discussed in relation to evolutionary theorizing on self-deception. PMID:22848605
ERIC Educational Resources Information Center
Petty, Richard E.; Brinol, Pablo
2006-01-01
Comments on the article by B. Gawronski and G. V. Bodenhausen (see record 2006-10465-003). A metacognitive model (MCM) is presented to describe how automatic (implicit) and deliberative (explicit) measures of attitudes respond to change attempts. The model assumes that contemporary implicit measures tap quick evaluative associations, whereas…
We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quant...
We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ...
Marissen, Marlies A E; Brouwer, Marlies E; Hiemstra, Annemarie M F; Deen, Mathijs L; Franken, Ingmar H A
2016-08-30
The mask model of narcissism states that the narcissistic traits of patients with narcissistic personality disorder (NPD) are the result of a compensatory reaction to underlying ego fragility. This model assumes that high explicit self-esteem masks low implicit self-esteem. However, research on narcissism has predominantly focused on non-clinical participants, and data derived from patients diagnosed with NPD remain scarce. Therefore, the goal of the present study was to test the mask model hypothesis of narcissism among patients with NPD. Male patients with NPD were compared to patients with other personality disorders and to healthy participants on implicit and explicit self-esteem. NPD patients did not differ in levels of explicit and implicit self-esteem compared to both the psychiatric and the healthy control group. Overall, the current study found no evidence in support of the mask model of narcissism in a clinical group. This implies that it might not be relevant for clinicians to focus treatment of NPD on an underlying negative self-esteem.
Bocedi, Greta; Reid, Jane M.
2017-01-01
Abstract Ongoing ambitions are to understand the evolution of costly polyandry and its consequences for species ecology and evolution. Emerging patterns could stem from feed‐back dynamics between the evolving mating system and its genetic environment, defined by interactions among kin including inbreeding. However, such feed‐backs are rarely considered in nonselfing systems. We use a genetically explicit model to demonstrate a mechanism by which inbreeding depression can select for polyandry to mitigate the negative consequences of mating with inbred males, rather than to avoid inbreeding, and to elucidate underlying feed‐backs. Specifically, given inbreeding depression in sperm traits, costly polyandry evolved to ensure female fertility, without requiring explicit inbreeding avoidance. Resulting sperm competition caused evolution of sperm traits and further mitigated the negative effect of inbreeding depression on female fertility. The evolving mating system fed back to decrease population‐wide homozygosity, and hence inbreeding. However, the net overall decrease was small due to compound effects on the variances in sex‐specific reproductive success and paternity skew. Purging of deleterious mutations did not eliminate inbreeding depression in sperm traits or hence selection for polyandry. Overall, our model illustrates that polyandry evolution, both directly and through sperm competition, might facilitate evolutionary rescue for populations experiencing sudden increases in inbreeding. PMID:28895138
On push-forward representations in the standard gyrokinetic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyato, N., E-mail: miyato.naoaki@jaea.go.jp; Yagi, M.; Scott, B. D.
2015-01-15
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation, and therefore of electrostatic potential fluctuations, which is described by the Poisson brackets between the distribution function and the scalar functions generating the gyro-center transformation. Usually, only the lowest-order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of the particle flux, including the polarization drift flux, from the conventional representation. On the other hand, the lowest-order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
Kessner, Darren; Novembre, John
2015-01-01
Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
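The kind of individual-based simulation described in this abstract can be sketched in a few lines. The sketch below is a minimal stand-in with invented parameters (10 additive biallelic QTL, free recombination, top-20% truncation selection); it is not the authors' simulation framework.

```python
import random

# Minimal evolve-and-resequence sketch: a quantitative trait controlled by
# L additive QTL; each generation the top SEL fraction of N individuals are
# selected as parents. All parameters are illustrative.
random.seed(1)
L, N, GEN, SEL = 10, 500, 20, 0.2

# Each individual is two haplotypes of L biallelic loci (True = "+" allele).
pop = [[[random.random() < 0.5 for _ in range(L)] for _ in range(2)]
       for _ in range(N)]

def trait(ind):
    return sum(ind[0]) + sum(ind[1])        # purely additive, effect size 1

def freq(pop, locus):
    return sum(h[locus] for ind in pop for h in ind) / (2 * len(pop))

f0 = freq(pop, 0)
for _ in range(GEN):
    parents = sorted(pop, key=trait, reverse=True)[:int(SEL * N)]
    def gamete(ind):
        # free recombination: each locus drawn from a random parental haplotype
        return [ind[random.random() < 0.5][l] for l in range(L)]
    pop = [[gamete(random.choice(parents)), gamete(random.choice(parents))]
           for _ in range(N)]

print(f0, freq(pop, 0))   # the "+" allele rises from ~0.5 toward fixation
```

Replacing free recombination with linked loci is what produces the interference and lengthened fixation times highlighted in the abstract.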
Three-Dimensional Modeling of Flow and Thermochemical Behavior in a Blast Furnace
NASA Astrophysics Data System (ADS)
Shen, Yansong; Guo, Baoyu; Chew, Sheng; Austin, Peter; Yu, Aibing
2015-02-01
An ironmaking blast furnace (BF) is a complex high-temperature moving bed reactor involving counter-, co- and cross-current flows of gas, liquid and solid, coupled with heat and mass exchange and chemical reactions. Two-dimensional (2D) models were widely used for understanding its internal state in the past. In this paper, a three-dimensional (3D) CFX-based mathematical model is developed for describing the internal state of a BF in terms of multiphase flow and the related thermochemical behavior, as well as process indicators. This model considers the intense interactions between gas, solid and liquid phases, and also their competition for the space. The model is applied to a BF covering from the burden surface at the top to the liquid surface in the hearth, where the raceway cavity is considered explicitly. The results show that the key in-furnace phenomena such as flow/temperature patterns and component distributions of solid, gas and liquid phases can be described and characterized in different regions inside the BF, including the gas and liquids flow circumferentially over the 3D raceway surface. The in-furnace distributions of key performance indicators such as reduction degree and gas utilization can also be predicted. This model offers a cost-effective tool to understand and control the complex BF flow and performance.
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
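The Minkowski-sum composition at the heart of this model can be illustrated for convex shapes. The sketch below computes the sum of two convex polygons by taking all pairwise vertex sums and a convex hull; the square and triangle are illustrative inputs, not the paper's sub-object extensions.

```python
# Minkowski sum of two convex polygons via pairwise vertex sums plus a
# monotone-chain convex hull (simple O(nm log nm) sketch).
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    pts = sorted(set(pts))
    lo, up = [], []
    for p in pts:                       # lower hull
        while len(lo) >= 2 and cross(lo[-2], lo[-1], p) <= 0:
            lo.pop()
        lo.append(p)
    for p in reversed(pts):             # upper hull
        while len(up) >= 2 and cross(up[-2], up[-1], p) <= 0:
            up.pop()
        up.append(p)
    return lo[:-1] + up[:-1]            # counter-clockwise, no duplicates

def minkowski(P, Q):
    return hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (2, 0), (1, 1)]
print(minkowski(square, triangle))      # hexagon combining both shapes' edges
```

The sum polygon inherits the edge directions of both operands, which is why composing simple sub-object extensions this way can represent a complex overall extent, as the abstract describes.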
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Haldar, Subhasis; Gupta, Mridula; Gupta, R. S.
2016-10-01
The threshold voltage degradation due to hot-carrier-induced localized charges (LC) is a major reliability concern for nanoscale Schottky barrier (SB) cylindrical gate-all-around (GAA) metal-oxide-semiconductor field-effect transistors (MOSFETs). The degradation physics of gate material engineered (GME) SB-GAA MOSFETs due to LC is still unexplored. An explicit threshold voltage degradation model for GME-SB-GAA MOSFETs incorporating localized charges (N_it) is developed. To accurately model the threshold voltage, the minimum channel carrier density has been taken into account. The model shows how positive and negative LC affect the device's subthreshold performance. One-dimensional (1D) Poisson and 2D Laplace equations have been solved for two different regions (fresh and damaged) with two different gate metal work-functions. LCs are considered at the drain side with a low gate metal work-function, as N_it is more prevalent towards the drain. To reduce carrier mobility degradation, a lightly doped channel has been considered. The proposed model also includes the effect of barrier-height lowering at the metal-semiconductor interface. The developed model results have been verified against numerical simulation data obtained with the ATLAS-3D device simulator, and excellent agreement is observed between analytical and simulation results.
Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.
Burek, Patryk; Loebe, Frank; Herre, Heinrich
2017-10-04
Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.
High resolution modeling of reservoir storage and extent dynamics at the continental scale
NASA Astrophysics Data System (ADS)
Shin, S.; Pokhrel, Y. N.
2017-12-01
Over the past decade, significant progress has been made in developing reservoir schemes in large-scale hydrological models to better simulate hydrological fluxes and storages in highly managed river basins. These schemes have been successfully used to study the impact of reservoir operation on global river basins. However, improvements to the existing schemes are needed, especially at the fine spatial resolutions targeted by hyper-resolution hydrological modeling. In this study, we developed a reservoir routing scheme with explicit representation of reservoir storage and extent at a grid scale of 5 km or less. Instead of setting the reservoir area to a fixed value or diagnosing it from an area-storage equation, which is the commonly used approach in existing reservoir schemes, we explicitly simulate the inundated storage and area for all grid cells that are within the reservoir extent. This approach enables a better simulation of river-floodplain-reservoir storage by considering both natural flooding and man-made reservoir storage. Simulated seasonal dynamics of reservoir storage, river discharge downstream of dams, and reservoir inundation extent are evaluated against various datasets from ground observations and satellite measurements. The new model captures the dynamics of these variables with good accuracy for most of the large reservoirs in the western United States. It is expected that incorporating the newly developed reservoir scheme in large-scale land surface models (LSMs) will lead to improved simulation of river flow and terrestrial water storage in highly managed river basins.
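The explicit storage-extent idea can be sketched on a toy elevation grid: given a storage volume, solve for the water level and the set of inundated cells, rather than reading area from a fixed area-storage curve (the function, grid, and numbers below are assumptions for illustration, not the authors' scheme):

```python
import numpy as np

# Toy illustration of explicit storage-extent coupling on an elevation grid.
def inundate(elev, storage, cell_area=1.0):
    """Return the water surface level and the boolean inundation mask."""
    z = np.sort(elev.ravel())
    n = np.arange(1, z.size + 1)
    # volume stored when the surface reaches each successive cell elevation
    vol = np.concatenate(([0.0], np.cumsum(np.diff(z) * n[:-1]) * cell_area))
    k = np.searchsorted(vol, storage, side="right") - 1
    level = z[k] + (storage - vol[k]) / (n[k] * cell_area)
    return level, elev < level

elev = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
level, mask = inundate(elev, storage=1.5)   # level 2.25, two cells wet
```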
Intercomparison of microphysical datasets collected from CAIPEEX observations and WRF simulation
NASA Astrophysics Data System (ADS)
Pattnaik, S.; Goswami, B.; Kulkarni, J.
2009-12-01
In the first phase of the ongoing Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) program of the Indian Institute of Tropical Meteorology (IITM), intensive cloud microphysical datasets were collected over India from May through September 2009. This study is designed to evaluate the forecast skill of existing cloud microphysical parameterization schemes (i.e., single-moment and double-moment) within the WRF-ARW model (Version 3.1.1) during different intensive observation periods (IOPs) over targeted regions spread across India. Basic meteorological and cloud microphysical parameters obtained from the model simulations are validated against the observed dataset collected during the CAIPEEX program. For this study, we have considered three IOP phases (May 23-27, June 11-15, and July 3-7) carried out over northern, central, and western India, respectively. This study emphasizes understanding the mechanisms of evolution, intensification, and distribution of the simulated precipitation forecast up to day four (i.e., the 96-hour forecast). Efforts have also been made to carry out a few important microphysics sensitivity experiments within the explicit schemes to investigate their respective impacts on the formation and distribution of vital cloud parameters (e.g., cloud liquid water, frozen hydrometeors) and the model rainfall forecast over the IOP regions. The characteristic features of liquid and frozen hydrometeors in the pre-monsoon and monsoon regimes are examined from the model forecasts as well as from the CAIPEEX observation dataset for different IOPs. The model is integrated in a triply nested fashion with the innermost nest explicitly resolved at a horizontal resolution of 4 km. In this presentation, preliminary results from the aforementioned research initiatives will be introduced.
Dosch, Alessandra; Belayachi, Sanaâ; Van der Linden, Martial
2016-01-01
This article examines individual variability in sexual desire and sexual satisfaction by exploring the relation between these sexual aspects and sexual attitudes (implicit and explicit) and by taking gender into account, as this has been shown to be an influential factor. A total of 28 men and 33 women living in heterosexual relationships completed questionnaires assessing sexual desire (dyadic, solitary), sexual satisfaction, and explicit sexual attitudes. An adapted version of the Affect Misattribution Procedure was used to assess implicit sexual attitudes. Results showed higher levels of dyadic and solitary sexual desire in men than in women. No gender differences were found regarding sexual satisfaction or sexual attitudes. High dyadic sexual desire was associated with positive implicit and explicit sexual attitudes, regardless of gender. However, solitary sexual desire was significantly higher in men than women and was associated, in women only, with positive implicit sexual attitudes, suggesting that solitary sexual desire may fulfill different functions in men and women. Finally, sexual satisfaction depended on the combination of explicit and implicit sexual attitudes in both men and women. This study highlights the importance of considering both implicit and explicit sexual attitudes to better understand the mechanisms underlying individual variability in sexual desire and satisfaction.
Teshager, Awoke Dagnew; Gassman, Philip W; Secchi, Silvia; Schoof, Justin T; Misgna, Girmaye
2016-04-01
Applications of the Soil and Water Assessment Tool (SWAT) model typically involve delineation of a watershed into subwatersheds/subbasins that are then further subdivided into hydrologic response units (HRUs), which are homogeneous areas of aggregated soil, landuse, and slope and are the smallest modeling units used within the model. In a standard SWAT application, multiple potential HRUs (farm fields) in a subbasin are usually aggregated into a single HRU feature. In other words, the standard version of the model combines multiple potential HRUs (farm fields) with the same landuse/landcover, soil, and slope, but located at different places in a subbasin (spatially non-unique), and considers them as one HRU. In this study, ArcGIS pre-processing procedures were developed to spatially define a one-to-one match between farm fields and HRUs (spatially unique HRUs) within a subbasin prior to SWAT simulations, to facilitate input processing, input/output mapping, and further analysis at the individual farm field level. Model input data such as landuse/landcover (LULC), soil, crop rotation, and other management data were processed through these HRUs. The SWAT model was then calibrated/validated for the Raccoon River watershed in Iowa for 2002-2010 and the Big Creek River watershed in Illinois for 2000-2003. SWAT was able to replicate annual, monthly, and daily streamflow, as well as sediment, nitrate, and mineral phosphorus, within recommended accuracy in most cases. The one-to-one match between farm fields and HRUs created and used in this study is a first step toward performing LULC change, climate change impact, and other analyses in a more spatially explicit manner.
NASA Astrophysics Data System (ADS)
Bisdom, Kevin; Bertotti, Giovanni; Nick, Hamidreza M.
2016-10-01
Aperture has a controlling impact on porosity and permeability and is a source of uncertainty in modeling of naturally fractured reservoirs. This uncertainty results from difficulties in accurately quantifying aperture in the subsurface and from a limited fundamental understanding of the mechanical and diagenetic processes that control aperture. In the absence of cement bridges and high pore pressure, fractures in the subsurface are generally considered to be closed. However, experimental work, outcrop analyses and subsurface data show that some fractures remain open, and that aperture varies even along a single fracture. However, most fracture flow models consider constant apertures for fractures. We create a stress-dependent heterogeneous aperture by combining Finite Element modeling of discrete fracture networks with an empirical aperture model. Using a modeling approach that considers fractures explicitly, we quantify equivalent permeability, i.e. combined matrix and stress-dependent fracture flow. Fracture networks extracted from a large outcropping pavement form the basis of these models. The results show that the angle between fracture strike and σ1 has a controlling impact on aperture and permeability, where hydraulic opening is maximum for an angle of 15°. At this angle, the fracture experiences a minor amount of shear displacement that allows the fracture to remain open even when fluid pressure is lower than the local normal stress. Averaging the heterogeneous aperture to scale up permeability probably results in an underestimation of flow, indicating the need to incorporate full aperture distributions rather than simplified aperture models in reservoir-scale flow models.
Demeter, persephone, and the search for emergence in agent-based models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Howe, T. R.; Collier, N. T.
2006-01-01
In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.
Extremum Seeking Control of Smart Inverters for VAR Compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma
2015-09-04
Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and to reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt-VAR Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand information confirm the ability of the control algorithm to inject VARs to minimize feeder-head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing reactive power supplied by the distribution substation.
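A toy version of extremum seeking on a scalar quadratic cost shows the dither-demodulate-integrate loop at the heart of such controllers (the cost function, gains, and dither below are illustrative assumptions, not the paper's controller; practical designs also add high-pass/low-pass filtering):

```python
import math

# Toy extremum-seeking loop minimizing J(q) = (q - 2)^2 by dithering the
# input and demodulating the measured cost (all parameters illustrative).
def extremum_seek(J, q0, a=0.2, w=1.0, k=0.1, dt=0.05, steps=8000):
    q = q0
    for step in range(steps):
        t = step * dt
        dither = a * math.sin(w * t)
        grad_est = J(q + dither) * math.sin(w * t)   # demodulated gradient estimate
        q -= k * grad_est * dt                       # integrate toward the optimum
    return q

q = extremum_seek(lambda v: (v - 2.0) ** 2, q0=0.0)
```

On average the demodulated term is proportional to the local gradient, so q drifts to the minimizer at 2 with a small residual ripple.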
Inequity aversion and the evolution of cooperation
NASA Astrophysics Data System (ADS)
Ahmed, Asrar; Karlapalem, Kamalakar
2014-02-01
Evolution of cooperation is a widely studied problem in biology, social science, economics, and artificial intelligence. Most of the existing approaches that explain cooperation rely on some notion of direct or indirect reciprocity. These reciprocity based models assume agents recognize their partner and know their previous interactions, which requires advanced cognitive abilities. In this paper we are interested in developing a model that produces cooperation without requiring any explicit memory of previous game plays. Our model is based on the notion of inequity aversion, a concept introduced within behavioral economics, whereby individuals care about payoff equality in outcomes. Here we explore the effect of using income inequality to guide partner selection and interaction. We study our model by considering both the well-mixed and the spatially structured population and present the conditions under which cooperation becomes dominant. Our results support the hypothesis that inequity aversion promotes cooperative relationship among nonkin.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
Magnetic and superconducting competition within the Hubbard dimer. Exact solution
NASA Astrophysics Data System (ADS)
Matlak, M.; Somska, T.; Grabiec, B.
2005-02-01
We express the Hubbard dimer Hamiltonian in second quantization with the use of the Hubbard and spin operators. We consider the cases of positive and negative U. We decompose the resulting Hamiltonian into several parts, collecting all the terms belonging to the same energy level. Such a decomposition makes explicit all the intrinsic competing interactions deeply hidden in the original form of the dimer Hamiltonian. Among them are competing ferromagnetic and antiferromagnetic interactions. There are also hopping terms present which describe Cooper pairs hopping between sites 1 and 2 with positive and negative coupling constants (similar to the Kulik-Pedan and Penson-Kolb models). We show that the competition between intrinsic interactions strongly depends on the model parameters and the averaged electron occupation number n ∈ [0, 4], resulting in different regimes of the model (e.g., a t-J model regime).
Twisting perturbed parafermions
NASA Astrophysics Data System (ADS)
Belitsky, A. V.
2017-07-01
The near-collinear expansion of scattering amplitudes in maximally supersymmetric Yang-Mills theory at strong coupling is governed by the dynamics of strings propagating on the five-sphere. The pentagon transitions in the operator product expansion which systematize the series get reformulated in terms of matrix elements of branch-point twist operators in the two-dimensional O(6) nonlinear sigma model. The facts that the latter is an asymptotically free field theory and that there exists no local realization of twist fields prevent one from explicitly calculating their scaling dimensions and operator product expansion coefficients. This complication is bypassed by making use of the equivalence of the sigma model to the infinite-level limit of WZNW models perturbed by current-current interactions, so that one can use conformal symmetry and conformal perturbation theory for systematic calculations. Presently, to set up the formalism, we consider the O(3) sigma model, which is reformulated as perturbed parafermions.
Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction
NASA Astrophysics Data System (ADS)
Mach, J. C.; Budrow, C. J.; Pagan, D. C.; Ruff, J. P. C.; Park, J.-S.; Okasinski, J.; Beaudoin, A. J.; Miller, M. P.
2017-05-01
Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present work, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. The experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.
Cellular automaton model for molecular traffic jams
NASA Astrophysics Data System (ADS)
Belitsky, V.; Schütz, G. M.
2011-07-01
We consider the time evolution of an exactly solvable cellular automaton with random initial conditions both in the large-scale hydrodynamic limit and on the microscopic level. This model is a version of the totally asymmetric simple exclusion process with sublattice parallel update and thus may serve as a model for studying traffic jams in systems of self-driven particles. We study the emergence of shocks from the microscopic dynamics of the model. In particular, we introduce shock measures whose time evolution we can compute explicitly, both in the thermodynamic limit and for open boundaries where a boundary-induced phase transition driven by the motion of a shock occurs. The motion of the shock, which results from the collective dynamics of the exclusion particles, is a random walk with an internal degree of freedom that determines the jump direction. This type of hopping dynamics is reminiscent of some transport phenomena in biological systems.
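The sublattice-parallel update can be sketched directly: in the deterministic (probability-one hopping) version of this exactly solvable automaton, even-indexed bonds fire in one half-step and odd-indexed bonds in the next (a minimal sketch, not the authors' code):

```python
# Deterministic totally asymmetric exclusion process on a ring with
# sublattice-parallel update: bonds (0,1), (2,3), ... fire in the first
# half-step, bonds (1,2), (3,4), ... in the second.
def sublattice_step(config):
    """One full update; a particle hops right whenever the target site is empty."""
    n = len(config)
    for offset in (0, 1):                  # the two half-steps
        for i in range(offset, n, 2):      # bonds in a half-step are disjoint
            j = (i + 1) % n
            if config[i] == 1 and config[j] == 0:
                config[i], config[j] = 0, 1
    return config

config = [1, 0, 1, 0, 0, 0]
for _ in range(3):
    sublattice_step(config)
```

Particle number is conserved, and on a small ring the deterministic dynamics is periodic.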
Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.
Griffith, Daniel A; Peres-Neto, Pedro R
2006-10-01
Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider spatial predictors explicitly. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
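A minimal sketch of the topology-based (CB) variant, assuming the standard construction of eigenvectors from a doubly centered connectivity matrix (the four-site example and names are illustrative):

```python
import numpy as np

# Topology-based spatial filtering sketch: eigenvectors of the doubly
# centered connectivity matrix serve as spatial predictors in a regression.
def moran_eigenvectors(W):
    """Eigenpairs of (I - 11'/n) W (I - 11'/n), sorted by decreasing eigenvalue."""
    n = W.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n        # centering projector
    vals, vecs = np.linalg.eigh(C @ W @ C)
    order = np.argsort(vals)[::-1]             # large eigenvalue ~ positive autocorrelation
    return vals[order], vecs[:, order]

# four sites on a line, binary rook-style connectivity
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
vals, E = moran_eigenvectors(W)
# columns of E can now be entered as predictors in a (generalized) linear model
```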
Some new results on the central overlap problem in astrometry
NASA Astrophysics Data System (ADS)
Rapaport, M.
1998-07-01
The central overlap problem in astrometry was revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result confirms a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations. We explain this property by writing the central overlap problem in a new set of variables.
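For reference, a generic Gauss-Seidel sweep looks as follows (a toy 2x2 diagonally dominant system, not the astrometric normal equations of the paper):

```python
import numpy as np

# Generic Gauss-Seidel iteration: sweep through the components of x,
# always using the freshest available values.
def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)
```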
ERIC Educational Resources Information Center
Stoel, Gerhard L.; van Drie, Jannet P.; van Boxtel, Carla A. M.
2017-01-01
This article reports an experimental study on the effects of explicit teaching on 11th grade students' ability to reason causally in history. Underpinned by the model of domain learning, explicit teaching is conceptualized as multidimensional, focusing on strategies and second-order concepts to generate and verbalize causal explanations and…
Keatley, David; Clarke, David D; Hagger, Martin S
2012-01-01
The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical properties. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different coffee powder particle sizes on NIR calibration model performance. A total of 220 coffee powder samples with two types of coffee (civet and non-civet) and two particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted, and the influence of particle size on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that, using the explicit method, the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R2) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good, predicting the type of coffee at both particle sizes with relatively high predictive R2 values. The prediction also resulted in low bias and RMSEP values.
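The "explicit method" of stacking particle size into the Y block can be mimicked with ordinary least squares on synthetic data (the data-generating model below is an assumption purely for illustration; the paper applies PLS2 to real NIR spectra):

```python
import numpy as np

# Synthetic illustration: the Y block carries both the class label and the
# particle size, so the fitted model accounts for size variation explicitly.
rng = np.random.default_rng(0)
n, p = 40, 6
size = rng.choice([212.0, 500.0], n)                 # particle size in micrometers
label = rng.choice([0.0, 1.0], n)                    # 0 = non-civet, 1 = civet
X = (np.outer(label, rng.normal(size=p))             # class-related signal
     + np.outer(size / 500.0, rng.normal(size=p))    # particle-size signal
     + 0.01 * rng.normal(size=(n, p)))               # measurement noise
Y = np.column_stack([label, size])                   # two predicted variables
D = np.column_stack([np.ones(n), X])                 # design matrix with intercept
B, *_ = np.linalg.lstsq(D, Y, rcond=None)
Y_hat = D @ B
```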
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism-invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at the quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, XF; Zhao, X; Huang, K
A high-fidelity two-dimensional axisymmetric multi-physics model is described in this paper as an effort to simulate the cycle performance of a recently discovered solid oxide metal-air redox battery (SOMARB). The model collectively considers mass transport, charge transfer, and chemical redox cycle kinetics occurring across the components of the battery, and is validated by experimental data obtained from independent research. In particular, the redox kinetics at the energy storage unit is well represented by Johnson-Mehl-Avrami-Kolmogorov (JMAK) and shrinking-core models. The results explicitly show that the reduction of Fe3O4 during the charging cycle limits the overall performance. Distributions of electrode potential, overpotential, Nernst potential, and H2/H2O concentration across various components of the battery are also systematically investigated. (C) 2015 Elsevier B.V. All rights reserved.
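The JMAK kinetics referred to above follows the standard Avrami transformed-fraction law; a sketch with illustrative rate constant and exponent (conventions differ on whether k sits inside or outside the power):

```python
import math

# Standard Avrami (JMAK) law for the fraction transformed at time t,
# with illustrative parameters k and n.
def jmak_fraction(t, k, n):
    """X(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - math.exp(-((k * t) ** n))

X = [jmak_fraction(t, k=0.5, n=2.0) for t in (0.0, 1.0, 4.0)]
```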
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid acceleration is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (the flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
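A low-storage multistage scheme of this family, applied to a scalar decay equation (the four stage coefficients below are the classical textbook choice, stated as an assumption rather than the FLOMG values):

```python
# Low-storage multistage (Runge-Kutta-type) time stepping for du/dt = f(u);
# each stage restarts from u_0 and reuses only the latest stage value.
def multistage_step(u, dt, f, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """Stages: u_k = u_0 + alpha_k * dt * f(u_{k-1})."""
    u0, uk = u, u
    for a in alphas:
        uk = u0 + a * dt * f(uk)
    return uk

# decay equation du/dt = -u; exact solution exp(-t)
u, dt = 1.0, 0.1
for _ in range(10):
    u = multistage_step(u, dt, lambda v: -v)
```

For this linear problem the scheme reproduces the Taylor series of exp(-dt) through fourth order, so u lands very close to exp(-1).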
Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E
2013-12-01
In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology which is computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results evince that the nonlinear approach results in a more efficient approximation to the solutions of the biofilm model considered, and demands less computer memory. Moreover, the positivity of initial profiles is preserved in the practice by the nonlinear scheme proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
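As a rough illustration of the kind of explicit growth-diffusion update discussed (this is not the paper's biofilm PDE or its Newton-based scheme), consider an explicit finite-difference step for a Fisher-KPP-type equation u_t = D u_xx + r u(1 - u), which raises the same positivity-preservation question mentioned in the abstract:

```python
import numpy as np

# Illustrative explicit nonlinear finite-difference step -- NOT the
# paper's biofilm model -- for u_t = D u_xx + r u (1 - u) on a periodic
# 1D grid. Parameters are chosen to respect the explicit diffusion limit.

def explicit_step(u, D, r, dx, dt):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # periodic Laplacian
    return u + dt * (D * lap + r * u * (1.0 - u))

n, D, r = 64, 1e-3, 1.0
dx = 1.0 / n
dt = 0.4 * dx**2 / D                 # within the explicit stability limit
u = 0.1 + 0.05 * np.sin(2.0 * np.pi * np.arange(n) / n)
for _ in range(200):
    u = explicit_step(u, D, r, dx, dt)
```

For time steps inside the stability limit, initial profiles in (0, 1) remain in (0, 1), the property the abstract reports for the nonlinear scheme in practice.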
Universality of entropy scaling in one dimensional gapless models.
Korepin, V E
2004-03-05
We consider critical models in one dimension. We study the ground state in the thermodynamic limit (infinite lattice). We are interested in an entropy of a subsystem. We calculate the entropy of a part of the ground state from a space interval (0,x). At zero temperature it describes the entanglement of the part of the ground state from this interval with the rest of the ground state. We obtain an explicit formula for the entropy of the subsystem at any temperature. At zero temperature our formula reproduces a logarithmic formula, discovered by Vidal, Latorre, Rico, and Kitaev for spin chains. We prove our formula by means of conformal field theory and the second law of thermodynamics. Our formula is universal. We illustrate it for a Bose gas with a delta interaction and for the Hubbard model.
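The universal scaling described here is conventionally written as follows (a standard conformal-field-theory form, in units where the velocity is 1, with lattice cutoff a and a non-universal constant c_1; conventions vary between references):

```latex
S(x, T) \;=\; \frac{c}{3}\,
\ln\!\left[\frac{\beta}{\pi a}\,\sinh\!\left(\frac{\pi x}{\beta}\right)\right]
+ c_1, \qquad \beta = \frac{1}{T},
```

which in the zero-temperature limit (beta to infinity) reduces to the logarithmic formula of Vidal, Latorre, Rico, and Kitaev, S(x) -> (c/3) ln(x/a) + c_1.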
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
The role of mesocosm studies in ecological risk analysis
Boyle, Terence P.; Fairchild, James F.
1997-01-01
Mesocosms have been primarily used as research tools for the evaluation of the fate and effects of xenobiotic chemicals at the population, community, and ecosystem levels of biological organization. This paper provides suggestions for future applications of mesocosm research. Attention should be given to the configuration of mesocosm parameters to explicitly study regional questions of ecological interest. The initial physical, chemical, and biological conditions within mesocosms should be considered as factors shaping the final results of experiments. Certain fundamental questions such as the ecological inertia and resilience of systems with different initial ecological properties should be addressed. Researchers should develop closer working relationships with mathematical modelers in linking computer models to the outcomes of mesocosm studies. Mesocosm tests, linked with models, could enable managers and regulators to forecast the regional consequences of chemicals released into the environment.
Age effects on explicit and implicit memory
Ward, Emma V.; Berry, Christopher J.; Shanks, David R.
2013-01-01
It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
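The workflow described, symbolic differentiation of a hyperelastic energy followed by automatic Fortran generation, can be sketched in modern terms with SymPy rather than MACSYMA. The one-term Ogden energy and incompressible uniaxial kinematics below are illustrative assumptions, not the paper's full multi-term formulation:

```python
import sympy as sp

# SymPy sketch of the symbolic-derivation-plus-code-generation workflow
# (the paper used MACSYMA). One-term Ogden energy, uniaxial stretch of an
# incompressible solid with principal stretches (lam, lam**-1/2, lam**-1/2).

lam, mu, alpha = sp.symbols('lam mu alpha', positive=True)

# One-term Ogden strain energy:
W = (mu / alpha) * (lam**alpha + 2 * lam**(-alpha / 2) - 3)

# Nominal (first Piola-Kirchhoff) uniaxial stress P = dW/dlam:
P = sp.simplify(sp.diff(W, lam))

# Automatic Fortran generation for the stress expression:
fortran_src = sp.fcode(P, assign_to='stress', standard=95)
```

For alpha = 2 the derived stress reduces to the neo-Hookean result mu*(lam - lam**-2), a quick sanity check on the symbolic derivation.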
Vertex functions at finite momentum: Application to antiferromagnetic quantum criticality
NASA Astrophysics Data System (ADS)
Wölfle, Peter; Abrahams, Elihu
2016-02-01
We analyze the three-point vertex function that describes the coupling of fermionic particle-hole pairs in a metal to spin or charge fluctuations at nonzero momentum. We consider Ward identities, which connect two-particle vertex functions to the self-energy, in the framework of a Hubbard model. These are derived using conservation laws following from local symmetries. The generators considered are the spin density and particle density. It is shown that at certain antiferromagnetic critical points, where the quasiparticle effective mass is diverging, the vertex function describing the coupling of particle-hole pairs to the spin density Fourier component at the antiferromagnetic wave vector is also divergent. Then we give an explicit calculation of the irreducible vertex function for the case of three-dimensional antiferromagnetic fluctuations, and show that it is proportional to the diverging quasiparticle effective mass.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
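A toy version of genetic-algorithm input selection (not the SSME application; the synthetic data, fitness function, and GA settings are invented for illustration) might look like:

```python
import numpy as np

# Toy GA input selection: chromosomes are bitmasks over candidate inputs;
# fitness is negative least-squares error plus a small penalty per input.
# Only inputs 0 and 3 actually drive the synthetic target.

rng = np.random.default_rng(0)
n_samples, n_inputs = 200, 10
X = rng.normal(size=(n_samples, n_inputs))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.normal(size=n_samples)

def fitness(mask):
    if not mask.any():
        return -np.inf
    A = X[:, mask]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return -np.mean((A @ coef - y) ** 2) - 0.01 * mask.sum()

pop = rng.random((30, n_inputs)) < 0.5
for _ in range(40):
    order = np.argsort([fitness(m) for m in pop])[::-1]
    parents = pop[order[:10]]                           # truncation selection
    children = [parents[0].copy(), parents[1].copy()]   # elitism
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = int(rng.integers(1, n_inputs))            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_inputs) < 0.05            # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[int(np.argmax([fitness(m) for m in pop]))]
```

The GA reliably recovers the two informative inputs without any problem-domain knowledge, which is the essential claim of the abstract in miniature.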
The need for spatially explicit quantification of benefits in invasive-species management.
Januchowski-Hartley, Stephanie R; Adams, Vanessa M; Hermoso, Virgilio
2018-04-01
Worldwide, invasive species are a leading driver of environmental change across terrestrial, marine, and freshwater environments and cost billions of dollars annually in ecological damages and economic losses. Resources limit invasive-species control, and planning processes are needed to identify cost-effective solutions. Thus, studies are increasingly considering spatially variable natural and socioeconomic assets (e.g., species persistence, recreational fishing) when planning the allocation of actions for invasive-species management. There is a need to improve understanding of how such assets are considered in invasive-species management. We reviewed over 1600 studies focused on management of invasive species, including flora and fauna. Eighty-four of these studies were included in our final analysis because they focused on the prioritization of actions for invasive species management. Forty-five percent (n = 38) of these studies were based on spatial optimization methods, and 35% (n = 13) accounted for spatially variable assets. Across all 84 optimization studies considered, 27% (n = 23) explicitly accounted for spatially variable assets. Based on our findings, we further explored the potential costs and benefits to invasive species management when spatially variable assets are explicitly considered or not. To include spatially variable assets in decision-making processes that guide invasive-species management there is a need to quantify environmental responses to invasive species and to enhance understanding of potential impacts of invasive species on different natural or socioeconomic assets. We suggest these gaps could be filled by systematic reviews, quantifying invasive species impacts on native species at different periods, and broadening sources and enhancing sharing of knowledge. © 2017 Society for Conservation Biology.
A k-Omega Turbulence Model for Quasi-Three-Dimensional Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
1995-01-01
A two-equation k-omega turbulence model has been developed and applied to a quasi-three-dimensional viscous analysis code for blade-to-blade flows in turbomachinery. The code includes the effects of rotation, radius change, and variable stream sheet thickness. The flow equations are given and the explicit Runge-Kutta solution scheme is described. The k-omega model equations are also given and the upwind implicit approximate-factorization solution scheme is described. Three cases were calculated: transitional flow over a flat plate, a transonic compressor rotor, and a transonic turbine vane with heat transfer. Results were compared to theory, experimental data, and to results using the Baldwin-Lomax turbulence model. The two models compared reasonably well with the data and surprisingly well with each other. Although the k-omega model behaves well numerically and simulates effects of transition, freestream turbulence, and wall roughness, it was not decisively better than the Baldwin-Lomax model for the cases considered here.
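For reference, standard Wilcox-type k-omega transport equations, which models of this family resemble, take the form below; the closure coefficients alpha, beta, beta*, sigma, sigma* differ between published versions, so this is a generic statement rather than the paper's exact quasi-three-dimensional model:

```latex
\frac{\partial k}{\partial t} + u_j \frac{\partial k}{\partial x_j}
= \frac{\tau_{ij}}{\rho}\frac{\partial u_i}{\partial x_j}
- \beta^{*} k\,\omega
+ \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma^{*}\nu_t)\frac{\partial k}{\partial x_j}\right],

\frac{\partial \omega}{\partial t} + u_j \frac{\partial \omega}{\partial x_j}
= \alpha\,\frac{\omega}{k}\,\frac{\tau_{ij}}{\rho}\frac{\partial u_i}{\partial x_j}
- \beta\,\omega^{2}
+ \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma\nu_t)\frac{\partial \omega}{\partial x_j}\right],
\qquad \nu_t = \frac{k}{\omega}.
```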
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
Class of self-limiting growth models in the presence of nonlinear diffusion
NASA Astrophysics Data System (ADS)
Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar
2002-06-01
The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.
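A concrete example of a growth law with an explicitly time-dependent source term is the Gompertz form dN/dt = r e^{-at} N, one member of the class alluded to (the paper's specific models may differ). A quick check of an explicit Euler integration against the closed-form solution:

```python
import numpy as np

# Gompertz-type growth with an explicitly time-dependent rate:
# dN/dt = r * exp(-a*t) * N, with closed form
# N(t) = N0 * exp((r/a) * (1 - exp(-a*t))).

def gompertz_numeric(N0, r, a, t_end, dt=1e-4):
    N, t = N0, 0.0
    while t < t_end - 1e-12:
        N += dt * r * np.exp(-a * t) * N   # explicit Euler step
        t += dt
    return N

def gompertz_exact(N0, r, a, t):
    return N0 * np.exp((r / a) * (1.0 - np.exp(-a * t)))

num = gompertz_numeric(1.0, r=1.0, a=0.5, t_end=2.0)
exact = gompertz_exact(1.0, 1.0, 0.5, 2.0)
```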
Causal relations among events and states in dynamic geographical phenomena
NASA Astrophysics Data System (ADS)
Huang, Zhaoqiang; Feng, Xuezhi; Xuan, Wenling; Chen, Xiuwan
2007-06-01
There is only a static state of the real world recorded in conventional geographical information systems. However, geographical phenomena contain not only static but also dynamic information, so how to record dynamic information and reveal the relations among dynamic information is an important issue for a spatio-temporal information system. From an ontological perspective, we can initially divide the spatio-temporal entities in the world into continuants and occurrents. Continuant entities endure through some extended (although possibly very short) interval of time (e.g., houses, roads, cities, and real estate). Occurrent entities happen and are then gone (e.g., a house repair job, a road construction project, urban expansion, a real-estate transition). From an information system perspective, continuants and occurrents that have a unique identity in the system are referred to as objects and events, respectively. In current spatio-temporal information systems, change is represented implicitly by static snapshots. In previous models, objects are considered the fundamental components of the system, and change is modeled through the time-varying attributes of these objects. In spatio-temporal databases, temporal information, whether intervals or instants, is involved, and the underlying temporal data structures and indexes have been investigated considerably. However, explicit ways of considering events, which affect the attributes or states of objects, are absent. The research issue of this paper therefore focuses on how to model events in conceptual models of dynamic geographical phenomena and how to represent the causal relations among events and objects or states. Firstly, the paper reviews conceptual modeling in temporal GIS. Secondly, it discusses the spatio-temporal entities: objects and events.
Thirdly, it investigates the causal relations among events and states. Qualitative spatio-temporal change is an important issue in dynamic geographic-scale phenomena; in real-estate transition, for example, events and states need to be represented explicitly. When modeling the evolution of a dynamic system, a causal view cannot be avoided. An object's transition is represented by the state of the object. Events cause the states of objects to change and cause other events to happen; events are thus closely connected with objects. The basic causal relations are the state-event and event-state relationships. Lastly, the paper concludes with an overview of the causal relations among events and states and points to future work.
Timescales and the management of ecological systems.
Hastings, Alan
2016-12-20
Human management of ecological systems, including issues like fisheries, invasive species, and restoration, as well as others, often must be undertaken with limited information. This means that developing general principles and heuristic approaches is important. Here, I focus on one aspect, the importance of an explicit consideration of time, which arises because of the inherent limitations in the response of ecological systems. I focus mainly on simple systems and models, beginning with systems without density dependence, which are therefore linear. Even for these systems, it is important to recognize the necessary delays in the response of the ecological system to management. Here, I also provide details for optimization that show how general results emerge and emphasize how delays due to demography and life histories can change the optimal management approach. A brief discussion of systems with density dependence and tipping points shows that the same themes emerge, namely, that when considering issues of restoration or management to change the state of an ecological system, that timescales need explicit consideration and may change the optimal approach in important ways.
NASA Technical Reports Server (NTRS)
Walker, K. P.; Freed, A. D.
1991-01-01
New methods for integrating systems of stiff, nonlinear, first order, ordinary differential equations are developed by casting the differential equations into integral form. Nonlinear recursive relations are obtained that allow the solution to a system of equations at time t plus delta t to be obtained in terms of the solution at time t in explicit and implicit forms. Examples of accuracy obtained with the new technique are given by considering systems of nonlinear, first order equations which arise in the study of unified models of viscoplastic behaviors, the spread of the AIDS virus, and predator-prey populations. In general, the new implicit algorithm is unconditionally stable, and has a Jacobian of smaller dimension than that which is acquired by current implicit methods, such as the Euler backward difference algorithm; yet, it gives superior accuracy. The asymptotic explicit and implicit algorithms are suitable for solutions that are of the growing and decaying exponential kinds, respectively, whilst the implicit Euler-Maclaurin algorithm is superior when the solution oscillates, i.e., when there are regions in which both growing and decaying exponential solutions exist.
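The stability contrast motivating such implicit algorithms can be seen on the classic stiff test problem y' = -lam*y; this sketch is not the paper's recursive integral-form method, just the textbook comparison it improves upon:

```python
# Stiff test problem y' = -lam*y: forward Euler diverges once
# dt > 2/lam, while backward Euler (solvable in closed form here)
# decays monotonically for any positive dt.

lam, dt, n = 1000.0, 0.01, 100       # dt is 5x the explicit limit 2/lam

y_exp = 1.0   # forward (explicit) Euler iterate
y_imp = 1.0   # backward (implicit) Euler iterate
for _ in range(n):
    y_exp = y_exp + dt * (-lam * y_exp)   # y <- (1 - lam*dt) * y : |factor| = 9
    y_imp = y_imp / (1.0 + lam * dt)      # y <- y / (1 + lam*dt) : factor = 1/11
```

The exact solution decays to essentially zero; the explicit iterate instead grows by a factor of nine per step, which is exactly the failure mode that unconditionally stable implicit schemes avoid.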
Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.
2008-01-01
A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationships among the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land-parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.
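The idea of storing explicit temporal relationships so that history queries avoid interval comparisons can be caricatured in a few lines; the class names and fields here are hypothetical, not the paper's ISO-based schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: each feature version keeps a direct link to its
# predecessor, so a history query walks explicit links instead of
# comparing time intervals across all stored versions.

@dataclass
class FeatureVersion:
    fid: str
    valid_from: int
    valid_to: int
    predecessor: Optional["FeatureVersion"] = None

def history(version: Optional[FeatureVersion]):
    """Return (valid_from, valid_to) pairs from first to latest version."""
    chain = []
    while version is not None:
        chain.append((version.valid_from, version.valid_to))
        version = version.predecessor
    return list(reversed(chain))

v1 = FeatureVersion("parcel-42", 1990, 1998)
v2 = FeatureVersion("parcel-42", 1998, 2005, predecessor=v1)
v3 = FeatureVersion("parcel-42", 2005, 2020, predecessor=v2)
```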
The number of reduced alignments between two DNA sequences
2014-01-01
Background In this study we consider DNA sequences as mathematical strings. Total and reduced alignments between two DNA sequences have been considered in the literature to measure their similarity. Results for explicit representations of some alignments have already been obtained. Results We present exact, explicit and computable formulas for the number of different possible alignments between two DNA sequences and a new formula for a class of reduced alignments. Conclusions A unified approach for a wide class of alignments between two DNA sequences has been provided. The formula is computable and, if complemented by software development, will provide a deeper insight into the theory of sequence alignment and give rise to new comparison methods. AMS Subject Classification Primary 92B05, 33C20; secondary 39A14, 65Q30 PMID:24684679
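For the unreduced case, one standard recurrence counts total alignments by classifying the final alignment column as an insertion, a deletion, or a (mis)match, giving the Delannoy numbers; the paper's reduced alignments obey different, more restrictive formulas:

```python
# Number of total alignments between sequences of lengths m and n:
# f(m, n) = f(m-1, n) + f(m, n-1) + f(m-1, n-1), f(0, k) = f(k, 0) = 1
# (the Delannoy numbers), filled in by dynamic programming.

def num_alignments(m, n):
    f = [[1] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            f[i][j] = f[i - 1][j] + f[i][j - 1] + f[i - 1][j - 1]
    return f[m][n]
```

Two length-3 sequences already admit 63 alignments, which is why explicit closed-form counts and reduced alignment classes are of interest.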
ERIC Educational Resources Information Center
Lo, C. Owen
2014-01-01
Using a realist grounded theory method, this study resulted in a theoretical model and 4 propositions. As displayed in the LINK model, the labeling practice is situated in and endorsed by a social context that carries explicit theory about and educational policies regarding the labels. Taking a developmental perspective, the labeling practice…
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation
NASA Astrophysics Data System (ADS)
Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.
2014-12-01
Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.
The mixed impact of medical school on medical students' implicit and explicit weight bias.
Phelan, Sean M; Puhl, Rebecca M; Burke, Sara E; Hardeman, Rachel; Dovidio, John F; Nelson, David B; Przedworski, Julia; Burgess, Diana J; Perry, Sylvia; Yeazel, Mark W; van Ryn, Michelle
2015-10-01
Health care trainees demonstrate implicit (automatic, unconscious) and explicit (conscious) bias against people from stigmatised and marginalised social groups, which can negatively influence communication and decision making. Medical schools are well positioned to intervene and reduce bias in new physicians. This study was designed to assess medical school factors that influence change in implicit and explicit bias against individuals from one stigmatised group: people with obesity. This was a prospective cohort study of medical students enrolled at 49 US medical schools randomly selected from all US medical schools within the strata of public and private schools and region. Participants were 1795 medical students surveyed at the beginning of their first year and end of their fourth year. Web-based surveys included measures of weight bias, and medical school experiences and climate. Bias change was compared with changes in bias in the general public over the same period. Linear mixed models were used to assess the impact of curriculum, contact with people with obesity, and faculty role modelling on weight bias change. Increased implicit and explicit biases were associated with less positive contact with patients with obesity and more exposure to faculty role modelling of discriminatory behaviour or negative comments about patients with obesity. Increased implicit bias was associated with training in how to deal with difficult patients. On average, implicit weight bias decreased and explicit bias increased during medical school, over a period of time in which implicit weight bias in the general public increased and explicit bias remained stable. Medical schools may reduce students' weight biases by increasing positive contact between students and patients with obesity, eliminating unprofessional role modelling by faculty members and residents, and altering curricula focused on treating difficult patients. © 2015 John Wiley & Sons Ltd.
Explicit and implicit learning: The case of computer programming
NASA Astrophysics Data System (ADS)
Mancy, Rebecca
The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. 
The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.
Russo, Tommaso; Parisi, Antonio; Garofalo, Germana; Gristina, Michele; Cataudella, Stefano; Fiorentino, Fabio
2014-01-01
Management of catches, effort and exploitation pattern are considered the most effective measures to control fishing mortality and ultimately ensure productivity and sustainability of fisheries. Despite the growing concerns about the spatial dimension of fisheries, the distribution of resources and fishing effort in space is seldom considered in assessment and management processes. Here we propose SMART (Spatial MAnagement of demersal Resources for Trawl fisheries), a tool for assessing bio-economic feedback in different management scenarios. SMART combines information from different tasks gathered within the European Data Collection Framework on fisheries and is composed of: 1) spatial models of fishing effort, environmental characteristics and distribution of demersal resources; 2) an Artificial Neural Network which captures the relationships among these aspects in a spatially explicit way and uses them to predict resources abundances; 3) a deterministic module which analyzes the size structure of catches and the associated revenues, according to different spatially-based management scenarios. SMART is applied to the demersal fishery in the Strait of Sicily, one of the most productive fisheries of the Mediterranean Sea. Three of the main target species are used as proxies for the whole range exploited by trawlers. After training, SMART is used to evaluate different management scenarios, including spatial closures, using a simulation approach that mimics the recent exploitation patterns. Results show good model performance, with a noteworthy coherence and reliability of outputs for the different components. Among others, the main finding is that a partial improvement in resource conditions can be achieved by means of nursery closures, even if the overall fishing effort in the area remains stable. 
Accordingly, a series of strategically designed areas of trawling closures could significantly improve the resource conditions of demersal fisheries in the Strait of Sicily, also supporting sustainable economic returns for fishermen if not applied simultaneously for different species. PMID:24465971
NASA Astrophysics Data System (ADS)
Naipal, V.; Wang, Y.; Ciais, P.; Guenet, B.; Lauerwald, R.
2017-12-01
The onset of agriculture has accelerated soil erosion rates significantly, mobilizing vast quantities of soil organic carbon (SOC) globally. Studies show that at timescales of decades to millennia this mobilized SOC can significantly alter previously estimated carbon emissions from land use and land cover change (LULCC). However, a full understanding of the impact of soil erosion on land-atmosphere carbon exchange is still missing. The aim of our study is to better constrain the terrestrial carbon fluxes by developing methods that are compatible with Earth system models (ESMs) and explicitly represent the links between soil erosion and carbon dynamics. For this we use an emulator that represents the carbon cycle of ORCHIDEE, which is the land component of the IPSL ESM, in combination with an adjusted version of the Revised Universal Soil Loss Equation (RUSLE) model. We applied this modeling framework at the global scale to evaluate how soil erosion influenced the terrestrial carbon cycle in the presence of elevated CO2, regional climate change and land use change. Here, we focus on the effects of soil detachment by erosion only and do not consider sediment transport and deposition. We found that including soil erosion in the SOC dynamics scheme resulted in twice as much SOC being lost during the historical period (1850-2005 AD). LULCC is the main contributor to this SOC loss, whose impact on the SOC stocks is significantly amplified by erosion. Regionally, the influence of soil erosion varies significantly, depending on the magnitude of the perturbations to the carbon cycle and the effects of LULCC and climate change on soil erosion rates. We conclude that it is necessary to include soil erosion in assessments of LULCC, and to explicitly consider the effects of elevated CO2 and climate change on the carbon cycle and on soil erosion, for better quantification of past, present, and future LULCC carbon emissions.
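The RUSLE model at the core of this framework predicts mean annual soil loss as a simple product of factors. The sketch below shows the classic form with purely illustrative factor values; the paper uses an adjusted variant coupled to ORCHIDEE's carbon cycle:

```python
# Classic (R)USLE soil-loss product A = R * K * LS * C * P.
# Factor values below are purely illustrative, not from the paper.

def rusle_soil_loss(R, K, LS, C, P):
    """Mean annual soil loss (e.g. t ha^-1 yr^-1): rainfall erosivity R,
    soil erodibility K, slope length/steepness LS, cover-management C,
    and support-practice P factors."""
    return R * K * LS * C * P

A = rusle_soil_loss(R=1000.0, K=0.03, LS=1.2, C=0.2, P=1.0)
```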