Studies of implicit and explicit solution techniques in transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-01-01
Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
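As an illustration of the explicit/implicit trade-off described in this abstract, the sketch below compares forward (explicit) and backward (implicit) Euler steps on a stiff two-node thermal model; it is not the SPAR/GEARIB code used in the study, and the capacitance, conductance, and load values are assumed purely for illustration.

```python
# Minimal sketch (not the paper's SPAR/GEARIB code): forward vs. backward Euler
# on a stiff two-node thermal model.  Capacitances, conductances, and the load
# are assumed values chosen only to make the stiffness visible.
import numpy as np

C = np.diag([1.0, 1.0e4])                 # heat capacities: thin insulation node, massive metal node
K = np.array([[ 50.0, -50.0],
              [-50.0,  50.0]])            # conductance matrix coupling the two nodes
q = np.array([500.0, 0.0])                # heating applied at the insulation surface
A = np.linalg.solve(C, -K)                # dT/dt = A T + b; the eigenvalue spread of A is the stiffness
b = np.linalg.solve(C, q)

def forward_euler(T, dt):                 # explicit: stable only if dt < 2/|lambda_max| (~0.04 here)
    return T + dt * (A @ T + b)

def backward_euler(T, dt):                # implicit: one linear solve per step, stable for any dt
    return np.linalg.solve(np.eye(2) - dt * A, T + dt * b)

T_exp, T_imp = np.zeros(2), np.zeros(2)
for _ in range(5000):                     # explicit integration of 100 s needs thousands of tiny steps
    T_exp = forward_euler(T_exp, dt=0.02)
for _ in range(100):                      # the implicit scheme covers the same 100 s in 100 steps
    T_imp = backward_euler(T_imp, dt=1.0)
print(T_exp, T_imp)                       # both track the same heating transient
```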
An explicit microphysics thunderstorm model.
R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor
2005-01-01
The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
SPATIALLY EXPLICIT MICRO-LEVEL MODELLING OF LAND USE CHANGE AT THE RURAL-URBAN INTERFACE. (R828012)
This paper describes micro-economic models of land use change applicable to the rural–urban interface in the US. Use of a spatially explicit micro-level modelling approach permits the analysis of regional patterns of land use as the aggregate outcomes of many, disparate...
NASA Astrophysics Data System (ADS)
He, Hongxing; Meyer, Astrid; Jansson, Per-Erik; Svensson, Magnus; Rütting, Tobias; Klemedtsson, Leif
2018-02-01
The symbiosis between plants and Ectomycorrhizal fungi (ECM) is shown to considerably influence the carbon (C) and nitrogen (N) fluxes between the soil, rhizosphere, and plants in boreal forest ecosystems. However, ECM are either neglected or presented as an implicit, undynamic term in most ecosystem models, which can potentially reduce the predictive power of models.
In order to investigate the necessity of an explicit consideration of ECM in ecosystem models, we implement the previously developed MYCOFON model into a detailed process-based, soil-plant-atmosphere model, Coup-MYCOFON, which explicitly describes the C and N fluxes between ECM and roots. This new Coup-MYCOFON model approach (ECM explicit) is compared with two simpler model approaches: one containing ECM implicitly as a dynamic uptake of organic N considering the plant roots to represent the ECM (ECM implicit), and the other a static N approach in which plant growth is limited to a fixed N level (nonlim). Parameter uncertainties are quantified using Bayesian calibration in which the model outputs are constrained to current forest growth and soil C / N ratio for four forest sites along a climate and N deposition gradient in Sweden and simulated over a 100-year period.
The nonlim approach could not describe the soil C / N ratio, due to a large overestimation of soil N sequestration, but simulated the forest growth reasonably well. The ECM implicit and explicit approaches both describe the soil C / N ratio well but slightly underestimate the forest growth. The implicit approach simulated lower litter production and soil respiration than the explicit approach. The ECM explicit Coup-MYCOFON model provides a more detailed description of internal ecosystem fluxes and feedbacks of C and N between plants, soil, and ECM. Our modeling highlights the need to incorporate ECM and organic N uptake into ecosystem models, and the nonlim approach is not recommended for future long-term soil C and N predictions. We also provide a key set of posterior fungal parameters that can be further investigated and evaluated in future ECM studies.
ERIC Educational Resources Information Center
Petty, Richard E.; Brinol, Pablo
2006-01-01
Comments on the article by B. Gawronski and G. V. Bodenhausen (see record 2006-10465-003). A metacognitive model (MCM) is presented to describe how automatic (implicit) and deliberative (explicit) measures of attitudes respond to change attempts. The model assumes that contemporary implicit measures tap quick evaluative associations, whereas…
Combining Model-driven and Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Denney, Ewen; Whittle, John
2004-01-01
We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
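For reference, the one-parameter "alpha" (generalized trapezoidal) family that such schemes build on can be written in a few lines; the sketch below is a generic linear-conduction example, not the authors' mixed implicit-explicit finite element formulation, and all matrices are assumed.

```python
# Sketch of the one-parameter "alpha" (generalized trapezoidal) family for
# C dT/dt + K T = F: alpha = 0 is explicit Euler, 0.5 is Crank-Nicolson, and
# 1 is backward Euler.  The authors' method additionally splits the mesh into
# implicit and explicit element groups, which is not reproduced here; the toy
# matrices below are assumed.
import numpy as np

def alpha_step(T, dt, C, K, F, alpha):
    """Advance T one time step of size dt with the chosen alpha."""
    lhs = C + alpha * dt * K
    rhs = (C - (1.0 - alpha) * dt * K) @ T + dt * F
    return np.linalg.solve(lhs, rhs)

n = 10
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1-D conduction matrix
C = np.eye(n)                                              # capacitance matrix
F = np.zeros(n); F[0] = 1.0                                # thermal load at one end
T = np.zeros(n)
for _ in range(200):
    T = alpha_step(T, dt=0.1, C=C, K=K, F=F, alpha=0.5)    # Crank-Nicolson; alpha > 0.5 adds dissipation
print(T[:3])
```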
NASA Technical Reports Server (NTRS)
Gilbertsen, Noreen D.; Belytschko, Ted
1990-01-01
The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
Class of self-limiting growth models in the presence of nonlinear diffusion
NASA Astrophysics Data System (ADS)
Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar
2002-06-01
The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.
On the performance of explicit and implicit algorithms for transient thermal analysis
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.
1980-09-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail such as avoiding thin or short high-conducting elements can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
Some aspects of algorithm performance and modeling in transient analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1981-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed, and a promising set of implicit algorithms with variable time steps, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, were selected and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures).
ERIC Educational Resources Information Center
Mellard, Daryl; Scanlon, David
2006-01-01
A strategic instruction model introduced into adult basic education classrooms yields insight into the feasibility of using direct and explicit instruction with adults with learning disabilities or other cognitive barriers to learning. Ecobehavioral assessment was used to describe and compare instructor-learner interaction patterns during learning…
Beyond the Sponge Model: Encouraging Students' Questioning Skills in Abnormal Psychology.
ERIC Educational Resources Information Center
Keeley, Stuart M.; Ali, Rahan; Gebing, Tracy
1998-01-01
Argues that educators should provide students with explicit training in asking critical questions. Describes a training strategy taught in abnormal psychology courses at Bowling Green State University (Ohio). Based on a pre- and post-test, results support the promise of using explicit questioning training in promoting the evaluative aspects of…
Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession
Hong S. He; David J. Mladenoff
1999-01-01
Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...
Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.
2015-01-01
Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
Kintsch, Walter; Mangalath, Praful
2011-04-01
We argue that word meanings are not stored in a mental lexicon but are generated in the context of working memory from long-term memory traces that record our experience with words. Current statistical models of semantics, such as latent semantic analysis and the Topic model, describe what is stored in long-term memory. The CI-2 model describes how this information is used to construct sentence meanings. This model is a dual-memory model, in that it distinguishes between a gist level and an explicit level. It also incorporates syntactic information about how words are used, derived from dependency grammar. The construction of meaning is conceptualized as feature sampling from the explicit memory traces, with the constraint that the sampling must be contextually relevant both semantically and syntactically. Semantic relevance is achieved by sampling topically relevant features; local syntactic constraints as expressed by dependency relations ensure syntactic relevance. Copyright © 2010 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov's integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows the explicit model to be formulated using a fractional exponential Rabotnov's integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparing with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Age effects on explicit and implicit memory
Ward, Emma V.; Berry, Christopher J.; Shanks, David R.
2013-01-01
It is well-documented that explicit memory (e.g., recognition) declines with age. In contrast, many argue that implicit memory (e.g., priming) is preserved in healthy aging. For example, priming on tasks such as perceptual identification is often not statistically different in groups of young and older adults. Such observations are commonly taken as evidence for distinct explicit and implicit learning/memory systems. In this article we discuss several lines of evidence that challenge this view. We describe how patterns of differential age-related decline may arise from differences in the ways in which the two forms of memory are commonly measured, and review recent research suggesting that under improved measurement methods, implicit memory is not age-invariant. Formal computational models are of considerable utility in revealing the nature of underlying systems. We report the results of applying single and multiple-systems models to data on age effects in implicit and explicit memory. Model comparison clearly favors the single-system view. Implications for the memory systems debate are discussed. PMID:24065942
Beta Regression Finite Mixture Models of Polarization and Priming
ERIC Educational Resources Information Center
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay
2011-01-01
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
Wagoner, Jason A.; Baker, Nathan A.
2006-01-01
Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
ERIC Educational Resources Information Center
Dennis, Minyi Shih; Knight, Jacqueline; Jerman, Olga
2016-01-01
This article describes how to teach fraction and percentage word problems using a model-drawing strategy. This cognitive strategy places emphasis on explicitly teaching students how to draw a schematic diagram to represent the qualitative relations described in the problem, and how to formulate the solution based on the schematic diagram. The…
HexSim: a modeling environment for ecology and conservation.
HexSim is a powerful and flexible new spatially-explicit, individual based modeling environment intended for use in ecology, conservation, genetics, epidemiology, toxicology, and other disciplines. We describe HexSim, illustrate past applications that contributed to our >10 year ...
Decision support systems in health economics.
Quaglini, S; Dazzi, L; Stefanelli, M; Barosi, G; Marchetti, M
1999-08-01
This article describes a system addressed to different health care professionals for building, using, and sharing decision support systems for resource allocation. The system deals with selected areas, namely the choice of diagnostic tests, the therapy planning, and the instrumentation purchase. Decision support is based on decision-analytic models, incorporating an explicit knowledge representation of both the medical domain knowledge and the economic evaluation theory. Application models are built on top of meta-models, that are used as guidelines for making explicit both the cost and effectiveness components. This approach improves the transparency and soundness of the collaborative decision-making process and facilitates the result interpretation.
BETR Global - A geographically explicit global-scale multimedia contaminant fate model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macleod, M.; Waldow, H. von; Tay, P.
2011-04-01
We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° x 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
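The kind of steady-state mass-balance calculation such models perform can be sketched generically; the code below is not the BETR Global implementation, and the compartments, rate constants, and emissions are assumed.

```python
# Generic sketch of a steady-state multimedia mass balance (not the BETR Global
# code): solve A m + E = 0, where A holds intercompartment transfer and loss
# rate constants.  All rate constants and emissions are assumed.
import numpy as np

comps = ['air', 'water', 'soil']
transfer = {('air', 'water'): 0.02, ('air', 'soil'): 0.01,     # 1/h, compartment-to-compartment
            ('water', 'air'): 0.005, ('water', 'soil'): 0.001,
            ('soil', 'air'): 0.002}
loss = {'air': 0.05, 'water': 0.01, 'soil': 0.001}             # degradation/advection out of the system
E = np.array([10.0, 0.0, 0.0])                                 # emission into air, kg/h

n = len(comps)
A = np.zeros((n, n))
for (src, dst), k in transfer.items():
    i, j = comps.index(src), comps.index(dst)
    A[j, i] += k                                               # gain of dst from src
    A[i, i] -= k                                               # matching loss from src
for c, k in loss.items():
    i = comps.index(c)
    A[i, i] -= k

m_ss = np.linalg.solve(-A, E)                                  # steady state: A m + E = 0
print(dict(zip(comps, m_ss)))
```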
Default contagion risks in Russian interbank market
NASA Astrophysics Data System (ADS)
Leonidov, A. V.; Rumyantsev, E. L.
2016-06-01
Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion is developed that explicitly takes into account the empirical bow-tie structure, which reflects the functionality of the corresponding nodes (borrowers, lenders, and banks that both borrow and lend), as well as the empirically observed degree distributions and disassortativity of the interbank network under consideration. The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests.
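A generic threshold-cascade sketch on a directed exposure network, shown below, illustrates the contagion mechanism such models quantify; it is not the authors' calibrated bow-tie model, and the network, exposures, and capital buffers are invented for illustration.

```python
# Illustrative threshold cascade on a directed interbank exposure network (a
# generic mechanism sketch, not the authors' calibrated bow-tie model).  An
# edge (lender, borrower, w) means the lender is exposed to the borrower for w.
import networkx as nx

def cascade(G, capital, initially_defaulted):
    """Return the set of banks in default once the cascade settles."""
    defaulted = set(initially_defaulted)
    losses = {bank: 0.0 for bank in G}
    frontier = list(initially_defaulted)              # newly defaulted, not yet propagated
    while frontier:
        borrower = frontier.pop()
        for lender in G.predecessors(borrower):       # lenders of the defaulted bank take the loss
            if lender in defaulted:
                continue
            losses[lender] += G[lender][borrower]['weight']
            if losses[lender] >= capital[lender]:     # equity wiped out -> new default
                defaulted.add(lender)
                frontier.append(lender)
    return defaulted

G = nx.DiGraph()
G.add_weighted_edges_from([('A', 'B', 5.0), ('B', 'C', 4.0), ('C', 'A', 1.0), ('A', 'C', 2.0)])
capital = {'A': 3.0, 'B': 3.0, 'C': 6.0}
print(cascade(G, capital, initially_defaulted={'C'}))  # {'A', 'B', 'C'}: the default of C spreads
```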
Teaching Communication Skills in Science: Tracing Teacher Change
ERIC Educational Resources Information Center
Spektor-Levy, Ornit; Eylon, Bat-Sheva; Scherz, Zahava
2008-01-01
This paper describes a general model for skills instruction and its implementation through the program "Scientific Communication" for acquiring learning skills. The model is characterized by modularity, explicit instruction, spiral integration into contents, practice in various contexts, and implementation in performance tasks. It requires…
A Process Model of Principal Selection.
ERIC Educational Resources Information Center
Flanigan, J. L.; And Others
A process model to assist school district superintendents in the selection of principals is presented in this paper. Components of the process are described, which include developing an action plan, formulating an explicit job description, advertising, assessing candidates' philosophy, conducting interview analyses, evaluating response to stress,…
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
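The matrix formulation can be sketched generically as follows; this is not the QMX code, and the connectivity, flows, volumes, loss rate, and emissions are assumed.

```python
# Generic sketch of the matrix formulation (not the QMX code): reach connectivity
# enters an n x n matrix, and steady-state concentrations follow from a balance of
# upstream inflow, local emission, advective outflow, and an overall first-order
# loss.  Flows, volumes, rate constant, and emissions are assumed.
import numpy as np

n = 4
conn = np.zeros((n, n))        # conn[i, j] = 1 if reach j discharges into reach i
conn[1, 0] = 1                 # reach 0 -> reach 1
conn[2, 1] = 1                 # reach 1 -> reach 2
conn[3, 2] = 1                 # reach 2 -> reach 3; re-segmenting means editing these entries only

Q = np.array([2.0, 2.5, 3.0, 3.5])       # outflow of each reach (m3/s)
V = np.array([1e5, 1.2e5, 1.5e5, 2e5])   # reach volumes (m3)
k = 1e-5                                 # overall first-order loss rate constant (1/s)
E = np.array([0.5, 0.0, 0.2, 0.0])       # local emissions (g/s)

# per reach i:  sum_j conn[i, j] * Q[j] * C[j] + E[i] = (Q[i] + k * V[i]) * C[i]
A = np.diag(Q + k * V) - conn * Q        # broadcasting places Q[j] on column j
C = np.linalg.solve(A, E)
print(C)                                 # steady-state concentrations (g/m3) per reach
```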
TRIM.FaTE Public Reference Library Documentation
TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.
We describe a research program aimed at integrating remotely sensed data with an ecosystem model (VELMA) and a soil-vegetation-atmosphere transfer (SVAT) model (SEBS) for generating spatially explicit, regional scale estimates of productivity (biomass) and energy\\mass exchanges i...
Modeling animal movements using stochastic differential equations
Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie
2004-01-01
We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
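A minimal sketch of simulating such a bivariate SDE with the Euler-Maruyama scheme is shown below; it is not the fitted elk model, and the drift field and noise amplitude are assumed.

```python
# Sketch of simulating a bivariate SDE for animal location with Euler-Maruyama
# (not the fitted elk model): dX = mu(X) dt + sigma dW.  The drift field (a
# gentle attraction toward a home-range centre) and sigma are assumed.
import numpy as np

rng = np.random.default_rng(0)

def mu(x, centre=np.array([0.0, 0.0]), strength=0.1):
    """Drift vector field: attraction toward the home-range centre."""
    return strength * (centre - x)

def simulate(x0, dt=1.0, n_steps=500, sigma=0.5):
    path = np.empty((n_steps + 1, 2))
    path[0] = x0
    for t in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=2)    # Brownian increments
        path[t + 1] = path[t] + mu(path[t]) * dt + sigma * dW
    return path

track = simulate(np.array([5.0, -3.0]))
print(track[-1])                                      # final simulated position
```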
Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš
2015-09-04
Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.
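The kind of model the package encodes can be illustrated with a small, runnable synchronous Boolean network; the sketch below is plain Python, not libSBML API usage, and the genes and rules are invented.

```python
# The kind of model the qual package encodes: discrete activity levels updated by
# logical rules.  This is a generic synchronous Boolean update written directly in
# Python, not libSBML API usage; the genes and rules are invented for illustration.
def step(state):
    """One synchronous update of a three-gene regulatory network."""
    return {
        'A': state['C'],                        # A is activated by C
        'B': state['A'] and not state['C'],     # B requires A and is repressed by C
        'C': not state['B'],                    # C is repressed by B
    }

state = {'A': False, 'B': False, 'C': True}
for _ in range(6):
    state = step(state)
    print(state)                                # the trajectory settles into an attractor
```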
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.
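As a hint of the LMI machinery such syntheses rely on, the sketch below solves a standard vertex-wise quadratic-stabilization feasibility problem for a polytopic system with cvxpy; it is not the paper's predictive control design, and the vertex matrices are assumed.

```python
# Minimal LMI feasibility sketch (quadratic stabilization at the vertices of a
# polytopic model), illustrating the kind of LMI computation such schemes rely on;
# this is not the paper's predictive control synthesis, and the matrices are assumed.
import cvxpy as cp
import numpy as np

A1 = np.array([[0.0, 1.0], [-1.0, 0.5]])    # two vertices of the polytopic description
A2 = np.array([[0.0, 1.0], [-1.5, 0.4]])
B = np.array([[0.0], [1.0]])

Q = cp.Variable((2, 2), symmetric=True)      # Q = P^{-1} of the common Lyapunov function
Y = cp.Variable((1, 2))                      # Y = K Q
eps = 1e-3
constraints = [Q >> eps * np.eye(2)]
for Ai in (A1, A2):                          # A_i Q + Q A_i' + B Y + Y' B' < 0 at every vertex
    constraints.append(Ai @ Q + Q @ Ai.T + B @ Y + Y.T @ B.T << -eps * np.eye(2))

cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)         # u = K x stabilizes the whole polytope
print(K)
```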
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu
2014-01-21
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
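A minimal sketch of propagating a single collective solvent coordinate with an overdamped Langevin equation is given below; the surface-hopping nonadiabatic part of the method is not reproduced, and the reorganization energy, friction, and temperature are assumed values.

```python
# Sketch of an overdamped Langevin equation for a single collective solvent
# coordinate on one diabatic (Marcus-like) free-energy parabola; the surface-
# hopping part of the method is omitted, and kT, the reorganization energy, and
# the friction are assumed values.
import numpy as np

rng = np.random.default_rng(1)
kT = 0.59                    # kcal/mol at roughly 300 K
lam = 20.0                   # reorganization energy (kcal/mol)
gamma = 1.0                  # friction coefficient (arbitrary units), sets the solvent relaxation time
dt = 1e-3                    # time step

def dU(x):
    """Gradient of a parabola U(x) = lam * x**2 centred at x = 0."""
    return 2.0 * lam * x

x = 1.0                      # start displaced from the minimum
for _ in range(20000):       # Euler-Maruyama integration of dx = -dU/gamma dt + noise
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.normal()
    x += -dU(x) * dt / gamma + noise
print(x)                     # fluctuates around 0 with variance ~ kT / (2 * lam)
```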
Integrating remote sensing and spatially explicit epidemiological modeling
NASA Astrophysics Data System (ADS)
Finger, Flavio; Knox, Allyn; Bertuzzo, Enrico; Mari, Lorenzo; Bompangue, Didier; Gatto, Marino; Rinaldo, Andrea
2015-04-01
Spatially explicit epidemiological models are a crucial tool for the prediction of epidemiological patterns in time and space as well as for the allocation of health care resources. In addition they can provide valuable information about epidemiological processes and allow for the identification of environmental drivers of the disease spread. Most epidemiological models rely on environmental data as inputs. These data can either be measured in the field by means of conventional instruments or obtained using remote sensing techniques that measure suitable proxies of the variables of interest. The latter benefit from several advantages over conventional methods, including data availability, which can be an issue especially in developing countries, and the spatial as well as temporal resolution of the data, which is particularly crucial for spatially explicit models. Here we present the case study of a spatially explicit, semi-mechanistic model applied to recurring cholera outbreaks in the Lake Kivu area (Democratic Republic of the Congo). The model describes the cholera incidence in eight health zones on the shore of the lake. Remotely sensed datasets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers. Human mobility and its effect on the disease spread is also taken into account. Several model configurations are tested on a data set of reported cases. The best models, accounting for different environmental drivers, and selected using the Akaike information criterion, are formally compared via cross validation. The best performing model accounts for seasonality, El Niño Southern Oscillation, precipitation and human mobility.
On Spatially Explicit Models of Epidemic and Endemic Cholera: The Haiti and Lake Kivu Case Studies.
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Bertuzzo, E.; Mari, L.; Finger, F.; Casagrandi, R.; Gatto, M.; Rodriguez-Iturbe, I.
2014-12-01
The first part of the Lecture deals with the predictive ability of mechanistic models for the Haitian cholera epidemic. Predictive models of epidemic cholera need to resolve, at suitable aggregation levels, spatial data pertaining to local communities, epidemiological records, hydrologic drivers, waterways, patterns of human mobility and proxies of exposure rates. A formal model comparison framework provides a quantitative assessment of the explanatory and predictive abilities of various model settings with different spatial aggregation levels. Intensive computations and objective model comparisons show that parsimonious spatially explicit models accounting for spatial connections have greater explanatory power than spatially disconnected ones for short- to intermediate calibration windows. In general, spatially connected models show better predictive ability than disconnected ones. We discuss the limits and validity of the various approaches and the pathway towards the development of case-specific predictive tools in the context of emergency management. The second part deals with approaches suitable for describing patterns of endemic cholera. Cholera outbreaks have been reported in the Democratic Republic of the Congo since the 1970s. Here we employ a spatially explicit, inhomogeneous Markov chain model to describe cholera incidence in eight health zones on the shore of Lake Kivu. Remotely sensed datasets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers in addition to baseline seasonality. The effect of human mobility is also modelled mechanistically. We test several models on a multi-year dataset of reported cholera cases. Fourteen models, accounting for different environmental drivers, are selected in calibration. Among these, the one accounting for seasonality, El Niño Southern Oscillation, precipitation and human mobility outperforms the others in cross-validation.
A Behavioral Model of Landscape Change in the Amazon Basin: The Colonist Case
NASA Technical Reports Server (NTRS)
Walker, R. A.; Drzyzga, S. A.; Li, Y. L.; Wi, J. G.; Caldas, M.; Arima, E.; Vergara, D.
2004-01-01
This paper presents the prototype of a predictive model capable of describing both magnitudes of deforestation and its spatial articulation into patterns of forest fragmentation. In a departure from other landscape models, it establishes an explicit behavioral foundation for algorithm development, predicated on notions of the peasant economy and on household production theory. It takes a 'bottom-up' approach, generating the process of land-cover change occurring at lot level together with the geography of a transportation system to describe regional landscape change. In other words, it translates the decentralized decisions of individual households into a collective, spatial impact. In so doing, the model unites the richness of survey research on farm households with the analytical rigor of spatial analysis enabled by geographic information systems (GIS). The paper describes earlier efforts at spatial modeling, provides a critique of the so-called spatially explicit model, and elaborates a behavioral foundation by considering farm practices of colonists in the Amazon basin. It then uses insight from the behavioral statement to motivate a GIS-based model architecture. The model is implemented for a long-standing colonization frontier in the eastern sector of the basin, along the Trans-Amazon Highway in the State of Para, Brazil. Results are subjected to both sensitivity analysis and error assessment, and suggestions are made about how the model could be improved.
NASA Astrophysics Data System (ADS)
Tulet, Pierre; Crassier, Vincent; Cousin, Frederic; Suhre, Karsten; Rosset, Robert
2005-09-01
Classical aerosol schemes use either a sectional (bin) or lognormal approach. Both approaches have particular capabilities and interests: the sectional approach is able to describe every kind of distribution, whereas the lognormal one makes an assumption about the form of the distribution with a smaller number of explicit variables. For this last reason we developed a three-moment lognormal aerosol scheme named ORILAM to be coupled into three-dimensional mesoscale or CTM models. This paper presents the concept and hypothesis of a range of aerosol processes such as nucleation, coagulation, condensation, sedimentation, and dry deposition. One particular interest of ORILAM is to keep the aerosol composition and distribution explicit (mass of each constituent, mean radius, and standard deviation of the distribution are explicit) using the prediction of three moments (m0, m3, and m6). The new model was evaluated by comparing simulations to measurements from the ESCOMPTE campaign and to a previously published aerosol model. The numerical cost of the lognormal mode is lower than two bins of the sectional one.
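The moment/parameter relations that make a three-moment lognormal representation closed are standard lognormal identities and can be sketched directly; this is not the ORILAM source, and the example mode values are assumed.

```python
# Standard lognormal moment identities behind a three-moment scheme (not the
# ORILAM source): the k-th radius moment of a mode is
#   m_k = N * r_g**k * exp(k**2 * ln(sigma_g)**2 / 2),
# so (m0, m3, m6) determine number N, median radius r_g and geometric std sigma_g.
import numpy as np

def moments_from_params(N, r_g, sigma_g):
    L = np.log(sigma_g) ** 2
    return np.array([N * r_g**k * np.exp(0.5 * k * k * L) for k in (0, 3, 6)])

def params_from_moments(m0, m3, m6):
    L = np.log(m0 * m6 / m3**2) / 9.0                 # ln(sigma_g)**2
    sigma_g = np.exp(np.sqrt(L))
    r_g = (m3 / m0) ** (1.0 / 3.0) * np.exp(-1.5 * L)
    return m0, r_g, sigma_g

m0, m3, m6 = moments_from_params(N=1e3, r_g=0.05, sigma_g=1.8)   # assumed aerosol mode
print(params_from_moments(m0, m3, m6))                           # recovers (1e3, 0.05, 1.8)
```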
Total Risk Integrated Methodology (TRIM) - TRIM.FaTE
TRIM.FaTE is a spatially explicit, compartmental mass balance model that describes the movement and transformation of pollutants over time, through a user-defined, bounded system that includes both biotic and abiotic compartments.
Explicit and Implicit Processes Constitute the Fast and Slow Processes of Sensorimotor Learning.
McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A
2015-07-01
A popular model of human sensorimotor learning suggests that a fast process and a slow process work in parallel to produce the canonical learning curve (Smith et al., 2006). Recent evidence supports the subdivision of sensorimotor learning into explicit and implicit processes that simultaneously subserve task performance (Taylor et al., 2014). We set out to test whether these two accounts of learning processes are homologous. Using a recently developed method to assay explicit and implicit learning directly in a sensorimotor task, along with a computational modeling analysis, we show that the fast process closely resembles explicit learning and the slow process approximates implicit learning. In addition, we provide evidence for a subdivision of the slow/implicit process into distinct manifestations of motor memory. We conclude that the two-state model of motor learning is a close approximation of sensorimotor learning, but it is unable to describe adequately the various implicit learning operations that forge the learning curve. Our results suggest that a wider net be cast in the search for the putative psychological mechanisms and neural substrates underlying the multiplicity of processes involved in motor learning. Copyright © 2015 the authors 0270-6474/15/359568-12$15.00/0.
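The two-state model referred to above (Smith et al., 2006) is compact enough to sketch directly; the retention and learning-rate parameters below are assumed, not the values fitted in the study.

```python
# Sketch of the two-state (fast/slow) model of Smith et al. (2006); the retention
# (A) and learning-rate (B) parameters are assumed, not those fitted in the study.
import numpy as np

A_f, B_f = 0.59, 0.21        # fast process: forgets quickly, learns quickly
A_s, B_s = 0.992, 0.02       # slow process: retains well, learns slowly

n_trials = 200
perturbation = np.full(n_trials, 30.0)       # e.g. a 30-degree visuomotor rotation
x_f = x_s = 0.0
adaptation = np.zeros(n_trials)
for t in range(n_trials):
    error = perturbation[t] - (x_f + x_s)    # residual error drives both processes
    x_f = A_f * x_f + B_f * error
    x_s = A_s * x_s + B_s * error
    adaptation[t] = x_f + x_s                # net output; the abstract maps fast ~ explicit, slow ~ implicit
print(adaptation[-1])
```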
Morgan, Anthony; Ogunbajo, Adedotun; Trent, Maria; Harper, Gary W.; Fortenberry, J. Dennis
2015-01-01
Sexually explicit material (SEM) (including Internet, video, and print) may play a key role in the lives of Black same-sex sexually active youth by providing the only information to learn about sexual development. There is limited school- and/or family-based sex education to serve as models for sexual behaviors for Black youth. We describe the role SEM plays in the sexual development of a sample of Black same-sex attracted (SSA) young adolescent men ages 15–19. Adolescents recruited from clinics, social networking sites, and through snowball sampling were invited to participate in a 90-min, semi-structured qualitative interview. Most participants described using SEM prior to their first same-sex sexual experience. Participants described using SEM primarily for sexual development, including learning about sexual organs and function, the mechanics of same-gender sex, and to negotiate one's sexual identity. Secondary functions were to determine readiness for sex; to learn about sexual performance, including understanding sexual roles and responsibilities (e.g., "top" or "bottom"); to introduce sexual performance scripts; and to develop models for how sex should feel (e.g., pleasure and pain). Youth also described engaging in sexual behaviors (including condom non-use and/or swallowing ejaculate) that were modeled on SEM. Comprehensive sexuality education programs should be designed to address the unmet needs of young, Black SSA young men, with explicit focus on sexual roles and behaviors that may be inaccurately portrayed and/or involve sexual risk-taking (such as unprotected anal intercourse and swallowing ejaculate) in SEM. This work also calls for development of Internet-based HIV/STI prevention strategies targeting young Black SSA men who may be accessing SEM. PMID:25677334
Sarlikioti, V; de Visser, P H B; Marcelis, L F M
2011-04-01
At present most process-based models and the majority of three-dimensional models include simplifications of plant architecture that can compromise the accuracy of light interception simulations and, accordingly, canopy photosynthesis. The aim of this paper is to analyse canopy heterogeneity of an explicitly described tomato canopy in relation to temporal dynamics of horizontal and vertical light distribution and photosynthesis under direct- and diffuse-light conditions. Detailed measurements of canopy architecture, light interception and leaf photosynthesis were carried out on a tomato crop. These data were used for the development and calibration of a functional-structural tomato model. The model consisted of an architectural static virtual plant coupled with a nested radiosity model for light calculations and a leaf photosynthesis module. Different scenarios of horizontal and vertical distribution of light interception, incident light and photosynthesis were investigated under diffuse and direct light conditions. Simulated light interception showed a good correspondence to the measured values. Explicitly described leaf angles resulted in higher light interception in the middle of the plant canopy compared with fixed and ellipsoidal leaf-angle distribution models, although the total light interception remained the same. The fraction of light intercepted at a north-south orientation of rows differed from east-west orientation by 10 % on winter days and 23 % on summer days. The horizontal distribution of photosynthesis differed significantly between the top, middle and lower canopy layers. Taking into account the vertical variation of leaf photosynthetic parameters in the canopy led to an increase of approx. 8 % in simulated canopy photosynthesis. Leaf angles of heterogeneous canopies should be explicitly described as they have a big impact on both light distribution and photosynthesis. In particular, the vertical variation of photosynthesis in the canopy is such that the experimental approach of photosynthesis measurements for model parameterization should be revised.
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
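A generic Lotka-Volterra competition system of the kind derived in the abstract can be sketched as follows; the parameters are assumed, and this is not the fitted coral-reef model.

```python
# Generic Lotka-Volterra competition system of the kind the abstract derives (not
# the fitted coral-reef model): cover of each state grows by colonizing space and
# declines through competition.  Growth rates and competition coefficients are assumed.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.5, 0.3])                  # intrinsic growth rates of two sessile taxa
alpha = np.array([[1.0, 0.6],             # alpha[i, j]: effect of taxon j on taxon i
                  [0.7, 1.0]])

def lotka_volterra(t, N):
    return r * N * (1.0 - alpha @ N)

sol = solve_ivp(lotka_volterra, t_span=(0.0, 200.0), y0=[0.05, 0.05])
print(sol.y[:, -1])                       # long-run cover fractions of the two taxa
```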
The Real World of the Ivory Tower: Linking Classroom and Practice via Pedagogical Modeling
ERIC Educational Resources Information Center
Campbell, Carolyn; Scott-Lincourt, Rose; Brennan, Kimberley
2008-01-01
The authors explore the pedagogical principles of congruency, modeling, and transfer of learning through the description and analysis of a course entitled "The Theory and Practice of Anti-oppressive Social Work." Initially reviewing the literature related to the above concepts, they describe an instructor's attempt to explicitly model, via a range…
Brenda Rashleigh; Gary D. Grossman
2005-01-01
We describe and analyze a spatially explicit, individual-based model for the local population dynamics of mottled sculpin (Cottus bairdi). The model simulated daily growth, mortality, movement and spawning of individuals within a reach of stream. Juvenile and adult growth was based on consumption bioenergetics of benthic macroinvertebrate prey;...
Self-sustained peristaltic waves: Explicit asymptotic solutions
NASA Astrophysics Data System (ADS)
Dudchenko, O. A.; Guria, G. Th.
2012-02-01
A simple nonlinear model for the coupled problem of fluid flow and contractile wall deformation is proposed to describe peristalsis. In the context of the model the ability of a transporting system to perform autonomous peristaltic pumping is interpreted as the ability to propagate sustained waves of wall deformation. Piecewise-linear approximations of nonlinear functions are used to analytically demonstrate the existence of traveling-wave solutions. Explicit formulas are derived which relate the speed of self-sustained peristaltic waves to the rheological properties of the transporting vessel and the transported fluid. The results may contribute to the development of diagnostic and therapeutic procedures for cases of peristaltic motility disorders.
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation
NASA Astrophysics Data System (ADS)
Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.
2014-12-01
Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can however be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.
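The volatility basis set (VBS) partitioning relation that such parameterizations rest on is a short, standard calculation; the sketch below is not GECKO-A output, and the bin saturation concentrations and masses are assumed.

```python
# Standard VBS gas-particle partitioning used by such parameterizations (not
# GECKO-A output): the particle-phase fraction of a bin with saturation
# concentration C* is 1 / (1 + C*/C_OA), iterated until C_OA is self-consistent.
# Bin C* values and total masses are assumed.
import numpy as np

C_star = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])   # saturation concentrations, ug/m3
C_total = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # gas + particle mass in each bin, ug/m3

C_OA = 1.0                                           # initial guess for organic aerosol mass
for _ in range(100):                                 # fixed-point iteration
    xi = 1.0 / (1.0 + C_star / C_OA)                 # particle-phase fraction per bin
    C_OA = float(np.sum(C_total * xi))
print(C_OA, xi)
```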
Eilmes, Andrzej; Kubisiak, Piotr
2010-01-21
Relative complexation energies for the lithium cation in acetonitrile and diethyl ether have been studied. Quantum-chemical calculations explicitly describing the solvation of Li(+) have been performed based on structures obtained from molecular dynamics simulations. The effect of an increasing number of solvent molecules beyond the first solvation shell has been found to consist in reduction of the differences in complexation energies for different coordination numbers. Explicit-solvation data have served as a benchmark to the results of polarizable continuum model (PCM) calculations. It has been demonstrated that the PCM approach can yield relative complexation energies comparable to the predictions based on molecular-level solvation, but at significantly lower computational cost. The best agreement between the explicit-solvation and the PCM results has been obtained when the van der Waals surface was adopted to build the molecular cavity.
Sierra/Solid Mechanics 4.48 User's Guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments, enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.
2005-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel, and frozen drops/hail]. Each type is described by its own size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions.
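For orientation, spectral-bin schemes of this kind typically carry each hydrometeor distribution on a mass-doubling grid; the sketch below shows such a grid for the 33 categories mentioned in the abstract (the doubling ratio is an assumption about common practice, not a detail taken from this paper):

\[
m_{k+1} = 2\,m_k, \qquad k = 1,\dots,32,
\]

so that each hydrometeor class is represented by 33 discrete number densities \(N_k \approx f(m_k)\,\Delta m_k\) rather than by a few bulk moments.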
Jansa, Václav
2017-01-01
Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model through dominant height (HDOM), the basal area of trees larger in diameter than a subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and the basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found most suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, whereas the model for European beech was tested using a partitioned dataset (a part of the main dataset). Variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. HCB increased with increasing competitive interactions described by the tree-centered competition measures (BAL or HCI) and with the species mixing effect described by BAPOR. A test of the mixed-effects HCB model, with the random effects estimated from at least four trees per sample plot in the validation data, confirmed that the model was precise enough for the prediction of HCB across a range of site qualities, tree sizes, stand densities, and stand structures. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. Growth simulations can be made from data that lack values for either crown ratio or HCB using these HCB models. PMID:29049391
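As an illustration only (the notation and exact covariate form below are ours, not the published equation), a logistic mixed-effects HCB specification consistent with the predictors named in the abstract could take the form

\[
HCB_{ij} \;=\; \frac{h_{ij}}{1 + \exp\!\big(\beta_0 + \beta_1\,HDOM_j + \beta_2\,BAL_{ij} + \beta_3\,BAPOR_{ij} + u_j\big)} + \varepsilon_{ij},
\qquad u_j \sim N(0,\sigma_u^2),
\]

where \(h_{ij}\) is total tree height, \(u_j\) is the plot-level random effect, and \(BAL_{ij}\) is replaced by \(HCI_{ij}\) in the spatially explicit variant.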
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
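For reference, the explicit multistage Runge-Kutta fine-grid scheme shared by the three methods can be written generically (the stage coefficients shown are typical choices, not necessarily those used in the paper) as

\[
u^{(0)} = u^n,\qquad
u^{(k)} = u^{(0)} - \alpha_k\,\Delta t\,R\big(u^{(k-1)}\big),\quad k = 1,\dots,m,\qquad
u^{n+1} = u^{(m)},
\]

with, for example, \(\alpha_k = 1/4,\ 1/3,\ 1/2,\ 1\) for a four-stage scheme; implicit residual smoothing replaces \(R\) by a smoothed residual, which relaxes the Courant-number limit and accelerates convergence.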
Accounting for Variability in Student Responses to Motion Questions
ERIC Educational Resources Information Center
Frank, Brian W.; Kanim, Stephen E.; Gomez, Luanna S.
2008-01-01
We describe the results of an experiment conducted to test predictions about student responses to questions about motion based on an explicit model of student thinking in terms of the cuing of a variety of different physical intuitions or conceptual resources. This particular model allows us to account for observed variations in patterns of…
Simple liquid models with corrected dielectric constants
Fennell, Christopher J.; Li, Libo; Dill, Ken A.
2012-01-01
Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
Putting proteins back into water
NASA Astrophysics Data System (ADS)
de Los Rios, Paolo; Caldarelli, Guido
2000-12-01
We introduce a simplified protein model where the solvent (water) degrees of freedom appear explicitly (although in an extremely simplified fashion). Using this model we are able to recover the thermodynamic phenomenology of proteins over a wide range of temperatures. In particular we describe both the warm and the cold protein denaturation within a single framework, while addressing important issues about the structure of model proteins.
Hong S. He; Wei Li; Brian R. Sturtevant; Jian Yang; Bo Z. Shang; Eric J. Gustafson; David J. Mladenoff
2005-01-01
LANDIS 4.0 is new-generation software that simulates forest landscape change over large spatial and temporal scales. It is used to explore how disturbances, succession, and management interact to determine forest composition and pattern. The paper also describes the software architecture and model assumptions, and provides detailed instructions on the use of the model.
The DIVA model: A neural theory of speech acquisition and production
Tourville, Jason A.; Guenther, Frank H.
2013-01-01
The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralized feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described. PMID:23667281
Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul
2002-07-29
Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, the capabilities of the toolkit, and discusses its evolution.
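The get/compute/put idiom with explicit locality described above can be illustrated with a self-contained toy; this is deliberately not the Global Arrays API, and all names below are invented for illustration only.

```python
import numpy as np

class ToyGlobalArray:
    """Toy stand-in for a logically shared, block-distributed array.

    Illustrates the explicit get/compute/put programming model only;
    this is NOT the Global Arrays API, just an in-memory sketch.
    """
    def __init__(self, length, nblocks):
        # In the real toolkit each block would live in the memory of one process.
        self.blocks = np.array_split(np.zeros(length), nblocks)

    def get(self, rank):
        # Explicit transfer: copy a (simulated) remote block into local storage.
        return self.blocks[rank].copy()

    def put(self, rank, local):
        # Explicit transfer: write the locally computed block back to the global array.
        self.blocks[rank] = local

ga = ToyGlobalArray(length=16, nblocks=4)
for rank in range(4):            # each "process" owns one block
    local = ga.get(rank)         # global -> local (locality made explicit)
    local += rank                # compute on fast local data
    ga.put(rank, local)          # local -> global
print(np.concatenate(ga.blocks))
```

In the real toolkit the blocks reside in physically distributed memory and the transfers are one-sided communication operations; the toy only mimics the programming model.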
Generalized Born Models of Macromolecular Solvation Effects
NASA Astrophysics Data System (ADS)
Bashford, Donald; Case, David A.
2000-10-01
It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
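A minimal sketch of the pair-wise analytical generalized Born form mentioned above, in reduced units and assuming the effective Born radii are already given (real implementations compute them from the molecular geometry and include the Coulomb constant and an interior dielectric):

```python
import numpy as np

def gb_polarization_energy(q, pos, born_radii, eps_solvent=78.5):
    """Still-type generalized Born polarization energy (reduced units).

    Double sum over all atom pairs, including i == j, so the Born
    self-energies are recovered automatically when r -> 0.
    """
    q = np.asarray(q, float); pos = np.asarray(pos, float)
    R = np.asarray(born_radii, float)
    pref = -0.5 * (1.0 - 1.0 / eps_solvent)
    energy = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            f_gb = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
            energy += q[i] * q[j] / f_gb
    return pref * energy

# two opposite unit charges, 3 length units apart (illustrative values)
print(gb_polarization_energy(q=[1.0, -1.0],
                             pos=[[0, 0, 0], [3, 0, 0]],
                             born_radii=[1.5, 1.5]))
```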
Implicit Versus Explicit Applications of the Tissue Residue Approach, Oral Presentation
Toxic effect models based on the relationship of toxic effects to chemical concentrations within receptor organism tissues can often be reformulated to describe the relationship of toxic effects to exposure concentrations without actual specification of the tissue concentrations....
A comparison of two central difference schemes for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.
1990-01-01
Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.
Explicit and implicit learning: The case of computer programming
NASA Astrophysics Data System (ADS)
Mancy, Rebecca
The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.
A statistical metadata model for clinical trials' data management.
Vardaki, Maria; Papageorgiou, Haralambos; Pentaris, Fragkiskos
2009-08-01
We introduce a statistical, process-oriented metadata model to describe the process of medical research data collection, management, results analysis and dissemination. Our approach explicitly provides a structure for pieces of information used in Clinical Study Data Management Systems, enabling a more active role for any associated metadata. Using the object-oriented paradigm, we describe the classes of our model that participate during the design of a clinical trial and the subsequent collection and management of the relevant data. The advantage of our approach is that we focus on presenting the structural inter-relation of these classes when used during dataset manipulation, by proposing certain transformations that model the simultaneous processing of both data and metadata. Our solution reduces the possibility of human errors and allows for the tracking of all changes made during a dataset's lifecycle. The explicit modeling of processing steps improves data quality and assists in the problem of handling data collected in different clinical trials. The case study illustrates the applicability of the proposed framework, demonstrating conceptually the simultaneous handling of datasets collected during two randomized clinical studies. Finally, we provide the main considerations for implementing the proposed framework into a modern Metadata-enabled Information System.
Finite-difference model for 3-D flow in bays and estuaries
Smith, Peter E.; Larock, Bruce E.
1993-01-01
This paper describes a semi-implicit finite-difference model for the numerical solution of three-dimensional flow in bays and estuaries. The model treats the gravity wave and vertical diffusion terms in the governing equations implicitly, and other terms explicitly. The model achieves essentially second-order accurate and stable solutions in strongly nonlinear problems by using a three-time-level leapfrog-trapezoidal scheme for the time integration.
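A minimal sketch of the three-time-level leapfrog-trapezoidal idea for a model ODE du/dt = f(u); the actual model applies this treatment to the explicitly handled terms of its three-dimensional flow equations, which are not reproduced here.

```python
import numpy as np

def leapfrog_trapezoidal(f, u0, dt, nsteps):
    """Leapfrog predictor followed by a trapezoidal corrector.

    Sketch for du/dt = f(u); the predictor spans two time levels and the
    corrector averages the right-hand side to suppress the leapfrog
    computational mode.
    """
    u_prev = u0
    u_curr = u0 + dt * f(u0)                                  # simple first step
    history = [u_prev, u_curr]
    for _ in range(nsteps - 1):
        u_star = u_prev + 2.0 * dt * f(u_curr)                # leapfrog predictor
        u_next = u_curr + 0.5 * dt * (f(u_curr) + f(u_star))  # trapezoidal corrector
        u_prev, u_curr = u_curr, u_next
        history.append(u_curr)
    return np.array(history)

# damped test problem du/dt = -0.5 u
print(leapfrog_trapezoidal(lambda u: -0.5 * u, u0=1.0, dt=0.1, nsteps=5))
```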
NASA Astrophysics Data System (ADS)
Yu, Fei; Ma, Xiaoyu; Deng, Wanling; Liou, Juin J.; Huang, Junkai
2017-11-01
A physics-based drain current compact model for amorphous InGaZnO (a-InGaZnO) thin-film transistors (TFTs) is proposed. As a key feature, the surface potential model accounts for both exponential tail and deep trap densities of states, which are essential to describe a-InGaZnO TFT electrical characteristics. The surface potential is solved explicitly, without an iterative correction step, and is suitable for circuit simulations. Furthermore, based on the surface potential, an explicit closed-form expression for the drain current is developed. For different operating voltages, the surface potential and drain current are verified against numerical results and experimental data, respectively. As a result, our model can predict the DC characteristics of a-InGaZnO TFTs.
Performability modeling with continuous accomplishment sets
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1979-01-01
A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.
Thomas C. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Joshua L. Lawler
2002-01-01
We describe our collective efforts to develop and apply methods for using FIA data to model forest resources and wildlife habitat. Our work demonstrates how flexible regression techniques, such as generalized additive models, can be linked with spatially explicit environmental information for the mapping of forest type and structure. We illustrate how these maps of...
Organic aerosol sources and partitioning in CMAQv5.2
We describe a major CMAQ update, available in version 5.2, which explicitly treats the semivolatile mass transfer of primary organic aerosol compounds, in agreement with available field and laboratory observations. Until this model release, CMAQ has considered these compounds to ...
DOT National Transportation Integrated Search
2009-02-01
This working paper describes a group of techniques for disaggregating origin-destination tables for truck forecasting that makes explicit use of observed traffic on a network. Six models within the group are presented, each of which uses nonlinea...
Latash, M L; Gottlieb, G L
1991-09-01
We describe a model for the regulation of fast, single-joint movements, based on the equilibrium-point hypothesis. Limb movement follows constant rate shifts of independently regulated neuromuscular variables. The independently regulated variables are tentatively identified as thresholds of a length sensitive reflex for each of the participating muscles. We use the model to predict EMG patterns associated with changes in the conditions of movement execution, specifically, changes in movement times, velocities, amplitudes, and moments of limb inertia. The approach provides a theoretical neural framework for the dual-strategy hypothesis, which considers certain movements to be results of one of two basic, speed-sensitive or speed-insensitive strategies. This model is advanced as an alternative to pattern-imposing models based on explicit regulation of timing and amplitudes of signals that are explicitly manifest in the EMG patterns.
Operational models of pharmacological agonism.
Black, J W; Leff, P
1983-12-22
The traditional receptor-stimulus model of agonism began with a description of drug action based on the law of mass action and has developed by a series of modifications, each accounting for new experimental evidence. By contrast, in this paper an approach to modelling agonism is taken that begins with the observation that experimental agonist-concentration effect, E/[A], curves are commonly hyperbolic and develops using the deduction that the relation between occupancy and effect must be hyperbolic if the law of mass action applies at the agonist-receptor level. The result is a general model that explicitly describes agonism by three parameters: an agonist-receptor dissociation constant, KA; the total receptor concentration, [R0]; and a parameter, KE, defining the transduction of agonist-receptor complex, AR, into pharmacological effect. The ratio, [R0]/KE, described here as the 'transducer ratio', tau, is a logical definition for the efficacy of an agonist in a system. The model may be extended to account for non-hyperbolic E/[A] curves with no loss of meaning. Analysis shows that an explicit formulation of the traditional receptor-stimulus model is one particular form of the general model but that it is not the simplest. An alternative model is proposed, representing the cognitive and transducer functions of a receptor, that describes agonist action with one fewer parameter than the traditional model. In addition, this model provides a chemical definition of intrinsic efficacy making this parameter experimentally accessible in principle. The alternative models are compared and contrasted with regard to their practical and conceptual utilities in experimental pharmacology.
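For reference, the simplest (hyperbolic) form of the operational model described above is usually written as

\[
E \;=\; \frac{E_m\,\tau\,[A]}{K_A + (1+\tau)\,[A]}, \qquad \tau = \frac{[R_0]}{K_E},
\]

where \(E_m\) is the maximal response the system can produce; the observed maximal agonist effect approaches \(E_m\,\tau/(1+\tau)\), so \(\tau\) quantifies efficacy in that system (standard notation, not quoted from the paper).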
Analysis of explicit model predictive control for path-following control
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handle such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handle such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
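Schematically, the explicit MPC approach described above solves offline a multi-parametric QP of the generic form (our notation, not the paper's)

\[
\min_{U}\ \tfrac12 U^{\top} H U + x^{\top} F U
\quad \text{s.t.} \quad G U \le W + E x,
\]

whose optimizer is piecewise affine in the measured state, \(U^{*}(x) = F_r x + g_r\) for \(x \in \mathcal{R}_r\), with polyhedral regions \(\mathcal{R}_r\) computed offline; the online controller then only locates the region containing the current state and evaluates an affine law, removing the online optimization entirely.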
An efficient hydro-mechanical model for coupled multi-porosity and discrete fracture porous media
NASA Astrophysics Data System (ADS)
Yan, Xia; Huang, Zhaoqin; Yao, Jun; Li, Yang; Fan, Dongyan; Zhang, Kai
2018-02-01
In this paper, a numerical model is developed for coupled analysis of deforming fractured porous media with multiscale fractures. In this model, the macro-fractures are modeled explicitly by the embedded discrete fracture model, and the supporting effects of fluid and fillings in these fractures are represented explicitly in the geomechanics model. On the other hand, matrix and micro-fractures are modeled by a multi-porosity model, which aims to accurately describe the transient matrix-fracture fluid exchange process. A stabilized extended finite element method scheme is developed based on the polynomial pressure projection technique to address the displacement oscillation along macro-fracture boundaries. After that, the mixed space discretization and modified fixed stress sequential implicit methods based on non-matching grids are applied to solve the coupling model. Finally, we demonstrate the accuracy and application of the proposed method to capture the coupled hydro-mechanical impacts of multiscale fractures on fractured porous media.
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
NASA Technical Reports Server (NTRS)
Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Different from existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives and avenues of development, and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) using nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.
Detailed modeling of the train-to-train impact test : rail passenger equipment impact tests
DOT National Transportation Integrated Search
2007-07-01
This report describes the results of a finite element-based analysis of the train-to-train impact test conducted at the Federal Railroad Administration's Transportation Technology Center in Pueblo, CO, on January 31, 2002. The ABAQUS/Explicit dynam...
Identification of internal properties of fibers and micro-swimmers
NASA Astrophysics Data System (ADS)
Plouraboue, Franck; Thiam, Ibrahima; Delmotte, Blaise; Climent, Eric; PSC Collaboration
2016-11-01
In this presentation we discuss the identifiability of the constitutive parameters of passive or active micro-swimmers. We first present a general framework for describing fibers or micro-swimmers using a bead-model description. Using a kinematic constraint formulation to describe fibers, flagella, or cilia, we find an explicit linear relationship between elastic constitutive parameters and generalized velocities obtained from computing contact forces. This linear formulation then permits us to address identifiability conditions explicitly and to solve for parameter identification. We show that active forcing and passive parameters are each identifiable independently but not simultaneously. We also provide unbiased estimators for elastic parameters as well as active ones in the presence of Langevin-like forcing with Gaussian noise, using normal linear regression models and the maximum likelihood method. These theoretical results are illustrated in various configurations of relaxed or actuated passive fibers, and of an active filament with known passive properties, showing the efficiency of the proposed approach for direct parameter identification. The convergence of the proposed estimators is successfully tested numerically.
Assessment of the GECKO-A modeling tool using chamber observations for C12 alkanes
NASA Astrophysics Data System (ADS)
Aumont, B.; La, S.; Ouzebidour, F.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J. M.; Hodzic, A.; Madronich, S.; Yee, L. D.; Loza, C. L.; Craven, J. S.; Zhang, X.; Seinfeld, J.
2013-12-01
Secondary Organic Aerosol (SOA) production and ageing is the result of atmospheric oxidation processes leading to the progressive formation of organic species with higher oxidation state and lower volatility. Explicit chemical mechanisms reflect our understanding of these multigenerational oxidation steps. Major uncertainties remain concerning the processes leading to SOA formation, and the development, assessment and improvement of such explicit schemes is therefore a key issue. The development of explicit mechanisms to describe the oxidation of long-chain hydrocarbons is, however, a challenge. Indeed, explicit oxidation schemes involve a large number of reactions and secondary organic species, far exceeding the size of chemical schemes that can be written manually. The chemical mechanism generator GECKO-A (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is a computer program designed to overcome this difficulty. GECKO-A generates gas phase oxidation schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In this study, we examine the ability of the generated schemes to explain SOA formation observed in the Caltech Environmental Chambers from various C12 alkane isomers under both high-NOx and low-NOx conditions. First results show that the model overestimates both the SOA yields and the O/C ratios. Various sensitivity tests are performed to explore processes that might be responsible for these disagreements.
Lattice gas models for particle systems in an underdamped hopping regime
NASA Astrophysics Data System (ADS)
Gobron, Thierry
A model is presented in which the state of the particle is described by a multicomponent vector, with each possible kinetic state of the particle associated with one of the components. A master equation describes the evolution of the probability distribution in an independent-particle model. From the master equation, and with the help of the symmetry group that leaves the state transition operator invariant, physical quantities such as the diffusion constant are explicitly calculated for several lattices in one, two, and three dimensions. A Boltzmann equation is established and compared to the Rice and Roth proposal.
NASA Astrophysics Data System (ADS)
Alexandridis, Konstantinos T.
This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic, and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making, and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards composing statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. It recognizes the gap in spatially explicit accuracy assessment techniques for complex spatial models and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the receiver-operating characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. The dissertation emphasizes modularity and robustness as valuable qualitative modeling attributes and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land use change. Finally, the major contributions to the science are presented along with valuable directions for future research.
ERIC Educational Resources Information Center
Cromley, Jennifer G.; Wills, Theodore W.
2016-01-01
Van den Broek's landscape model explicitly posits sequences of moves during reading in real time. Two other models that implicitly describe sequences of processes during reading are tested in the present research. Coded think-aloud data from 24 undergraduate students reading scientific text were analysed with lag-sequential techniques to compare…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; García-Ravelo, Jesús; Pacheco-García, Christian
We consider the Schrödinger equation in the Thomas–Fermi field, a model that has been used for describing electron systems in δ-doped semiconductors. It is shown that the problem becomes exactly solvable if a particular effective (position-dependent) mass distribution is incorporated. Orthogonal sets of normalizable bound-state solutions are constructed in explicit form, and the associated energies are determined. We compare our results with the corresponding findings on the constant-mass problem discussed by Ioriatti (1990) [13]. -- Highlights: ► We introduce an exactly solvable, position-dependent mass model for the Thomas–Fermi potential. ► Orthogonal sets of solutions to our model are constructed in closed form. ► Relation to delta-doped semiconductors is discussed. ► Explicit subband bottom energies are calculated and compared to results obtained in a previous study.
NASA Astrophysics Data System (ADS)
Dumont, E.; Harrison, J. A.; Kroeze, C.; Bakker, E. J.; Seitzinger, S. P.
2005-12-01
Here we describe, test, and apply a spatially explicit, global model for predicting dissolved inorganic nitrogen (DIN) export by rivers to coastal waters (NEWS-DIN). NEWS-DIN was developed as part of an internally consistent suite of global nutrient export models. Modeled and measured DIN export values agree well (calibration R2 = 0.79), and NEWS-DIN is relatively free of bias. NEWS-DIN predicts: DIN yields ranging from 0.0004 to 5217 kg N km-2 yr-1 with the highest DIN yields occurring in Europe and South East Asia; global DIN export to coastal waters of 25 Tg N yr-1, with 16 Tg N yr-1 from anthropogenic sources; biological N2 fixation is the dominant source of exported DIN; and globally, and on every continent except Africa, N fertilizer is the largest anthropogenic source of DIN export to coastal waters.
Indexing Theory and Retrieval Effectiveness.
ERIC Educational Resources Information Center
Robertson, Stephen E.
1978-01-01
Describes recent attempts to make explicit connections between the indexing process and the use of the index or information retrieval system, particularly the utility-theoretic and automatic indexing models of William Cooper and Stephen Harter. Theory and performance, information storage and retrieval, search stage feedback, and indexing are also…
MOAB: a spatially explicit, individual-based expert system for creating animal foraging models
Carter, J.; Finn, John T.
1999-01-01
We describe the development, structure, and corroboration process of a simulation model of animal behavior (MOAB). MOAB can create spatially explicit, individual-based animal foraging models. Users can create or replicate heterogeneous landscape patterns, and place resources and individual animals of a given species on that landscape to simultaneously simulate the foraging behavior of multiple species. The heuristic rules for animal behavior are maintained in a user-modifiable expert system. MOAB can be used to explore hypotheses concerning the influence of landscape pattern on animal movement and foraging behavior. A red fox (Vulpes vulpes L.) foraging and nest predation model was created to test MOAB's capabilities. Foxes were simulated for 30-day periods using both expert-system and random movement rules. Home range size, territory formation, and other results were compared with available simulation studies. A striped skunk (Mephitis mephitis L.) model also was developed. The expert-system model proved superior to the stochastic model with respect to territory formation, general movement patterns, and home range size.
Consumer-operated service program members' explanatory models of mental illness and recovery.
Hoy, Janet M
2014-10-01
Incorporating individuals' understandings and explanations of mental illness into service delivery offers benefits relating to increased service relevance and meaning. Existing research delineates explanatory models of mental illness held by individuals in home, outpatient, and hospital-based contexts; research on models held by those in peer-support contexts is notably absent. In this article, I describe themes identified within and across explanatory models of mental illness and recovery held by mental health consumers (N = 24) at one peer center, referred to as a consumer-operated service center (COSP). Participants held explanatory models inclusive of both developmental stressors and biomedical causes, consistent with a stress-diathesis model (although no participant explicitly referenced such). Explicit incorporation of stress-diathesis constructs into programming at this COSP offers the potential of increasing service meaning and relevance. Identifying and incorporating shared meanings across individuals' understandings of mental illness likewise can increase relevance and meaning for particular subgroups of service users. © The Author(s) 2014.
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution in order to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, to apply a high spatial resolution, and to represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).
A Descriptive and Evaluative Analysis of Program Planning Literature, 1950-1983.
ERIC Educational Resources Information Center
Sork, Thomas J.; Buskey, John H.
1986-01-01
Literature that presents a complete program planning model was described and analyzed using explicitly defined and uniformly applied descriptive and evaluative dimensions. Several observations about the current state of the program planning literature are made, and recommendations designed to strengthen the literature are offered. (Author/CT)
ERIC Educational Resources Information Center
Jouriles, Ernest N.; McDonald, Renee; Mueller, Victoria; Grych, John H.
2012-01-01
This article describes a conceptual model of cognitive and emotional processes proposed to mediate the relation between youth exposure to family violence and teen dating violence perpetration. Explicit beliefs about violence, internal knowledge structures, and executive functioning are hypothesized as cognitive mediators, and their potential…
A Social Cognitive View of Parental Influences on Student Academic Self-Regulation.
ERIC Educational Resources Information Center
Martinez-Pons, Manuel
2002-01-01
Discusses recent theory and research on parental activities that influence children's academic self-regulatory development, describing a social-cognitive perspective on academic self- regulation which assumes parents function as implicit and explicit social models for their children and socially support their emulation and adaptive use of…
Simulating dispersal of reintroduced species within heterogeneous landscapes
Robert H. Gardner; Eric J. Gustafson
2004-01-01
This paper describes the development and application of a spatially explicit, individual based model of animal dispersal (J-walk) to determine the relative effects of landscape heterogeneity, prey availability, predation risk, and the energy requirements and behavior of dispersing organisms on dispersal success. Significant unknowns exist for the simulation of complex...
MODIA: Vol. 4. The Resource Utilization Model. A Project AIR FORCE Report.
ERIC Educational Resources Information Center
Gallegos, Margaret
MODIA (Method of Designing Instructional Alternatives) was developed to help the Air Force manage resources for formal training by systematically and explicitly relating quantitative requirements for training resources to the details of course design and course operation during the planning stage. This report describes the Resource Utilization…
Defining Alcohol-Specific Rules Among Parents of Older Adolescents: Moving Beyond No Tolerance.
Bourdeau, Beth; Miller, Brenda; Vanya, Magdalena; Duke, Michael; Ames, Genevieve
2012-01-01
Parental beliefs and rules regarding their teen's use of alcohol influence teen decisions regarding alcohol use. However, measurement of parental rules regarding adolescent alcohol use has not been thoroughly studied. This study used qualitative interviews with 174 parents of older teens from 100 families. From open-ended questions, themes emerged that describe explicit rules tied to circumscribed use, no tolerance, and "call me." There was some inconsistency in explicit rules with and between parents. Responses also generated themes relating to implicit rules such as expectations and preferences. Parents described their methods of communicating their position via conversational methods, role modeling their own behavior, teaching socially appropriate use of alcohol by offering their teen alcohol, and monitoring their teens' social activities. Findings indicate that alcohol rules are not adequately captured by current assessment measures.
Defining Alcohol-Specific Rules Among Parents of Older Adolescents: Moving Beyond No Tolerance
Bourdeau, Beth; Miller, Brenda; Vanya, Magdalena; Duke, Michael; Ames, Genevieve
2012-01-01
Parental beliefs and rules regarding their teen’s use of alcohol influence teen decisions regarding alcohol use. However, measurement of parental rules regarding adolescent alcohol use has not been thoroughly studied. This study used qualitative interviews with 174 parents of older teens from 100 families. From open-ended questions, themes emerged that describe explicit rules tied to circumscribed use, no tolerance, and “call me.” There was some inconsistency in explicit rules with and between parents. Responses also generated themes relating to implicit rules such as expectations and preferences. Parents described their methods of communicating their position via conversational methods, role modeling their own behavior, teaching socially appropriate use of alcohol by offering their teen alcohol, and monitoring their teens’ social activities. Findings indicate that alcohol rules are not adequately captured by current assessment measures. PMID:23204931
A new method for calculating time-dependent atomic level populations
NASA Technical Reports Server (NTRS)
Kastner, S. O.
1981-01-01
A method is described for reducing the number of levels to be dealt with in calculating time-dependent populations of atoms or ions in plasmas. The procedure effectively extends the collisional-radiative model to consecutive stages of ionization, treating ground and metastable levels explicitly and excited levels implicitly. Direct comparisons of full and simulated systems are carried out for five-level models.
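Schematically, the level populations obey rate equations of the form (generic collisional-radiative notation, not reproduced from the paper)

\[
\frac{dn_i}{dt} \;=\; \sum_{j \ne i}\big(A_{ji} + n_e C_{ji}\big)\,n_j \;-\; n_i \sum_{j \ne i}\big(A_{ij} + n_e C_{ij}\big),
\]

where the \(A\) are radiative rates, the \(C\) collisional rate coefficients, and \(n_e\) the electron density; a reduction of the kind described keeps these equations time-dependent for ground and metastable levels while setting \(dn_i/dt \approx 0\) for the excited levels, which can then be eliminated algebraically.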
Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes
NASA Astrophysics Data System (ADS)
Zhou, Dong
In this dissertation a meso-scale, substructure-based, composite single crystal model is fully developed from the simple uniaxial model to the 3-D finite element method (FEM) model with explicit substructures and further with substructure evolution parameters, to simulate the completely reversed, strain controlled, low plastic strain amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick type kinematic hardening rules are applied to substructures on slip systems in the model to describe the kinematic hardening behavior of crystals. Three explicit substructure components are assumed in the composite single crystal model, namely "loop patches" and "channels" which are aligned in parallel in a "vein matrix," and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of uniaxial assumption is verified in the 3-D FEM model with explicit substructures. With information gathered from experiments, all control parameters in the model including hardening parameters, volume fraction of loop patches and PSBs, and variation of Young's modulus etc. are correlated to cumulative plastic strain and/or plastic strain amplitude; and the whole cyclic deformation history of single crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. Then these parameters are implanted in the 3-D FEM model to simulate the formation of PSB bands. A resolved shear stress criterion is set to trigger the formation of PSBs, and stress perturbation in the specimen is obtained by several elements assigned with PSB material properties a priori. Displacement increment, plastic strain amplitude control and overall stress-strain monitor and output are carried out in the user subroutine DISP and URDFIL of ABAQUS, respectively, while constitutive formulations of the FEM model are coded and implemented in UMAT. The results of the simulations are compared to experiments. This model verified the validity of Winter's two-phase model and Taylor's uniform stress assumption, explored substructure evolution and "intrinsic" behavior in substructures and successfully simulated the process of PSB band formation and propagation.
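For reference, an Armstrong-Frederick type kinematic hardening rule of the kind assigned to each substructure component has the generic form (the dissertation's per-substructure parameter values are not reproduced here)

\[
\dot{\boldsymbol{\alpha}} \;=\; \tfrac{2}{3}\,C\,\dot{\boldsymbol{\varepsilon}}^{p} \;-\; \gamma\,\boldsymbol{\alpha}\,\dot{p},
\]

where \(\boldsymbol{\alpha}\) is the backstress, \(\dot{\boldsymbol{\varepsilon}}^{p}\) the plastic strain rate, \(\dot{p}\) the accumulated plastic strain rate, and \(C,\ \gamma\) material parameters; the recall term \(-\gamma\boldsymbol{\alpha}\dot{p}\) produces the saturating hysteresis loops characteristic of cyclic loading.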
NASA Astrophysics Data System (ADS)
Aumont, B.; Camredon, M.; Isaacman-VanWertz, G. A.; Karam, C.; Valorso, R.; Madronich, S.; Kroll, J. H.
2016-12-01
Gas phase oxidation of VOC is a gradual process leading to the formation of multifunctional organic compounds, i.e., typically species with higher oxidation state, high water solubility and low volatility. These species contribute to the formation of secondary organic aerosols (SOA) via multiphase processes involving a myriad of organic species that evolve through thousands of reactions and gas/particle mass exchanges. Explicit chemical mechanisms reflect the understanding of these multigenerational oxidation steps. These mechanisms rely directly on elementary reactions to describe the chemical evolution and track the identity of organic carbon through various phases down to ultimate oxidation products. The development, assessment and improvement of such explicit schemes is a key issue, as major uncertainties remain on the chemical pathways involved during atmospheric oxidation of organic matter. An array of mass spectrometric techniques (CIMS, PTRMS, AMS) was recently used to track the composition of organic species during α-pinene oxidation in the MIT environmental chamber, providing an experimental database to evaluate and improve explicit mechanisms. In this study, the GECKO-A tool (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to generate fully explicit oxidation schemes for α-pinene multiphase oxidation simulating the MIT experiment. The ability of the GECKO-A chemical scheme to explain the organic molecular composition in the gas and the condensed phases is explored. First results of this model/observation comparison at the molecular level will be presented.
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
A new solution method for wheel/rail rolling contact.
Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei
2016-01-01
To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed with the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of the implicit and explicit algorithms. The solution method first calculates the pre-loading of wheel/rail rolling contact with the explicit algorithm, and the results then become the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has a faster operation speed and higher efficiency than the implicit-explicit order solution method, while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.
A second generation distributed point polarizable water model.
Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D
2010-01-07
A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.
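The n-body expansion referred to above decomposes the cluster interaction energy as (standard definition, not specific to this paper)

\[
E(1,\dots,N) \;=\; \sum_{i} E_i \;+\; \sum_{i<j} \Delta E_{ij} \;+\; \sum_{i<j<k} \Delta E_{ijk} \;+\; \cdots,
\]

so accurately describing "the individual terms" means reproducing the two-body, three-body, and higher-order corrections separately, not merely their sum.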
Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction
NASA Astrophysics Data System (ADS)
Rosaler, Joshua
2018-03-01
This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum-classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit ħ → 0, and, on the other, a certain generalization of Ehrenfest's Theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act, specifically, via restriction to narrow wave packet states. Here, we describe a certain geometrical re-formulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization, but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.
Interrelations between different canonical descriptions of dissipative systems
NASA Astrophysics Data System (ADS)
Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.
2015-04-01
There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.
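As a concrete reminder of the construction being compared, the Bateman Hamiltonian for a damped oscillator of frequency ω and damping γ (unit mass; sign conventions differ between references) can be written as

```latex
H_{\mathrm{Bateman}} \;=\; p_x\,p_y
  \;+\; \frac{\gamma}{2}\left( y\,p_y - x\,p_x \right)
  \;+\; \left( \omega^{2} - \frac{\gamma^{2}}{4} \right) x\,y ,
```

which reproduces ẍ + γẋ + ω²x = 0 for the physical coordinate x together with the time-reversed equation ÿ − γẏ + ω²y = 0 for the auxiliary coordinate y that plays the role of the environmental degree of freedom eliminated by the constraints discussed above.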
Ultrasonic modeling of an embedded elliptic crack
NASA Astrophysics Data System (ADS)
Fradkin, Larissa Ju.; Zalipaev, Victor
2000-05-01
Experiments indicate that the radiating near zone of a compressional circular transducer directly coupled to a homogeneous and isotropic solid has the following structure: there are geometrical zones where one can distinguish a plane compressional wave and toroidal waves, both compressional and shear, radiated by the transducer rim. As has been shown previously, modern diffraction theory allows these to be described explicitly. It also gives an explicit asymptotic description of the waves present in the transition zones. In the case of normal incidence of a plane compressional wave, explicit expressions have been obtained by Achenbach and co-authors for the fields diffracted by a penny-shaped crack. We build on the above work by applying the uniform GTD to model an oblique incidence of a plane compressional wave on an elliptical crack. We compare our asymptotic results with numerical results based on the boundary integral code as developed by Glushkovs, Krasnodar University, Russia. The asymptotic formulas form the basis of a code for high-frequency simulation of ultrasonic scattering by elliptical cracks situated in the vicinity of a compressional circular transducer, currently under development at our Center.
Explicit formulation of second and third order optical nonlinearity in the FDTD framework
NASA Astrophysics Data System (ADS)
Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas
2018-01-01
The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires iteratively solving an implicit form of Maxwell's equations over the entire numerical space and at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
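To illustrate why such a formulation stays explicit, the sketch below couples a one-dimensional Yee-style FDTD loop to an anharmonic Lorentz oscillator integrated with a simple leapfrog update; the cubic restoring-force term and all parameter values are illustrative placeholders, not the paper's exact nonlinear Lorentz model.

```python
import numpy as np

# Illustrative 1D FDTD loop with an explicit, anharmonic Lorentz polarization.
# The cubic term 'b * P**3' stands in for a third-order nonlinearity; all values
# are arbitrary test numbers, not fitted material parameters.

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
c0 = 1.0 / np.sqrt(eps0 * mu0)
nx, nt = 400, 2000
dx = 1e-8
dt = 0.5 * dx / c0                       # CFL-stable time step

w0, wp, gamma = 2e15, 1e15, 1e13         # Lorentz parameters (illustrative)
b = 1e40                                 # cubic restoring-force coefficient (illustrative)

E = np.zeros(nx); H = np.zeros(nx)
P = np.zeros(nx); P_old = np.zeros(nx)

for n in range(nt):
    # Soft source: short optical pulse injected at one grid point
    E[50] += np.exp(-((n * dt - 60e-16) / 20e-16) ** 2) * np.cos(2e15 * n * dt)

    # Explicit second-order update of the (nonlinear) Lorentz polarization
    accel = eps0 * wp**2 * E - gamma * (P - P_old) / dt - w0**2 * P - b * P**3
    P_new = 2.0 * P - P_old + dt**2 * accel

    # Standard 1D Yee updates, with dP/dt entering Ampere's law as a current
    H[:-1] += dt / (mu0 * dx) * (E[1:] - E[:-1])
    E[1:]  += dt / (eps0 * dx) * (H[1:] - H[:-1]) - (P_new[1:] - P[1:]) / eps0

    P_old, P = P, P_new

print("max |E| after run:", np.abs(E).max())
```

Because the polarization update uses only already-known field values, no iterative solve is needed at any time step, which is the property the abstract emphasizes.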
Knotted fields and explicit fibrations for lemniscate knots
NASA Astrophysics Data System (ADS)
Bode, B.; Dennis, M. R.; Foster, D.; King, R. P.
2017-06-01
We give an explicit construction of complex maps whose nodal lines have the form of lemniscate knots. We review the properties of lemniscate knots, defined as closures of braids where all strands follow the same transverse (1, ℓ) Lissajous figure, and are therefore a subfamily of spiral knots generalizing the torus knots. We then prove that such maps exist and are in fact fibrations with appropriate choices of parameters. We describe how this may be useful in physics for creating knotted fields, in quantum mechanics, optics and generalizing to rational maps with application to the Skyrme-Faddeev model. We also prove how this construction extends to maps with weakly isolated singularities.
Design, evaluation and test of an electronic, multivariable control for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Skira, C. A.; Dehoff, R. L.; Hall, W. E., Jr.
1980-01-01
A digital, multivariable control design procedure for the F100 turbofan engine is described. The controller is based on locally linear synthesis techniques using linear, quadratic regulator design methods. The control structure uses an explicit model reference form with proportional and integral feedback near a nominal trajectory. Modeling issues, design procedures for the control law and the estimation of poorly measured variables are presented.
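A minimal sketch of the linear-quadratic-regulator piece of such a design, with integral augmentation to obtain proportional-plus-integral feedback, is shown below; the plant matrices are made-up placeholders, not an F100 model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized plant: x' = A x + B u (states and input are placeholders)
A = np.array([[-2.0, 0.5],
              [0.1, -1.5]])
B = np.array([[1.0],
              [0.3]])

# Augment with the integral of the tracking error to get proportional + integral action
C = np.array([[1.0, 0.0]])                      # tracked output (assumed)
A_aug = np.block([[A, np.zeros((2, 1))],
                  [-C, np.zeros((1, 1))]])      # extra state: integral of (r - y)
B_aug = np.vstack([B, np.zeros((1, 1))])

Q = np.diag([10.0, 1.0, 5.0])                   # state + integral weights (illustrative)
R = np.array([[1.0]])

P = solve_continuous_are(A_aug, B_aug, Q, R)    # Riccati solution
K = np.linalg.solve(R, B_aug.T @ P)             # feedback law u = -K [x; z]
print("LQR gain with integral augmentation:\n", K)
```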
SCOSII OL: A dedicated language for mission operations
NASA Technical Reports Server (NTRS)
Baldi, Andrea; Elgaard, Dennis; Lynenskjold, Steen; Pecchioli, Mauro
1994-01-01
The Spacecraft Control and Operations System 2 (SCOSII) is the new generation of Mission Control Systems (MCS) to be used at ESOC. The system is generic because it offers a collection of standard functions configured through a database upon which a dedicated MCS is established for a given mission. An integral component of SCOSII is the support of a dedicated Operations Language (OL). The spacecraft operation engineers edit, test, validate, and install OL scripts as part of the configuration of the system with, e.g., expressions for computing derived parameters and procedures for performing flight operations, all without involvement of software support engineers. A layered approach has been adopted for the implementation, centered around the explicit representation of a data model. The data model is object-oriented, defining the structure of the objects in terms of attributes (data) and services (functions) which can be accessed by the OL. SCOSII supports the creation of a mission model. System elements such as, e.g., a gyro are explicit, as are the attributes which describe them and the services they provide. The data model driven approach makes it possible to take immediate advantage of this higher level of abstraction without requiring expansion of the language. This article describes the background and context leading to the OL, concepts, language facilities, implementation, status, and conclusions found so far.
Network-Based Visual Analysis of Tabular Data
ERIC Educational Resources Information Center
Liu, Zhicheng
2012-01-01
Tabular data is pervasive in the form of spreadsheets and relational databases. Although tables often describe multivariate data without explicit network semantics, it may be advantageous to explore the data modeled as a graph or network for analysis. Even when a given table design conveys some static network semantics, analysts may want to look…
Timothy A. Martin; Kurt H. Johnsen; Timothy L. White
2001-01-01
Indirect genetic selection for early growth and disease resistance of southern pines has proven remarkably successful over the past several decades. However, several benefits could be derived for southern pine breeding programs by incorporating ideotypes, conceptual models which explicitly describe plant phenotypic characteristics that are hypothesized to produce...
ERIC Educational Resources Information Center
Domingo, Jennifer P.; Abualia, Mohammed; Barragan, Diana; Schroeder, Lianne; Wink, Donald J.; King, Maripat; Clark, Ginevra A.
2017-01-01
Introductory Chemistry laboratories must go beyond "cookbook" methods to illustrate how chemistry concepts apply to complex, real-world problems. In our case, we are preparing students to use their chemistry knowledge in the healthcare profession. The experiment described here explicitly models three important chemical concepts: dialysis…
Placing Parent Education in Conceptual and Empirical Context.
ERIC Educational Resources Information Center
Dunst, Carl J.
1999-01-01
This response to Mahoney et al. (EC 623 392), although agreeing that parent education needs to be reemphasized, disagrees with the reasons offered for why parent education is not a more explicit focus of current early-intervention efforts. Alternative approaches, such as family-centered practices and family support, are described. A model that…
Theory of the evolutionary minority game
NASA Astrophysics Data System (ADS)
Lo, T. S.; Hui, P. M.; Johnson, N. F.
2000-09-01
We present a theory describing a recently introduced model of an evolving, adaptive system in which agents compete to be in the minority. The agents themselves are able to evolve their strategies over time in an attempt to improve their performance. The theory explicitly demonstrates the self-interaction, or market impact, that agents in such systems experience.
EXPECT: Explicit Representations for Flexible Acquisition
NASA Technical Reports Server (NTRS)
Swartout, Bill; Gil, Yolanda
1995-01-01
To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we need to change the architecture of the knowledge-based systems we create so that their structure will provide better support for acquisition. Current acquisition tools permit users to modify factual knowledge, but they provide limited support for modifying problem-solving knowledge. In this paper, the authors argue that this limitation (and others) stems from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation of problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.
NASA Astrophysics Data System (ADS)
Zhang, Yu-Ping; Yu, Lan; Wei, Guang-Mei
2018-02-01
Under investigation in this paper, with symbolic computation, is a variable-coefficient Sasa-Satsuma equation (SSE), which can describe ultrashort pulses in optical fiber communications and the propagation of deep ocean waves. By virtue of the extended Ablowitz-Kaup-Newell-Segur system, a Lax pair for the model is directly constructed. Based on the obtained Lax pair, an auto-Bäcklund transformation is provided, and the explicit one-soliton solution is then obtained. Meanwhile, an infinite number of conservation laws in explicit recursion form are derived to indicate its integrability in the Liouville sense. Furthermore, an exact explicit rogue wave (RW) solution is presented by use of a Darboux transformation. In addition to the double-peak structure and an analog of the Peregrine soliton, the RW can exhibit graphically an intriguing twisted rogue-wave (TRW) pair that involves four well-defined zero-amplitude points.
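For orientation, the constant-coefficient Sasa-Satsuma equation, of which the paper studies a variable-coefficient generalization, is commonly written as

```latex
i q_t + \tfrac{1}{2} q_{xx} + |q|^{2} q
  + i\varepsilon \left[ q_{xxx} + 6\,|q|^{2} q_x + 3\,q\left( |q|^{2} \right)_x \right] = 0 ,
```

where ε measures the higher-order dispersive and nonlinear corrections to the standard nonlinear Schrödinger equation; allowing the coefficients to vary with the evolution variable yields the model treated above.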
Identification of internal properties of fibres and micro-swimmers
NASA Astrophysics Data System (ADS)
Plouraboué, Franck; Thiam, E. Ibrahima; Delmotte, Blaise; Climent, Eric
2017-01-01
In this paper, we address the identifiability of constitutive parameters of passive or active micro-swimmers. We first present a general framework for describing fibres or micro-swimmers using a bead-model description. Using a kinematic constraint formulation to describe fibres, flagella or cilia, we find an explicit linear relationship between elastic constitutive parameters and generalized velocities obtained from computed contact forces. This linear formulation then permits one to address identifiability conditions explicitly and to solve for parameter identification. We show that active forcing and passive parameters are each identifiable independently, but not simultaneously. We also provide unbiased estimators for generalized elastic parameters in the presence of Langevin-like forcing with Gaussian noise using a Bayesian approach. These theoretical results are illustrated in various configurations, showing the efficiency of the proposed approach for direct parameter identification. The convergence of the proposed estimators is successfully tested numerically.
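The linear-in-parameters structure described above lends itself to standard least-squares or Gaussian-prior (MAP) estimation; the toy sketch below illustrates the idea with a synthetic design matrix standing in for the bead-model relation between elastic parameters and generalized velocities (all quantities are placeholders, not the authors' formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model: observations y = A @ theta + noise
n_obs, n_par = 200, 4
A = rng.normal(size=(n_obs, n_par))           # placeholder design matrix
theta_true = np.array([2.0, -1.0, 0.5, 3.0])  # "elastic parameters" (made up)
sigma = 0.1                                   # Langevin-like Gaussian noise level
y = A @ theta_true + sigma * rng.normal(size=n_obs)

# Ordinary least squares (unbiased under zero-mean Gaussian noise)
theta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Bayesian MAP estimate with an isotropic Gaussian prior theta ~ N(0, tau^2 I)
tau = 10.0
theta_map = np.linalg.solve(A.T @ A + (sigma / tau) ** 2 * np.eye(n_par), A.T @ y)

print("OLS:", np.round(theta_ols, 3))
print("MAP:", np.round(theta_map, 3))
```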
On the Nexus of the Spatial Dynamics of Global Urbanization and the Age of the City
Scheuer, Sebastian; Haase, Dagmar; Volk, Martin
2016-01-01
A number of concepts exist regarding how urbanization can be described as a process. Understanding this process, which affects billions of people, and its future development in a spatial manner is imperative to address related issues such as human quality of life. The focus of spatially explicit studies on urbanization is typically a city, a particular urban region, or an agglomeration. However, gaps remain in spatially explicit global models. This paper addresses that issue by examining the spatial dynamics of urban areas over time, for a full coverage of the world. The presented model identifies past, present and potential future hotspots of urbanization as a function of an urban area's spatial variation and age, whose relation could be depicted both as a proxy and as a path of urban development. PMID:27490199
On the Nexus of the Spatial Dynamics of Global Urbanization and the Age of the City.
Scheuer, Sebastian; Haase, Dagmar; Volk, Martin
2016-01-01
A number of concepts exist regarding how urbanization can be described as a process. Understanding this process, which affects billions of people, and its future development in a spatial manner is imperative to address related issues such as human quality of life. The focus of spatially explicit studies on urbanization is typically a city, a particular urban region, or an agglomeration. However, gaps remain in spatially explicit global models. This paper addresses that issue by examining the spatial dynamics of urban areas over time, for a full coverage of the world. The presented model identifies past, present and potential future hotspots of urbanization as a function of an urban area's spatial variation and age, whose relation could be depicted both as a proxy and as a path of urban development.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
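As a point of reference, the classical two-subclass, two-sample CIR estimator that the generalized model contains as a special case is

```latex
\hat{N}_1 \;=\; \frac{R_x - R\,\hat{P}_2}{\hat{P}_1 - \hat{P}_2},
```

where P̂_1 and P̂_2 are the estimated proportions of subclass x before and after the removal period, R is the total known removal, and R_x is the removal of subclass x; the generalization discussed above replaces the underlying assumptions with a wider family of assumptions on how encounter probabilities vary among subclasses and over time.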
Relativistic quantum optics: The relativistic invariance of the light-matter interaction models
NASA Astrophysics Data System (ADS)
Martín-Martínez, Eduardo; Rodriguez-Lopez, Pablo
2018-05-01
In this article we discuss the invariance under general changes of reference frame of all the physical predictions of particle detector models in quantum field theory in general and, in particular, of those used in quantum optics to model atoms interacting with light. We find explicitly how the light-matter interaction Hamiltonians change under general coordinate transformations, and analyze the subtleties of the Hamiltonians commonly used to describe the light-matter interaction when relativistic motion is taken into account.
Two-dimensional habitat modeling in the Yellowstone/Upper Missouri River system
Waddle, T. J.; Bovee, K.D.; Bowen, Z.H.
1997-01-01
This study is being conducted to provide the aquatic biology component of a decision support system being developed by the U.S. Bureau of Reclamation. In an attempt to capture the habitat needs of Great Plains fish communities, we are looking beyond previous habitat modeling methods. Traditional habitat modeling approaches have relied on one-dimensional hydraulic models and lumped compositional habitat metrics to describe aquatic habitat. A broader range of habitat descriptors is available when both the composition and the configuration of habitats are considered. Habitat metrics that consider both composition and configuration can be adapted from terrestrial biology. These metrics are most conveniently accessed with spatially explicit descriptors of the physical variables driving habitat composition. Two-dimensional hydrodynamic models have advanced to the point that they may provide the spatially explicit description of physical parameters needed to address this problem. This paper reports progress to date on applying two-dimensional hydraulic and habitat models on the Yellowstone and Missouri Rivers and uses examples from the Yellowstone River to illustrate the configurational metrics as a new tool for assessing riverine habitats.
Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence
NASA Astrophysics Data System (ADS)
Laurie, J.; Bouchet, F.; Zaboronski, O.
2012-12-01
We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to obtain large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.
Nonlinear field equations for aligning self-propelled rods.
Peshkov, Anton; Aranson, Igor S; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-12-28
We derive a set of minimal and well-behaved nonlinear field equations describing the collective properties of self-propelled rods from a simple microscopic starting point, the Vicsek model with nematic alignment. Analysis of their linear and nonlinear dynamics shows good agreement with the original microscopic model. In particular, we derive an explicit expression for density-segregated, banded solutions, allowing us to develop a more complete analytic picture of the problem at the nonlinear level.
ERIC Educational Resources Information Center
Darongkamas, Jurai; John, Christopher; Walker, Mark James
2014-01-01
This paper proposes incorporating the concept of the "observing eye/I", from cognitive analytic therapy (CAT), to Hawkins and Shohet's seven modes of supervision, comprising their transtheoretical model of supervision. Each mode is described alongside explicit examples relating to CAT. This modification using a key idea from CAT (in…
Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.
Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D
2018-01-01
Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but not as well-described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
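A minimal sketch of the kind of Fitts' law fit reported above, using the classic index of difficulty ID = log2(2A/W); the amplitudes, widths and movement times below are made-up numbers, not the study's data.

```python
import numpy as np

# Illustrative target amplitudes A (force levels) and widths W (tolerances), plus
# hypothetical mean acquisition times MT; none of these are the study's data.
A_amp = np.array([2.0, 4.0, 8.0, 8.0, 16.0])
W = np.array([1.0, 1.0, 1.0, 0.5, 0.5])
MT = np.array([0.45, 0.58, 0.71, 0.83, 0.97])   # seconds (made up)

ID = np.log2(2.0 * A_amp / W)                   # classic Fitts index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)                    # linear fit MT = a + b * ID

MT_hat = a + b * ID
r2 = 1.0 - np.sum((MT - MT_hat) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, r^2 = {r2:.2f}")
```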
A Case Study in an Integrated Development and Problem Solving Environment
ERIC Educational Resources Information Center
Deek, Fadi P.; McHugh, James A.
2003-01-01
This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…
NASA Astrophysics Data System (ADS)
Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.
2018-06-01
Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment for which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first done, which showed an overestimation of the incoming solar radiation but satisfying results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the boundaries of the urban areas. Finally, the comparison with temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.
Explicit simulation of a midlatitude Mesoscale Convective System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, G.D.; Cotton, W.R.
1996-04-01
We have explicitly simulated the mesoscale convective system (MCS) observed on 23-24 June 1985 during PRE-STORM, the Preliminary Regional Experiment for the Stormscale Operational and Research Meteorology Program. Stensrud and Maddox (1988), Johnson and Bartels (1992), and Bernstein and Johnson (1994) are among the researchers who have investigated various aspects of this MCS event. We have performed this MCS simulation (and a similar one of a tropical MCS; Alexander and Cotton 1994) in the spirit of the Global Energy and Water Cycle Experiment Cloud Systems Study (GCSS), in which cloud-resolving models are used to assist in the formulation and testing of cloud parameterization schemes for larger-scale models. In this paper, we describe (1) the nature of our 23-24 June MCS simulation and (2) our efforts to date in using our explicit MCS simulations to assist in the development of a GCM parameterization for mesoscale flow branches. The paper is organized as follows. First, we discuss the synoptic situation surrounding the 23-24 June PRE-STORM MCS, followed by a discussion of the model setup and the results of our simulation. We then discuss the use of our MCS simulations in developing a GCM parameterization for mesoscale flow branches and summarize our results.
NASA Astrophysics Data System (ADS)
Wiedemair, W.; Tuković, Ž.; Jasak, H.; Poulikakos, D.; Kurtcuoglu, V.
2012-02-01
The complex interaction between an ultrasound-driven microbubble and an enclosing capillary microvessel is investigated by means of a coupled, multi-domain numerical model using the finite volume formulation. This system is of interest in the study of transient blood-brain barrier disruption (BBBD) for drug delivery applications. The compliant vessel structure is incorporated explicitly as a distinct domain described by a dedicated physical model. Red blood cells (RBCs) are taken into account as elastic solids in the blood plasma. We report the temporal and spatial development of transmural pressure (Ptm) and wall shear stress (WSS) at the luminal endothelial interface, both of which are candidates for the yet unknown mediator of BBBD. The explicit introduction of RBCs shapes the Ptm and WSS distributions and their derivatives markedly. While the peak values of these mechanical wall parameters are not affected considerably by the presence of RBCs, a pronounced increase in their spatial gradients is observed compared to a configuration with blood plasma alone. The novelty of our work lies in the explicit treatment of the vessel wall, and in the modelling of blood as a composite fluid, which we show to be relevant for the mechanical processes at the endothelium.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Khain, A.; Simpson, S.; Johnson, D.; Li, X.; Remer, L.
2003-01-01
Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region and in the mid-latitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Besides the initial differences in aerosol concentration, preliminary results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for the two cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collectional growth, leading to a broader size spectrum in comparison to the high CCN case.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Khain, A.; Simpson, S.; Johnson, D.; Li, X.; Remer, L.
2003-01-01
Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Besides the initial differences in aerosol concentration, preliminary results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for the two cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collectional growth, leading to a broader size spectrum in comparison to the high CCN case.
ERIC Educational Resources Information Center
Doabler, Christian T.; Fien, Hank
2013-01-01
This article describes the essential instructional elements necessary for delivering explicit mathematics instruction to students with mathematics difficulties. Mathematics intervention research indicates that explicit instruction is one of the most effective instructional approaches for teaching students with or at risk for math difficulties.…
Work and information processing in a solvable model of Maxwell's demon.
Mandal, Dibyendu; Jarzynski, Christopher
2012-07-17
We describe a minimal model of an autonomous Maxwell demon, a device that delivers work by rectifying thermal fluctuations while simultaneously writing information to a memory register. We solve exactly for the steady-state behavior of our model, and we construct its phase diagram. We find that our device can also act as a "Landauer eraser", using externally supplied work to remove information from the memory register. By exposing an explicit, transparent mechanism of operation, our model offers a simple paradigm for investigating the thermodynamics of information processing by small systems.
A New Constitutive Model for the Plastic Flow of Metals at Elevated Temperatures
NASA Astrophysics Data System (ADS)
Spigarelli, S.; El Mehtedi, M.
2014-02-01
A new constitutive model based on the combination of the Garofalo and Hensel-Spittel equations has been used to describe the plastic flow behavior of an AA6005 aluminum alloy tested in torsion. The analysis of the experimental data by the constitutive model resulted in an excellent description of the flow curves. The model equation was then rewritten to explicitly include the Arrhenius term describing the temperature dependence of plastic deformation. The calculation indicated that the activation energy for hot working slowly decreased with increasing strain, leading to thermally activated flow softening. The combined use of the new equation and torsion testing led to the development of a constitutive model which can be safely adopted in a computer code to simulate forging or extrusion.
An age dependent model for radium metabolism in man.
Johnson, J R
1983-01-01
The model developed by a Task Group of Committee 2 of ICRP to describe Alkaline Earth Metabolism in Adult Man (ICRP Publication 20) has been modified so that recycling is handled explicitly, and retention in mineral bone is represented by second compartments rather than by the product of a power function and an exponential. This model has been extended to include all ages from birth to adult man, and has been coupled with modified "ICRP" lung and G.I. tract models so that activity in organs can be calculated as functions of time during or after exposures. These activities, and age dependent "specific effective energy" factors, are then used to calculate age dependent dose rates, and dose commitments. This presentation describes this work, with emphasis on the model parameters and results obtained for radium.
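The flavour of such an explicitly recycling compartment model can be sketched with a small linear ODE system; the compartments and transfer rates below are wholly hypothetical stand-ins, not the Task Group's age-dependent parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 3-compartment model: blood (0), surface bone (1), mineral bone (2),
# with explicit recycling back to blood and excretion from blood. Rates are made up.
k01, k10 = 0.5, 0.2     # blood <-> surface bone (1/day)
k12, k21 = 0.05, 0.01   # surface bone <-> mineral bone (1/day)
k_exc = 0.3             # excretion from blood (1/day)

def rhs(t, q):
    blood, surf, mineral = q
    return [
        -(k01 + k_exc) * blood + k10 * surf,
        k01 * blood - (k10 + k12) * surf + k21 * mineral,
        k12 * surf - k21 * mineral,
    ]

sol = solve_ivp(rhs, (0.0, 365.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 365.0, 5)
print(np.round(sol.sol(t).T, 4))    # activity fractions in each compartment over time
```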
Paletz, Susannah B F; Bearman, Christopher; Orasanu, Judith; Holbrook, Jon
2009-08-01
The presence of social psychological pressures on pilot decision making was assessed using qualitative analyses of critical incident interviews. Social psychological phenomena have long been known to influence attitudes and behavior but have not been highlighted in accident investigation models. Using a critical incident method, 28 pilots who flew in Alaska were interviewed. The participants were asked to describe a situation involving weather when they were pilot in command and found their skills challenged. They were asked to describe the incident in detail but were not explicitly asked to identify social pressures. Pressures were extracted from transcripts in a bottom-up manner and then clustered into themes. Of the 28 pilots, 16 described social psychological pressures on their decision making, specifically, informational social influence, the foot-in-the-door persuasion technique, normalization of deviance, and impression management and self-consistency motives. We believe accident and incident investigations can benefit from explicit inclusion of common social psychological pressures. We recommend specific ways of incorporating these pressures into the Human Factors Analysis and Classification System.
NASA Astrophysics Data System (ADS)
Valorso, Richard; Raventos-Duran, Teresa; Aumont, Bernard; Camredon, Marie; Ng, Nga L.; Seinfeld, John H.
2010-05-01
The evaluation of the impacts of secondary organics on pollution episodes, climate and the tropospheric oxidizing capacity requires modelling tools that track the identity and reactivity of organic carbon through the various stages down to the ultimate oxidation products. The fully explicit representation of hydrocarbon oxidation, from the initial compounds to the final product CO2, requires a very large number of chemical reactions and intermediate species, far in excess of the number that can be reasonably written manually. We developed a "self-generating approach" to explicitly describe (i) the gas-phase oxidation schemes of organic compounds under general tropospheric conditions and (ii) the partitioning of secondary organics between gas and condensed phases. This approach was codified in a computer program, GECKO-A (Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere). This method allows prediction of the multiphase mass budget using first principles. However, due to computational limitations, fully explicit chemical schemes can only be generated for species up to C8. We recently implemented a reduction protocol in GECKO-A to allow the generation of oxidation schemes for long-chain organics. This protocol was applied to develop highly detailed oxidation schemes for biogenic compounds. The relevance of the generated schemes was assessed using experiments performed in the Caltech smog chamber under various NOx conditions. The first results show a systematic overestimation of the simulated SOA concentrations by GECKO-A. Several hypotheses were tested to find the origin of the discrepancies between model and measurements.
Jouriles, Ernest N; McDonald, Renee; Mueller, Victoria; Grych, John H
2012-03-01
This article describes a conceptual model of cognitive and emotional processes proposed to mediate the relation between youth exposure to family violence and teen dating violence perpetration. Explicit beliefs about violence, internal knowledge structures, and executive functioning are hypothesized as cognitive mediators, and their potential influences upon one another are described. Theory and research on the role of emotions and emotional processes in the relation between youths' exposure to family violence and teen dating violence perpetration are also reviewed. We present an integrated model that highlights how emotions and emotional processes work in tandem with hypothesized cognitive mediators to predict teen dating violence.
Computational neuroanatomy: ontology-based representation of neural components and connectivity.
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-02-05
A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
Local models of astrophysical discs
NASA Astrophysics Data System (ADS)
Latter, Henrik N.; Papaloizou, John
2017-12-01
Local models of gaseous accretion discs have been successfully employed for decades to describe an assortment of small-scale phenomena, from instabilities and turbulence, to dust dynamics and planet formation. For the most part, they have been derived in a physically motivated but essentially ad hoc fashion, with some of the mathematical assumptions never made explicit nor checked for consistency. This approach is susceptible to error, and it is easy to derive local models that support spurious instabilities or fail to conserve key quantities. In this paper we present rigorous derivations, based on an asymptotic ordering, and formulate a hierarchy of local models (incompressible, Boussinesq and compressible), making clear which is best suited for a particular flow or phenomenon, while spelling out explicitly the assumptions and approximations of each. We also discuss the merits of the anelastic approximation, emphasizing that anelastic systems struggle to conserve energy unless strong restrictions are imposed on the flow. The problems encountered by the anelastic approximation are exacerbated by the disc's differential rotation, but also attend non-rotating systems such as stellar interiors. We conclude with a defence of local models and their continued utility in astrophysical research.
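For concreteness, the incompressible member of such a hierarchy is usually written in the shearing-box (Hill) form below; this is the standard local model that the paper rederives via an asymptotic ordering, quoted here as general background rather than from the paper itself:

```latex
\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}
  + 2\Omega\,\hat{\mathbf{z}}\times\mathbf{u}
  = -\frac{1}{\rho}\nabla p + 2 q \Omega^{2} x\,\hat{\mathbf{x}}
  + \nu \nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0,
```

with Ω the local orbital frequency and q = −d ln Ω / d ln r the shear rate (q = 3/2 for a Keplerian disc); the Boussinesq and compressible members add buoyancy and density dynamics on top of this skeleton.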
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. The polygons are formulated so that, for viscous fluids and at less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method with the ERP model.
Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.
Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep
2009-08-31
Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity for performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link the unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future researchers.
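As a small example of the whole-system, phenomenological family contrasted above with IbMs, the sketch below evaluates a Zwietering-type modified Gompertz primary growth curve; the functional form is the commonly quoted one and the parameter values are illustrative, not fitted to any data set.

```python
import numpy as np

def gompertz_log_growth(t, A, mu_max, lag):
    """Zwietering-type modified Gompertz curve for y = ln(N/N0).

    A       -- asymptotic increase in ln(N/N0)
    mu_max  -- maximum specific growth rate (1/h)
    lag     -- lag time (h)
    """
    e = np.exp(1.0)
    return A * np.exp(-np.exp(mu_max * e / A * (lag - t) + 1.0))

# Illustrative parameters and time points
t = np.linspace(0.0, 24.0, 7)                     # hours
y = gompertz_log_growth(t, A=15.0, mu_max=1.2, lag=3.0)
print(np.round(y, 2))                             # predicted ln(N/N0) over time
```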
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Global D-brane models with stabilised moduli and light axions
NASA Astrophysics Data System (ADS)
Cicoli, Michele
2014-03-01
We review recent attempts to try to combine global issues of string compactifications, like moduli stabilisation, with local issues, like semi-realistic D-brane constructions. We list the main problems encountered, and outline a possible solution which allows globally consistent embeddings of chiral models. We also argue that this stabilisation mechanism leads to an axiverse. We finally illustrate our general claims in a concrete example where the Calabi-Yau manifold is explicitly described by toric geometry.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.
2004-01-01
Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles (i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail). Each type is described by a special size distribution function containing many categories (i.e. 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, in the sub-tropics (Florida) and in the mid-latitude using identical thermodynamic conditions but with different concentrations of CCN: a low 'clean' concentration and a high 'dirty' concentration.
Three Dimensional Explicit Model for Cometary Tail Ions Interactions with Solar Wind
NASA Astrophysics Data System (ADS)
Al Bermani, M. J. F.; Alhamed, S. A.; Khalaf, S. Z.; Ali, H. Sh.; Selman, A. A.
2009-06-01
The different interactions between cometary tail and solar wind ions are studied in the present paper based on the three-dimensional Lax explicit method. The model used in this research is based on the continuity equations describing the cometary tail-solar wind interactions, and a three-dimensional system is considered. Simulation of the physical system was achieved using a computer code written in Matlab 7.0. The parameters studied here assume a Halley-type comet and include the particle density rho, the particle velocity v, the magnetic field strength B, the dynamic pressure p and the internal energy E. The results of the present research show that the interaction near the cometary nucleus is mainly affected by the new ions added to the plasma of the solar wind, which increase the average molecular weight and result in many unique characteristics of the cometary tail. These characteristics are explained in the presence of the IMF.
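The explicit Lax update used for such continuity equations is easiest to see in one dimension; the sketch below applies the Lax(-Friedrichs) scheme to a scalar conservation law with a simple linear flux as a stand-in for the full three-dimensional, multi-variable system of the paper.

```python
import numpy as np

# Explicit Lax(-Friedrichs) scheme for a scalar conservation law u_t + f(u)_x = 0,
# here with the simple flux f(u) = a*u (linear advection) as an illustration.
nx, nt = 200, 150
dx = 1.0 / nx
a = 1.0
dt = 0.8 * dx / abs(a)                 # CFL condition

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)    # initial density bump

def flux(u):
    return a * u

for _ in range(nt):
    f = flux(u)
    # Lax update: average of neighbours plus a centred flux difference (periodic domain)
    u = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) \
        - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1))

print("total 'mass' after the run:", u.sum() * dx)
```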
Cammi, R
2009-10-28
We present a general formulation of the coupled-cluster (CC) theory for a molecular solute described within the framework of the polarizable continuum model (PCM). The PCM-CC theory is derived in its complete form, called PTDE scheme, in which the correlated electronic density is used to have a self-consistent reaction field, and in an approximate form, called PTE scheme, in which the PCM-CC equations are solved assuming the fixed Hartree-Fock solvent reaction field. Explicit forms for the PCM-CC-PTDE equations are derived at the single and double (CCSD) excitation level of the cluster operator. At the same level, explicit equations for the analytical first derivatives of the PCM basic energy functional are presented, and analytical second derivatives are also discussed. The corresponding PCM-CCSD-PTE equations are given as a special case of the full theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skrypnyk, T., E-mail: taras.skrypnyk@unimib.it, E-mail: tskrypnyk@imath.kiev.ua
Using the technique of classical r-matrices and quantum Lax operators, we construct the most general form of the quantum integrable “n-level, many-mode” spin-boson Jaynes-Cummings-Dicke-type hamiltonians describing an interaction of a molecule of N n-level atoms with many modes of electromagnetic field and containing, in general, additional non-linear interaction terms. We explicitly obtain the corresponding quantum Lax operators and spin-boson analogs of the generalized Gaudin hamiltonians and prove their quantum commutativity. We investigate symmetries of the obtained models that are associated with the geometric symmetries of the classical r-matrices and construct the corresponding algebra of quantum integrals. We consider in detail three classes of non-skew-symmetric classical r-matrices with spectral parameters and explicitly obtain the corresponding quantum Lax operators and Jaynes-Cummings-Dicke-type hamiltonians depending on the considered r-matrix.
Revised model core potentials for third-row transition-metal atoms from Lu to Hg
NASA Astrophysics Data System (ADS)
Mori, Hirotoshi; Ueno-Noto, Kaori; Osanai, You; Noro, Takeshi; Fujiwara, Takayuki; Klobukowski, Mariusz; Miyoshi, Eisaku
2009-07-01
We have produced new relativistic model core potentials (spdsMCPs) for the third-row transition-metal atoms from Lu to Hg, treating explicitly the 5s and 5p electrons in addition to the 5d and 6s electrons, in the same manner as for the first- and second-row transition-metal atoms given in the previous Letters [Y. Osanai, M.S. Mon, T. Noro, H. Mori, H. Nakashima, M. Klobukowski, E. Miyoshi, Chem. Phys. Lett. 452 (2008) 210; Y. Osanai, E. Soejima, T. Noro, H. Mori, M.S. Mon, M. Klobukowski, E. Miyoshi, Chem. Phys. Lett. 463 (2008) 230]. Using suitable correlating functions with the split-valence MCP functions, we demonstrate that the present MCP basis sets show reasonable performance in describing the electronic structures of atoms and molecules, yielding accurate excitation energies for atoms and proper spectroscopic constants for Au2, Hg2, and AuH.
The use of tacit knowledge in occupational safety and health management systems.
Podgórski, Daniel
2010-01-01
A systematic approach to occupational safety and health (OSH) management and concepts of knowledge management (KM) have developed independently since the 1990s. Most KM models assume a division of knowledge into explicit and tacit. The role of tacit knowledge is stressed as necessary for higher performance in an enterprise. This article reviews literature on KM applications in OSH. Next, 10 sections of an OSH management system (OSH MS) are identified, in which creating and transferring tacit knowledge contributes significantly to prevention of occupational injuries and diseases. The roles of tacit knowledge in OSH MS are contrasted with those of explicit knowledge, but a lack of a model that would describe this process holistically is pointed out. Finally, examples of methods and tools supporting the use of KM in OSH MS are presented and topics of future research aimed at enhancing KM applications in OSH MS are proposed.
An introduction to the multisystem model of knowledge integration and translation.
Palmer, Debra; Kramlich, Debra
2011-01-01
Many nurse researchers have designed strategies to assist health care practitioners to move evidence into practice. While many have been identified as "models," most do not have a conceptual framework. They are unidirectional, complex, and difficult for novice research users to understand. These models have focused on empirical knowledge and ignored the importance of practitioners' tacit knowledge. The Communities of Practice conceptual framework allows for the integration of tacit and explicit knowledge into practice. This article describes the development of a new translation model, the Multisystem Model of Knowledge Integration and Translation, supported by the Communities of Practice conceptual framework.
Three-dimensional ``Mercedes-Benz'' model for water
NASA Astrophysics Data System (ADS)
Dias, Cristiano L.; Ala-Nissila, Tapio; Grant, Martin; Karttunen, Mikko
2009-08-01
In this paper we introduce a three-dimensional version of the Mercedes-Benz model to describe water molecules. In this model van der Waals interactions and hydrogen bonds are given explicitly through a Lennard-Jones potential and Gaussian orientation-dependent terms, respectively. At low temperature the model freezes, forming Ice-I, and it reproduces the main peaks of the experimental radial distribution function of water. In addition to these structural properties, the model also captures the thermodynamical anomalies of water: the anomalous density profile, the negative thermal expansivity, the large heat capacity, and the minimum in the isothermal compressibility.
Three-dimensional "Mercedes-Benz" model for water.
Dias, Cristiano L; Ala-Nissila, Tapio; Grant, Martin; Karttunen, Mikko
2009-08-07
In this paper we introduce a three-dimensional version of the Mercedes-Benz model to describe water molecules. In this model van der Waals interactions and hydrogen bonds are given explicitly through a Lennard-Jones potential and Gaussian orientation-dependent terms, respectively. At low temperature the model freezes, forming Ice-I, and it reproduces the main peaks of the experimental radial distribution function of water. In addition to these structural properties, the model also captures the thermodynamical anomalies of water: the anomalous density profile, the negative thermal expansivity, the large heat capacity, and the minimum in the isothermal compressibility.
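The interaction just described (a Lennard-Jones term plus Gaussian orientation-dependent hydrogen-bond terms) has the generic form sketched below. This is only a schematic of a Mercedes-Benz-type pair energy under assumed notation (bonding-arm unit vectors i_k and j_l, Gaussian width s); the precise three-dimensional arm geometry and parameter values are given in the paper and are not reproduced here.

```latex
% Schematic Mercedes-Benz-type pair interaction: an isotropic Lennard-Jones term
% plus a hydrogen-bond term that is Gaussian in the separation and in the
% alignment of the bonding arms with the inter-particle axis (notation assumed).
E_{ij} \;=\; 4\epsilon_{LJ}\!\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}
      - \left(\frac{\sigma}{r_{ij}}\right)^{6}\right]
\;+\; \epsilon_{HB}\, G(r_{ij}-r_{HB})
      \sum_{k,l} G\!\left(\hat{\mathbf{i}}_k\cdot\hat{\mathbf{r}}_{ij}-1\right)
                 G\!\left(\hat{\mathbf{j}}_l\cdot\hat{\mathbf{r}}_{ij}+1\right),
\qquad G(x)=e^{-x^{2}/2s^{2}} .
```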
20 Years after "The Ontogeny of Human Memory: A Cognitive Neuroscience Perspective," Where Are We?
ERIC Educational Resources Information Center
Jabès, Adeline; Nelson, Charles A.
2015-01-01
In 1995, Nelson published a paper describing a model of memory development during the first years of life. The current article seeks to provide an update on the original work published 20 years ago. Specifically, we review our current knowledge on the relation between the emergence of explicit memory functions throughout development and the…
ERIC Educational Resources Information Center
Mitton-Kukner, Jennifer; Munroe, Elizabeth; Graham, Deborah
2015-01-01
In this paper we describe the challenges we experience teaching an assessment course to pre-service teachers, as part of their studies in a bachelor of education program. As we teach the course, our intent is to explicitly model assessment practices that reflect a philosophy of "success for all," rather than "sort and rank."…
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid medium exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we have reported on an improved explicit model, referred to as "Virtual Source" (VS) diffuse approximation (DA), to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling the near-field photon migration in low-albedo medium. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparing with the Monte Carlo simulations, and further introduced in the image reconstruction of the Laminar Optical Tomography system.
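The virtual-source idea amounts to replacing the collimated beam by a few isotropic point sources placed along the incident direction and summing their diffusion-approximation responses. The sketch below uses the standard infinite-medium DA Green's function; the source depths and weights are free parameters here (in the paper they come from closed-form derivations and a fitting procedure), and the boundary/image-source corrections needed for a real reflectance geometry are omitted.

```python
# Hedged sketch of the virtual-source (VS) diffuse-approximation idea: sum the
# infinite-medium diffusion Green's functions of a few isotropic point sources
# placed along the incident axis. Depths and weights are illustrative
# assumptions, not the fitted/derived values from the paper.
import numpy as np

mu_a, mu_s_prime = 0.01, 1.0                 # absorption, reduced scattering [1/mm]
D = 1.0 / (3.0 * (mu_a + mu_s_prime))        # diffusion coefficient [mm]
mu_eff = np.sqrt(mu_a / D)                   # effective attenuation [1/mm]

def point_source_fluence(r):
    """Infinite-medium DA fluence of an isotropic point source at distance r."""
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

def vs_fluence(rho, z, depths=(0.5, 2.0), weights=(0.7, 0.3)):
    """Fluence at cylindrical position (rho, z) from virtual sources on the z-axis."""
    total = 0.0
    for z_s, w in zip(depths, weights):
        r = np.sqrt(rho**2 + (z - z_s) ** 2)
        total += w * point_source_fluence(r)
    return total

print(vs_fluence(rho=1.0, z=0.5))
```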
Rodriguez, Alex; Mokoema, Pol; Corcho, Francesc; Bisetty, Khrisna; Perez, Juan J
2011-02-17
The prediction capabilities of atomistic simulations of peptides are hampered by different difficulties, including the reliability of force fields, the treatment of the solvent or the adequate sampling of the conformational space. In this work, we have studied the conformational profile of the 10 residue miniprotein CLN025 known to exhibit a β-hairpin in its native state to understand the limitations of implicit methods to describe solvent effects and how these may be compensated by using different force fields. For this purpose, we carried out a thorough sampling of the conformational space of CLN025 in explicit solvent using the replica exchange molecular dynamics method as a sampling technique and compared the results with simulations of the system modeled using the analytical linearized Poisson-Boltzmann (ALPB) method with three different AMBER force fields: parm94, parm96, and parm99SB. The results show the peptide to exhibit a funnel-like free energy landscape with two minima in explicit solvent. In contrast, the higher minimum nearly disappears from the energy surface when the system is studied with an implicit representation of the solvent. Moreover, the different force fields used in combination with the ALPB method do not describe the system in the same manner. The results of this work suggest that the balance between intra- and intermolecular interactions is the cause of the differences between implicit and explicit solvent simulations in this system, stressing the role of the environment to define properly the conformational profile of a peptide in solution.
Progress toward an explicit mechanistic model for the light-driven pump, bacteriorhodopsin
NASA Technical Reports Server (NTRS)
Lanyi, J. K.
1999-01-01
Recent crystallographic information about the structure of bacteriorhodopsin and some of its photointermediates, together with a large amount of spectroscopic and mutational data, suggest a mechanistic model for how this protein couples light energy to the translocation of protons across the membrane. Now nearing completion, this detailed molecular model will describe the nature of the steric and electrostatic conflicts at the photoisomerized retinal, as well as the means by which it induces proton transfers in the two half-channels leading to the two membrane surfaces, thereby causing unidirectional, uphill transport.
NASA Astrophysics Data System (ADS)
Ireland, Lewis G.; Browning, Matthew K.
2018-04-01
Some low-mass stars appear to have larger radii than predicted by standard 1D structure models; prior work has suggested that inefficient convective heat transport, due to rotation and/or magnetism, may ultimately be responsible. We examine this issue using 1D stellar models constructed using Modules for Experiments in Stellar Astrophysics (MESA). First, we consider standard models that do not explicitly include rotational/magnetic effects, with convective inhibition modeled by decreasing a depth-independent mixing length theory (MLT) parameter α_MLT. We provide formulae linking changes in α_MLT to changes in the interior specific entropy, and hence to the stellar radius. Next, we modify the MLT formulation in MESA to mimic explicitly the influence of rotation and magnetism, using formulations suggested by Stevenson and MacDonald & Mullan, respectively. We find rapid rotation in these models has a negligible impact on stellar structure, primarily because a star's adiabat, and hence its radius, is predominantly affected by layers near the surface; convection is rapid and largely uninfluenced by rotation there. Magnetic fields, if they influenced convective transport in the manner described by MacDonald & Mullan, could lead to more noticeable radius inflation. Finally, we show that these non-standard effects on stellar structure can be fabricated using a depth-dependent α_MLT: a non-magnetic, non-rotating model can be produced that is virtually indistinguishable from one that explicitly parameterizes rotation and/or magnetism using the two formulations above. We provide formulae linking the radially variable α_MLT to these putative MLT reformulations.
Advancing reservoir operation description in physically based hydrological models
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo
2016-04-01
Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision making processes in response, e.g., to the fluctuations of energy prices and demands, the temporal unavailability of power plants or varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision making process by the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternate modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show to what extent the hydrological regime in the catchment is affected by different behavioural models and reservoir operating strategies.
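The target-level rule curve mentioned above lends itself to a compact illustration. The sketch below is a minimal, hypothetical rule-based release policy, not one of the behavioural models calibrated in the study: the release is driven by the deviation of the current reservoir level from a seasonal target level and clipped to assumed physical bounds (all names and parameter values are illustrative assumptions).

```python
# Minimal sketch of a target-level rule-curve release policy (illustrative only;
# not the behavioural models calibrated on the Lake Como system in the abstract).
import numpy as np

def rule_curve_release(level, day_of_year, target_curve, k=50.0,
                       release_min=0.0, release_max=500.0):
    """Release proportional to the deviation of the current level from the
    seasonal target level, clipped to assumed physical bounds.

    level        : current reservoir level [m]
    day_of_year  : 1..365
    target_curve : array of 365 target levels [m]
    k            : hypothetical feedback gain [m^3/s per m of deviation]
    """
    target = target_curve[day_of_year - 1]
    deviation = level - target          # positive when the reservoir is above target
    release = k * deviation             # simple proportional feedback rule [m^3/s]
    return float(np.clip(release, release_min, release_max))

# Example: a sinusoidal target curve peaking in late spring (purely illustrative).
days = np.arange(1, 366)
target_curve = 195.0 + 2.0 * np.sin(2.0 * np.pi * (days - 120) / 365.0)
print(rule_curve_release(level=197.1, day_of_year=180, target_curve=target_curve))
```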
Finding Your Literature Match - A Physics Literature Recommender System
NASA Astrophysics Data System (ADS)
Henneken, Edwin; Kurtz, Michael
2010-03-01
A recommender system is a filtering algorithm that helps you find the right match by offering suggestions based on your choices and information you have provided. A latent factor model is a successful approach. Here an item is characterized by a vector describing to what extent a product is described by each of N categories, and a person is characterized by an ``interest'' vector, based on explicit or implicit feedback by this user. The recommender system assigns ratings to new items and suggests items this user might be interested in. Here we present results of a recommender system designed to find recent literature of interest to people working in the field of solid state physics. Since we do not have explicit feedback, our user vector consists of (implicit) ``usage.'' Using a system of N keywords we construct normalized keyword vectors for articles based on the keywords of that article and its bibliography. The normalized ``interest'' vector is created by calculating the normalized frequency of keyword occurrence in the papers cited by the papers read.
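As a rough illustration of the keyword-vector construction described above, the sketch below builds normalized keyword vectors for articles and an implicit "interest" vector from keyword frequencies in papers a user has read, then ranks candidate articles by a dot product. The keyword list, articles, and scoring rule are illustrative assumptions, not the ADS implementation.

```python
# Hedged sketch of a keyword-based recommender: normalized keyword vectors for
# articles and a user "interest" vector built from implicit usage, ranked by
# dot product. Keywords, articles, and scores are illustrative assumptions.
import numpy as np

keywords = ["superconductivity", "graphene", "magnetism", "topological insulator"]

def keyword_vector(counts):
    """Normalize raw keyword counts (e.g. article keywords plus its bibliography)."""
    v = np.array([counts.get(k, 0) for k in keywords], dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Articles characterized by keyword occurrence counts (hypothetical data).
articles = {
    "paper_A": {"superconductivity": 5, "magnetism": 2},
    "paper_B": {"graphene": 4, "topological insulator": 3},
}

# Implicit "usage": keyword frequencies accumulated over papers the user read.
user_counts = {"superconductivity": 9, "magnetism": 4, "graphene": 1}
interest = keyword_vector(user_counts)

scores = {name: float(keyword_vector(c) @ interest) for name, c in articles.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # highest score first
```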
A General Reversible Hereditary Constitutive Model. Part 1; Theoretical Developments
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, S. M.
1997-01-01
Using an internal-variable formalism as a starting point, we describe the viscoelastic extension of a previously-developed viscoplasticity formulation of the complete potential structure type. It is mainly motivated by experimental evidence for the presence of rate/time effects in the so-called quasilinear, reversible, material response range. Several possible generalizations are described, in the general format of hereditary-integral representations for non-equilibrium, stress-type, state variables, both for isotropic as well as anisotropic materials. In particular, thorough discussions are given on the important issues of thermodynamic admissibility requirements for such general descriptions, resulting in a set of explicit mathematical constraints on the associated kernel (relaxation and creep compliance) functions. In addition, a number of explicit, integrated forms are derived, under stress and strain control to facilitate the parametric and qualitative response characteristic studies reported here, as well as to help identify critical factors in the actual experimental characterizations from test data that will be reported in Part II.
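The hereditary-integral representation referred to above can be illustrated, for the linear isotropic case, by the classical Boltzmann superposition form; this is a generic sketch of such a representation, not the specific anisotropic, non-equilibrium formulation developed in the paper. The thermodynamic admissibility constraints discussed in the abstract apply to kernels of this general type (the relaxation function E and creep compliance J).

```latex
% Generic hereditary (Boltzmann superposition) integrals relating stress-type
% state variables to strain history via relaxation and creep-compliance kernels:
\sigma(t) \;=\; \int_{0}^{t} E(t-\tau)\,\frac{\mathrm{d}\varepsilon(\tau)}{\mathrm{d}\tau}\,\mathrm{d}\tau ,
\qquad
\varepsilon(t) \;=\; \int_{0}^{t} J(t-\tau)\,\frac{\mathrm{d}\sigma(\tau)}{\mathrm{d}\tau}\,\mathrm{d}\tau .
```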
Requirements for the formal representation of pathophysiology mechanisms by clinicians
Helvensteijn, M.; Kokash, N.; Martorelli, I.; Sarwar, D.; Islam, S.; Grenon, P.; Hunter, P.
2016-01-01
Knowledge of multiscale mechanisms in pathophysiology is the bedrock of clinical practice. If quantitative methods, predicting patient-specific behaviour of these pathophysiology mechanisms, are to be brought to bear on clinical decision-making, the Human Physiome community and Clinical community must share a common computational blueprint for pathophysiology mechanisms. A number of obstacles stand in the way of this sharing—not least the technical and operational challenges that must be overcome to ensure that (i) the explicit biological meanings of the Physiome's quantitative methods to represent mechanisms are open to articulation, verification and study by clinicians, and that (ii) clinicians are given the tools and training to explicitly express disease manifestations in direct contribution to modelling. To this end, the Physiome and Clinical communities must co-develop a common computational toolkit, based on this blueprint, to bridge the representation of knowledge of pathophysiology mechanisms (a) that is implicitly depicted in electronic health records and the literature, with (b) that found in mathematical models explicitly describing mechanisms. In particular, this paper makes use of a step-wise description of a specific disease mechanism as a means to elicit the requirements of representing pathophysiological meaning explicitly. The computational blueprint developed from these requirements addresses the Clinical community goals to (i) organize and manage healthcare resources in terms of relevant disease-related knowledge of mechanisms and (ii) train the next generation of physicians in the application of quantitative methods relevant to their research and practice. PMID:27051514
Bidirectional holographic codes and sub-AdS locality
NASA Astrophysics Data System (ADS)
Yang, Zhao; Hayden, Patrick; Qi, Xiaoliang
Tensor networks implementing quantum error correcting codes have recently been used as toy models of the holographic duality which explicitly realize some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this talk. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a ''code'' subspace, (2) a set of bulk states playing the role of ''classical geometries'' which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and the ability to describe geometry at sub-AdS resolutions or even flat space. David and Lucile Packard Foundation.
Bidirectional holographic codes and sub-AdS locality
NASA Astrophysics Data System (ADS)
Yang, Zhao; Hayden, Patrick; Qi, Xiao-Liang
2016-01-01
Tensor networks implementing quantum error correcting codes have recently been used to construct toy models of holographic duality explicitly realizing some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this article. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a "code" subspace, (2) a set of bulk states playing the role of "classical geometries" which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and (5) the ability to describe geometry at sub-AdS resolutions or even flat space.
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this fact, turbulence and convection in the atmosphere has to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulence and convection motions have to be devised. Recently a variety of different models have been developed that attempt at simulating the atmosphere using variable resolution. A key problem however is that parameterizations are in general not explicitly aware of the resolution - the scale awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, that not only is in itself a multi-scale parameterization but it is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
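The EDMF decomposition mentioned above is conventionally written as the sum of a local downgradient (eddy-diffusivity) term and a non-local mass-flux term; the expression below is the standard textbook form and is included only as a reminder of the concept, not as the specific implementation discussed here.

```latex
% Standard EDMF decomposition of the turbulent flux of a scalar \phi:
\overline{w'\phi'} \;=\; -K_{\phi}\,\frac{\partial \overline{\phi}}{\partial z}
\;+\; M\left(\phi_{u} - \overline{\phi}\right),
% K_\phi : eddy diffusivity (local, small-scale mixing)
% M      : convective mass flux of the larger-scale plumes
% \phi_u : updraft (plume) value of the scalar
```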
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Joseph; Polly, Ben; Collis, Jon
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation doesn't need an explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
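The two feature-vector constructions described above (concatenation versus difference of product and reactant mixture descriptors) reduce to a few lines once per-mixture descriptor vectors are available. In the sketch below the descriptor vectors and the mixture-combination rule are placeholders standing in for the SiRMS simplex descriptors and the mixture scheme used in the paper.

```python
# Sketch of the two reaction representations described above: a reaction is a
# pair of mixtures (reactants, products); its feature vector is either the
# concatenation of the two mixture descriptors or their difference.
# Descriptor vectors and the mixture rule are placeholders, not SiRMS itself.
import numpy as np

def mixture_descriptor(component_descriptors, fractions=None):
    """Combine per-component descriptor vectors into one mixture descriptor
    (here a simple weighted sum; the actual SiRMS mixture scheme may differ)."""
    comps = np.asarray(component_descriptors, dtype=float)
    w = np.ones(len(comps)) if fractions is None else np.asarray(fractions, dtype=float)
    return (w[:, None] * comps).sum(axis=0)

reactants = mixture_descriptor([[1, 0, 2, 5], [0, 3, 1, 0]])   # hypothetical descriptors
products  = mixture_descriptor([[1, 1, 0, 4], [0, 2, 3, 1]])

x_concat = np.concatenate([reactants, products])   # "concatenated" representation
x_diff   = products - reactants                     # "difference" representation
print(x_concat)
print(x_diff)
```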
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of wealths of two agents is up for gambling, and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared to the agents. Among others, the exponential distribution appears as steady state in case of a uniformly distributed random fraction, while Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy tailed distribution.
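The pure gambling process described above is easy to simulate: two agents pool their wealth and split it according to a random fraction. The Monte Carlo sketch below (population size, number of interactions, and the uniform random fraction are illustrative choices) shows the relaxation toward the exponential steady state reported for the uniformly distributed case.

```python
# Monte Carlo sketch of the pure gambling process: two randomly chosen agents
# pool their wealth and a uniformly distributed random fraction of the pooled
# sum goes to the first agent. The abstract states that the steady state is
# exponential in this case. Sizes and step counts are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 5_000, 500_000
wealth = np.ones(n_agents)              # unit initial wealth; the mean is conserved

for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, size=2)
    if i == j:
        continue
    total = wealth[i] + wealth[j]
    r = rng.random()                    # uniform random fraction of the pooled sum
    wealth[i], wealth[j] = r * total, (1.0 - r) * total

# For an exponential distribution with unit mean, the sample mean of w**2 is ~2.
print(np.mean(wealth), np.mean(wealth**2))
```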
Gaufberg, Elizabeth; Bor, David; Dinardo, Perry; Krupat, Edward; Pine, Elizabeth; Ogur, Barbara; Hirsh, David A
2017-01-01
Graduates of Harvard Medical School's Cambridge Integrated Clerkship (CIC) describe several core processes that may underlie professional identity formation (PIF): encouragement to integrate pre-professional and professional identities; support for learner autonomy in discovering meaningful roles and responsibilities; learning through caring relationships; and a curriculum and an institutional culture that make values explicit. The authors suggest that the benefits of educational integrity accrue when idealistic learners inhabit an educational model that aligns with their own core values, and when professional development occurs in the context of an institutional home that upholds these values. Medical educators should clarify and animate principles within curricula and learning environments explicitly in order to support the professional identity formation of their learners.
Degree Distribution of Position-Dependent Ball-Passing Networks in Football Games
NASA Astrophysics Data System (ADS)
Narizuka, Takuma; Yamamoto, Ken; Yamazaki, Yoshihiro
2015-08-01
We propose a simple stochastic model describing the position-dependent ball-passing network in football (soccer) games. In this network, a player in a certain area in a divided field is a node, and a pass between two nodes corresponds to an edge. Our stochastic process model is characterized by the consecutive choice of a node depending on its intrinsic fitness. We derive an explicit expression for the degree distribution and find that the derived distribution reproduces that for actual data reasonably well.
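A minimal version of the fitness-based stochastic process described above can be sketched as follows: at each pass the receiving node (field area) is chosen with probability proportional to its intrinsic fitness, and directed edges record which passes occurred. The fitness values and network size are illustrative assumptions, not those matched to actual match data in the paper.

```python
# Sketch of a fitness-driven ball-passing network: each pass goes to a node
# chosen with probability proportional to its intrinsic fitness; directed edges
# record that at least one pass occurred between two nodes. Sizes and the form
# of the fitness values are illustrative assumptions only.
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_passes = 50, 5_000
fitness = rng.exponential(1.0, size=n_nodes)   # intrinsic fitness (assumed form)
p = fitness / fitness.sum()

current = rng.integers(n_nodes)
edges = set()
for _ in range(n_passes):
    nxt = rng.choice(n_nodes, p=p)             # consecutive choice by fitness
    if nxt != current:
        edges.add((int(current), int(nxt)))
    current = nxt

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(Counter(degree.values()))                # empirical degree distribution
```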
CDPOP: A spatially explicit cost distance population genetics program
Erin L. Landguth; S. A. Cushman
2010-01-01
Spatially explicit simulation of gene flow in complex landscapes is essential to explain observed population responses and provide a foundation for landscape genetics. To address this need, we wrote a spatially explicit, individual-based population genetics model (CDPOP). The model implements individual-based population modelling with Mendelian inheritance and k-allele...
High School Students' Meta-Modeling Knowledge
NASA Astrophysics Data System (ADS)
Fortus, David; Shwartz, Yael; Rosenfeld, Sherman
2016-12-01
Modeling is a core scientific practice. This study probed the meta-modeling knowledge (MMK) of high school students who study science but had not had any explicit prior exposure to modeling as part of their formal schooling. Our goals were to (A) evaluate the degree to which MMK is dependent on content knowledge and (B) assess whether the upper levels of the modeling learning progression defined by Schwarz et al. (2009) are attainable by Israeli K-12 students. Nine Israeli high school students studying physics, chemistry, biology, or general science were interviewed individually, once using a context related to the science subject that they were learning and once using an unfamiliar context. All the interviewees displayed MMK superior to that of elementary and middle school students, despite the lack of formal instruction on the practice. Their MMK was independent of content area, but their ability to engage in the practice of modeling was content dependent. This study indicates that, given proper support, the upper levels of the learning progression described by Schwarz et al. (2009) may be attainable by K-12 science students. The value of explicitly focusing on MMK as a learning goal in science education is considered.
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
Simple models for studying complex spatiotemporal patterns of animal behavior
NASA Astrophysics Data System (ADS)
Tyutyunov, Yuri V.; Titova, Lyudmila I.
2017-06-01
Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves of the media, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a little more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
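A minimal continuous form of the pursuit-evasion taxis system described above can be written as follows. This is a generic one-dimensional sketch in which prey and predator velocities are driven by gradients of the cues emitted by the other species, and the cue fields diffuse and decay; the specific functional forms and parameters of the paper's models may differ.

```latex
% Generic taxis-diffusion-reaction sketch (one space dimension, schematic):
% N = prey density, P = predator density, S_N, S_P = cues emitted by prey / predators.
\partial_t N = -\partial_x\!\left(N\,v_N\right) + d_N\,\partial_{xx} N, \qquad
v_N = -\kappa_N\,\partial_x S_P \quad\text{(prey flee the predator cue)},
\\
\partial_t P = -\partial_x\!\left(P\,v_P\right) + d_P\,\partial_{xx} P, \qquad
v_P = +\kappa_P\,\partial_x S_N \quad\text{(predators climb the prey cue)},
\\
\partial_t S_N = D_S\,\partial_{xx} S_N - \delta S_N + \alpha N, \qquad
\partial_t S_P = D_S\,\partial_{xx} S_P - \delta S_P + \beta P .
```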
Interatomic potentials in condensed matter via the maximum-entropy principle
NASA Astrophysics Data System (ADS)
Carlsson, A. E.
1987-09-01
A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a ``reference medium.'' Illustrations are given for Al-Cu alloys and a model transition metal.
Influence of kondo effect on the specific heat jump of anisotropic superconductors
NASA Astrophysics Data System (ADS)
Yoksan, S.
1986-01-01
A calculation for the specific heat jump of an anisotropic superconductor with Kondo impurities is presented. The impurities are treated within the Matsuura-Ichinose-Nagaoka framework and the anisotropy effect is described by the factorizable model of Markowitz and Kadanoff. We give explicit expressions for the change in specific heat jump due to anisotropy and impurities which can be tested experimentally.
Estimating the numerical diapycnal mixing in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex
2018-01-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
2016-01-01
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942
Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J
2016-02-09
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins.
Communication skills training: describing a new conceptual model.
Brown, Richard F; Bylund, Carma L
2008-01-01
Current research in communication in physician-patient consultations is multidisciplinary and multimethodological. As this research has progressed, a considerable body of evidence on the best practices in physician-patient communication has been amassed. This evidence provides a foundation for communication skills training (CST) at all levels of medical education. Although the CST literature has demonstrated that communication skills can be taught, one critique of this literature is that it is not always clear which skills are being taught and whether those skills are matched with those being assessed. The Memorial Sloan-Kettering Cancer Center Comskil Model for CST seeks to answer those critiques by explicitly defining the important components of a consultation, based on Goals, Plans, and Actions theories and sociolinguistic theory. Sequenced guidelines as a mechanism for teaching about particular communication challenges are adapted from these other methods. The authors propose that consultation communication can be guided by an overarching goal, which is achieved through the use of a set of predetermined strategies. Strategies are common in CST; however, strategies often contain embedded communication skills. These skills can exist across strategies, and the Comskil Model seeks to make them explicit in these contexts. Separate from the skills are process tasks and cognitive appraisals that need to be addressed in teaching. The authors also describe how assessment practices foster concordance between skills taught and those assessed through careful coding of trainees' communication encounters and direct feedback.
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction in the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
Emotion modelling towards affective pathogenesis.
Bas, James Le
2009-12-01
Objective: There is a need in psychiatry for models that integrate pathological states with normal systems. The interaction of arousal and emotion is the focus of an exploration of affective pathogenesis. Method: Given that the explicit causes of affective disorder remain nascent, methods of linking emotion and disorder are evaluated. Results: A network model of emotional families is presented, in which emotions exist as quantal gradients. Morbid emotional states are seen as the activation of distal emotion sites. The phenomenology of affective disorders is described with reference to this model. Recourse is made to non-linear dynamic theory. Conclusions: Metaphoric emotion models have face validity and may prove a useful heuristic.
Modelling the nonlinear behaviour of an underplatform damper test rig for turbine applications
NASA Astrophysics Data System (ADS)
Pesaresi, L.; Salles, L.; Jones, A.; Green, J. S.; Schwingshackl, C. W.
2017-02-01
Underplatform dampers (UPD) are commonly used in aircraft engines to mitigate the risk of high-cycle fatigue failure of turbine blades. The energy dissipated at the friction contact interface of the damper reduces the vibration amplitude significantly, and the couplings of the blades can also lead to significant shifts of the resonance frequencies of the bladed disk. The highly nonlinear behaviour of bladed discs constrained by UPDs requires an advanced modelling approach to ensure that the correct damper geometry is selected during the design of the turbine, and that no unexpected resonance frequencies and amplitudes will occur in operation. Approaches based on an explicit model of the damper in combination with multi-harmonic balance solvers have emerged as a promising way to predict the nonlinear behaviour of UPDs correctly, however rigorous experimental validations are required before approaches of this type can be used with confidence. In this study, a nonlinear analysis based on an updated explicit damper model having different levels of detail is performed, and the results are evaluated against a newly-developed UPD test rig. Detailed linear finite element models are used as input for the nonlinear analysis, allowing the inclusion of damper flexibility and inertia effects. The nonlinear friction interface between the blades and the damper is described with a dense grid of 3D friction contact elements which allow accurate capturing of the underlying nonlinear mechanism that drives the global nonlinear behaviour. The introduced explicit damper model showed a great dependence on the correct contact pressure distribution. The use of an accurate, measurement based, distribution, better matched the nonlinear dynamic behaviour of the test rig. Good agreement with the measured frequency response data could only be reached when the zero harmonic term (constant term) was included in the multi-harmonic expansion of the nonlinear problem, highlighting its importance when the contact interface experiences large normal load variation. The resulting numerical damper kinematics with strong translational and rotational motion, and the global blades frequency response were fully validated experimentally, showing the accuracy of the suggested high detailed explicit UPD modelling approach.
Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets
Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.
2018-01-01
Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r2 = 0.96) and implicit targets (r2 = 0.89), but not as well-described for explicit targets without vision (r2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
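For reference, Fitts' law is conventionally written with an index of difficulty derived from the target amplitude A and the target width (tolerance) W; the r² values quoted above refer to linear fits of this general form (the Shannon formulation log2(A/W + 1) is also in common use).

```latex
% Conventional Fitts' law: movement (or force-production) time as a linear
% function of the index of difficulty ID.
ID \;=\; \log_{2}\!\left(\frac{2A}{W}\right), \qquad
MT \;=\; a + b\,ID ,
% A : target amplitude (here, the target force level),
% W : target width (force tolerance),
% a, b : empirically fitted intercept and slope.
```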
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-01-01
This paper describes a dataset of 6284 land transactions prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade-off accessibility and local green space amenities. PMID:26958606
Optimization of wood plastic composite decks
NASA Astrophysics Data System (ADS)
Ravivarman, S.; Venkatesh, G. S.; Karmarkar, A.; Shivkumar N., D.; Abhilash R., M.
2018-04-01
Wood Plastic Composite (WPC) is a new class of natural fibre based composite material that contains plastic matrix reinforced with wood fibres or wood flour. In the present work, Wood Plastic Composite was prepared with 70-wt% of wood flour reinforced in polypropylene matrix. Mechanical characterization of the composite was done by carrying out laboratory tests such as tensile test and flexural test as per the American Society for Testing and Materials (ASTM) standards. A Computer Aided Design (CAD) model of the laboratory test specimen (tensile test) was created and explicit finite element analysis was carried out on the finite element model in the non-linear explicit FE code LS-DYNA. The piecewise linear plasticity (MAT 24) material model was identified as a suitable model in the LS-DYNA material library, describing the material behavior of the developed composite. The composite structures for decking application in the construction industry were then optimized for cross sectional area and distance between two successive supports (span length) by carrying out various numerical experiments in LS-DYNA. The optimized WPC deck (Elliptical channel-2 E10) has 45% lower weight than the baseline model (solid cross-section) considered in this study, with the load carrying capacity meeting the acceptance criterion (allowable deflection & stress) for outdoor decking application.
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-01-01
Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191
Multiscale modeling of porous ceramics using movable cellular automaton method
NASA Astrophysics Data System (ADS)
Smolin, Alexey Yu.; Smolin, Igor Yu.; Smolina, Irina Yu.
2017-10-01
The paper presents a multiscale model for porous ceramics based on the movable cellular automaton method, which is a particle method in novel computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with unique positions in space. As a result, we get the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behavior at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined earlier. If the pore size distribution function of the material has N maxima, we need to perform computations for N-1 levels in order to get the properties step by step from the lowest scale up to the macroscale. The proposed approach was applied to modeling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behavior of the model sample at the macroscale.
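The scale-bridging step described above (fitting Weibull statistics of effective properties at one scale and sampling from them to assign properties at the next) can be sketched as follows; the Weibull parameters, sample counts, and units are illustrative assumptions, not the values obtained for the zirconia ceramics in the paper.

```python
# Sketch of the scale-bridging step: fit Weibull statistics of an effective
# property (here strength) from lower-scale representative samples, then sample
# from that distribution to assign properties to automata at the next scale.
# All numbers are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are strengths [MPa] of representative lower-scale samples.
strengths = rng.weibull(8.0, size=30) * 400.0

# Crude Weibull fit from the moments of log(strength):
# Var(ln X) = pi^2 / (6 m^2) and E[ln X] = ln(scale) - gamma / m.
logs = np.log(strengths)
shape = 1.2825 / logs.std(ddof=1)              # pi / sqrt(6) divided by std of logs
scale = np.exp(logs.mean() + 0.5772 / shape)   # Euler-Mascheroni correction

# Assign effective strengths to automata of the next (coarser) scale level.
coarse_strengths = rng.weibull(shape, size=1000) * scale
print(f"fitted shape m = {shape:.2f}, scale = {scale:.1f} MPa")
print(f"mean coarse-scale strength = {coarse_strengths.mean():.1f} MPa")
```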
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.; Johnson, D.; Remer, L.
2004-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles [i.e. pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e. 33 bins). Atmospheric aerosols are also described using number density size distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and in the mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. In addition, differences and similarities between bulk microphysics and spectral-bin microphysical schemes will be examined and discussed.
The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.; Johnson, D.; Remer, L.
2004-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles [i.e. pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e. 33 bins). Atmospheric aerosols are also described using number density size distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep tropical clouds in the west Pacific warm pool region and in the mid-latitude continent with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. In addition, differences and similarities between bulk microphysics and spectral-bin microphysical schemes will be examined and discussed.
Torfs, Elena; Balemans, Sophie; Locatelli, Florent; Diehl, Stefan; Bürger, Raimund; Laurent, Julien; François, Pierre; Nopens, Ingmar
2017-03-01
Advanced 1-D models for Secondary Settling Tanks (SSTs) explicitly account for several phenomena that influence the settling process (such as hindered settling and compression settling). For each of these phenomena a valid mathematical expression needs to be selected and its parameters calibrated to obtain a model that can be used for operation and control. This is, however, a challenging task as these phenomena may occur simultaneously. Therefore, the presented work evaluates several available expressions for hindered settling based on long-term batch settling data. Specific attention is paid to the behaviour of these hindered settling functions in the compression region in order to evaluate how the modelling of sludge compression is influenced by the choice of a certain hindered settling function. The analysis shows that the exponential hindered settling forms, which are most commonly used in traditional SST models, not only account for hindered settling but partly lump other phenomena (compression) as well. This makes them unsuitable for advanced 1-D models that explicitly include each phenomenon in a modular way. A power-law function is shown to be more appropriate to describe the hindered settling velocity in advanced 1-D SST models.
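As an illustration of the comparison discussed above, the sketch below contrasts a commonly used exponential (Vesilind-type) hindered-settling function with a generic power-law-type form. The parameter values and the exact power-law expression evaluated in the paper are not reproduced here; the functions and numbers below are illustrative assumptions only.

```python
# Hedged comparison of two hindered-settling velocity functions: an exponential
# (Vesilind-type) form and a generic power-law-type form. Parameter values are
# illustrative assumptions, not the calibrated values from the study.
import numpy as np

def v_exponential(X, v0=7.0, r=0.4):
    """Vesilind-type exponential hindered-settling velocity [m/h], X in g/L."""
    return v0 * np.exp(-r * X)

def v_power_law(X, v0=7.0, X_bar=3.0, q=2.0):
    """Generic power-law-type hindered-settling velocity [m/h], X in g/L."""
    return v0 / (1.0 + (X / X_bar) ** q)

X = np.linspace(0.5, 12.0, 6)   # sludge concentrations [g/L]
for x, ve, vp in zip(X, v_exponential(X), v_power_law(X)):
    print(f"X = {x:5.2f} g/L   exponential: {ve:6.3f} m/h   power-law: {vp:6.3f} m/h")
```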
Electrostatic Origin of Salt-Induced Nucleosome Array Compaction
Korolev, Nikolay; Allahverdi, Abdollah; Yang, Ye; Fan, Yanping; Lyubartsev, Alexander P.; Nordenskiöld, Lars
2010-01-01
The physical mechanism of the folding and unfolding of chromatin is fundamentally related to transcription but is incompletely characterized and not fully understood. We experimentally and theoretically studied chromatin compaction by investigating the salt-mediated folding of an array made of 12 positioning nucleosomes with 177 bp repeat length. Sedimentation velocity measurements were performed to monitor the folding provoked by addition of cations Na+, K+, Mg2+, Ca2+, spermidine3+, Co(NH3)63+, and spermine4+. We found typical polyelectrolyte behavior, with the critical concentration of cation needed to bring about maximal folding covering a range of almost five orders of magnitude (from 2 μM for spermine4+ to 100 mM for Na+). A coarse-grained model of the nucleosome array based on a continuum dielectric description and including the explicit presence of mobile ions and charged flexible histone tails was used in computer simulations to investigate the cation-mediated compaction. The results of the simulations with explicit ions are in general agreement with the experimental data, whereas simple Debye-Hückel models are intrinsically incapable of describing chromatin array folding by multivalent cations. We conclude that the theoretical description of the salt-induced chromatin folding must incorporate explicit mobile ions that include ion correlation and ion competition effects. PMID:20858435
Superintegrability of geodesic motion on the sausage model
NASA Astrophysics Data System (ADS)
Arutyunov, Gleb; Heinze, Martin; Medina-Rincon, Daniel
2017-06-01
Reduction of the η-deformed sigma model on AdS_5× S5 to the two-dimensional squashed sphere (S^2)η can be viewed as a special case of the Fateev sausage model where the coupling constant ν is imaginary. We show that geodesic motion in this model is described by a certain superintegrable mechanical system with four-dimensional phase space. This is done by means of explicitly constructing three integrals of motion which satisfy the sl(2) Poisson algebra relations, albeit being non-polynomial in momenta. Further, we find a canonical transformation which transforms the Hamiltonian of this mechanical system to the one describing the geodesic motion on the usual two-sphere. By inverting this transformation we map geodesics on this auxiliary two-sphere back to the sausage model. This paper is a tribute to the memory of Prof Petr Kulish.
Effects of Explicit Instructions, Metacognition, and Motivation on Creative Performance
ERIC Educational Resources Information Center
Hong, Eunsook; O'Neil, Harold F.; Peng, Yun
2016-01-01
Effects of explicit instructions, metacognition, and intrinsic motivation on creative homework performance were examined in 303 Chinese 10th-grade students. Models that represent hypothesized relations among these constructs and trait covariates were tested using structural equation modelling. Explicit instructions geared to originality were…
Intrusive effects of implicitly processed information on explicit memory.
Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven
2002-02-01
This study described the interference of implicitly processed information with memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitate implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality for the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information and in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."
A Knowledge-Based Representation Scheme for Environmental Science Models
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
One of the primary methods available for studying environmental phenomena is the construction and analysis of computational models. We have been studying how artificial intelligence techniques can be applied to assist in the development and use of environmental science models within the context of NASA-sponsored activities. We have identified several high-utility areas as potential targets for research and development: model development; data visualization, analysis, and interpretation; model publishing and reuse, training and education; and framing, posing, and answering questions. Central to progress on any of the above areas is a representation for environmental models that contains a great deal more information than is present in a traditional software implementation. In particular, a traditional software implementation is devoid of any semantic information that connects the code with the environmental context that forms the background for the modeling activity. Before we can build AI systems to assist in model development and usage, we must develop a representation for environmental models that adequately describes a model's semantics and explicitly represents the relationship between the code and the modeling task at hand. We have developed one such representation in conjunction with our work on the SIGMA (Scientists' Intelligent Graphical Modeling Assistant) environment. The key feature of the representation is that it provides a semantic grounding for the symbols in a set of modeling equations by linking those symbols to an explicit representation of the underlying environmental scenario.
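As a purely hypothetical illustration of what "semantic grounding" of equation symbols could look like, the sketch below attaches units, physical meaning, and scenario context to the symbols of a toy relation. The class and field names are my own invention and are not SIGMA's actual representation.

```python
from dataclasses import dataclass

@dataclass
class ScenarioQuantity:
    """Explicit semantic grounding for one symbol in a modeling equation."""
    symbol: str           # name used in the equation/code
    quantity: str         # physical quantity it denotes
    units: str
    spatial_context: str  # where in the environmental scenario it applies

# Hypothetical grounding for a simple relation ET = k * Rn
groundings = [
    ScenarioQuantity("ET", "evapotranspiration flux", "mm/day", "canopy of study site"),
    ScenarioQuantity("Rn", "net radiation", "MJ/m^2/day", "canopy of study site"),
    ScenarioQuantity("k",  "empirical conversion coefficient", "mm m^2/MJ", "site calibration"),
]
for g in groundings:
    print(f"{g.symbol}: {g.quantity} [{g.units}] @ {g.spatial_context}")
```

The point of such a structure is that the code symbols are no longer bare identifiers: each one is tied to the environmental context the model is about.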
Description of the Hexadecapole Deformation Parameter in the sdg Interacting Boson Model
NASA Astrophysics Data System (ADS)
Liu, Yu-xin; Sun, Di; Wang, Jia-jun; Han, Qi-zhi
1998-04-01
The hexadecapole deformation parameter β4 of the rare-earth and actinide nuclei is investigated in the framework of the sdg interacting boson model. An explicit relation between the geometric hexadecapole deformation parameter β4 and the intrinsic deformation parameters ε4 and ε2 is obtained. The deformation parameters β4 of the rare-earths and actinides are determined without any free parameter. The calculated results agree well with experimental data. It is also shown that the SU(5) limit of the sdg interacting boson model can describe the β4 systematics as well as the SU(3) limit.
Berezinskii-Kosterlitz-Thouless transition in the time-reversal-symmetric Hofstadter-Hubbard model
NASA Astrophysics Data System (ADS)
Iskin, M.
2018-01-01
Assuming that two-component Fermi gases with opposite artificial magnetic fields on a square optical lattice are well described by the so-called time-reversal-symmetric Hofstadter-Hubbard model, we explore the thermal superfluid properties along with the critical Berezinskii-Kosterlitz-Thouless (BKT) transition temperature in this model over a wide range of its parameters. In particular, since our self-consistent BCS-BKT approach takes the multiband butterfly spectrum explicitly into account, it unveils how dramatically the interband contribution to the phase stiffness dominates the intraband one with an increasing interaction strength for any given magnetic flux.
Modelling parasite aggregation: disentangling statistical and ecological approaches.
Yakob, Laith; Soares Magalhães, Ricardo J; Gray, Darren J; Milinovich, Gabriel; Wardrop, Nicola; Dunning, Rebecca; Barendregt, Jan; Bieri, Franziska; Williams, Gail M; Clements, Archie C A
2014-05-01
The overdispersion in macroparasite infection intensity among host populations is commonly simulated using a constant negative binomial aggregation parameter. We describe an alternative to utilising the negative binomial approach and demonstrate important disparities in intervention efficacy projections that can come about from opting for pattern-fitting models that are not process-explicit. We present model output in the context of the epidemiology and control of soil-transmitted helminths due to the significant public health burden imposed by these parasites, but our methods are applicable to other infections with demonstrable aggregation in parasite numbers among hosts. Copyright © 2014. Published by Elsevier Ltd.
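For context, the snippet below shows the conventional constant-aggregation negative binomial simulation of worm burdens that the paper argues against relying on; the process-explicit alternative is not reproduced here. The mean burden and aggregation parameter k are illustrative.

```python
import numpy as np

def sample_burdens(mean_burden, k, n_hosts, rng=np.random.default_rng(0)):
    """Draw per-host parasite counts from a negative binomial with mean
    `mean_burden` and aggregation parameter k (smaller k = more aggregated)."""
    p = k / (k + mean_burden)
    return rng.negative_binomial(k, p, size=n_hosts)

burdens = sample_burdens(mean_burden=20.0, k=0.3, n_hosts=1000)
print(burdens.mean(), burdens.var())   # variance ~ m + m^2/k, far above the mean
```

Because k is held constant, any intervention is assumed to leave the shape of the aggregation untouched, which is exactly the pattern-fitting assumption the authors question.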
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Modeling spatial variation in avian survival and residency probabilities
Saracco, James F.; Royle, J. Andrew; DeSante, David F.; Gardner, Beth
2010-01-01
The importance of understanding spatial variation in processes driving animal population dynamics is widely recognized. Yet little attention has been paid to spatial modeling of vital rates. Here we describe a hierarchical spatial autoregressive model to provide spatially explicit year-specific estimates of apparent survival (phi) and residency (pi) probabilities from capture-recapture data. We apply the model to data collected on a declining bird species, Wood Thrush (Hylocichla mustelina), as part of a broad-scale bird-banding network, the Monitoring Avian Productivity and Survivorship (MAPS) program. The Wood Thrush analysis showed variability in both phi and pi among years and across space. Spatial heterogeneity in residency probability was particularly striking, suggesting the importance of understanding the role of transients in local populations. We found broad-scale spatial patterning in Wood Thrush phi and pi that lend insight into population trends and can direct conservation and research. The spatial model developed here represents a significant advance over approaches to investigating spatial pattern in vital rates that aggregate data at coarse spatial scales and do not explicitly incorporate spatial information in the model. Further development and application of hierarchical capture-recapture models offers the opportunity to more fully investigate spatiotemporal variation in the processes that drive population changes.
Luminance, Colour, Viewpoint and Border Enhanced Disparity Energy Model
Martins, Jaime A.; Rodrigues, João M. F.; du Buf, Hans
2015-01-01
The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations, and how they are able to extract disparity. This model uses explicit cell parameters to mathematically determine preferred cell disparities, like spatial frequencies, orientations, binocular phases and receptive field positions. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population, which encodes disparity information implicitly. This allows the population to learn how to decode disparities, in a similar way to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas. PMID:26107954
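A minimal one-dimensional sketch of the classical (phase-shift) disparity energy computation referred to here: quadrature Gabor simple cells in each eye, summed binocularly and squared to give a phase-invariant complex-cell energy. Filter parameters and the test stimulus are illustrative; this is the textbook model, not the trained implicit population described in the paper.

```python
import numpy as np

def gabor_pair(x, sigma=4.0, freq=0.1, phase=0.0):
    """Even/odd (quadrature) Gabor receptive fields."""
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2*np.pi*freq*x + phase), env * np.sin(2*np.pi*freq*x + phase)

def binocular_energy(img_L, img_R, x, phase_shift):
    """Complex-cell energy of a phase-disparity detector."""
    evL, odL = gabor_pair(x, phase=0.0)
    evR, odR = gabor_pair(x, phase=phase_shift)
    s_even = img_L @ evL + img_R @ evR      # binocular simple-cell responses
    s_odd  = img_L @ odL + img_R @ odR
    return s_even**2 + s_odd**2             # energy (invariant to stimulus phase)

x = np.arange(-16, 17, dtype=float)
rng = np.random.default_rng(1)
patch = rng.standard_normal(x.size)
img_L, img_R = patch, np.roll(patch, 2)     # right image shifted by 2 px (disparity)
for dphi in np.linspace(-np.pi, np.pi, 9):
    print(round(float(dphi), 2), round(float(binocular_energy(img_L, img_R, x, dphi)), 2))
```

Sweeping the interocular phase shift and reading off where the energy peaks is the "explicit parameter" route that the paper replaces with decoding learned from population responses.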
A numerical model of two-phase flow at the micro-scale using the volume-of-fluid method
NASA Astrophysics Data System (ADS)
Shams, Mosayeb; Raeini, Ali Q.; Blunt, Martin J.; Bijeljic, Branko
2018-03-01
This study presents a simple and robust numerical scheme to model two-phase flow in porous media where capillary forces dominate over viscous effects. The volume-of-fluid method is employed to capture the fluid-fluid interface whose dynamics is explicitly described based on a finite volume discretization of the Navier-Stokes equations. Interfacial forces are calculated directly on reconstructed interface elements such that the total curvature is preserved. The computed interfacial forces are explicitly added to the Navier-Stokes equations using a sharp formulation which effectively eliminates spurious currents. The stability and accuracy of the implemented scheme is validated on several two- and three-dimensional test cases, which indicate the capability of the method to model two-phase flow processes at the micro-scale. In particular we show how the co-current flow of two viscous fluids leads to greatly enhanced flow conductance for the wetting phase in corners of the pore space, compared to a case where the non-wetting phase is an inviscid gas.
Nonlinear Fano interferences in open quantum systems: An exactly solvable model
NASA Astrophysics Data System (ADS)
Finkelstein-Shapiro, Daniel; Calatayud, Monica; Atabek, Osman; Mujica, Vladimiro; Keller, Arne
2016-06-01
We obtain an explicit solution for the stationary-state populations of a dissipative Fano model, where a discrete excited state is coupled to a continuum set of states; both excited sets of states are reachable by photoexcitation from the ground state. The dissipative dynamic is described by a Liouville equation in Lindblad form and the field intensity can take arbitrary values within the model. We show that the population of the continuum states as a function of laser frequency can always be expressed as a Fano profile plus a Lorentzian function with effective parameters whose explicit expressions are given in the case of a closed system coupled to a bath as well as for the original Fano scattering framework. Although the solution is intricate, it can be elegantly expressed as a linear transformation of the kernel of a 4 ×4 matrix which has the meaning of an effective Liouvillian. We unveil key notable processes related to the optical nonlinearity and which had not been reported to date: electromagnetic-induced transparency, population inversions, power narrowing and broadening, as well as an effective reduction of the Fano asymmetry parameter.
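A sketch of the stated functional form, a Fano profile plus a Lorentzian sharing the same resonance, is given below. The effective-parameter expressions derived in the paper are not reproduced; the width, asymmetry parameter q, and amplitudes are arbitrary illustrative values.

```python
import numpy as np

def fano_plus_lorentzian(omega, omega0, gamma, q, a_fano, a_lor):
    """Continuum-state population vs. laser frequency as a Fano profile plus a
    Lorentzian at the same resonance (illustrative effective parameters)."""
    eps = 2.0 * (omega - omega0) / gamma          # reduced detuning
    fano = (q + eps)**2 / (1.0 + eps**2)
    lorentz = 1.0 / (1.0 + eps**2)
    return a_fano * fano + a_lor * lorentz

w = np.linspace(-10, 10, 7)
print(fano_plus_lorentzian(w, omega0=0.0, gamma=2.0, q=1.5, a_fano=1.0, a_lor=0.3))
```

In the paper the effective q, width, and amplitudes are functions of the field intensity and dissipation; here they are simply fixed numbers to show the shape of the decomposition.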
Schlosser, Florian; Moskaleva, Lyudmila V; Kremleva, Alena; Krüger, Sven; Rösch, Notker
2010-06-28
With a relativistic all-electron density functional method, we studied two anionic uranium(VI) carbonate complexes that are important for uranium speciation and transport in aqueous medium, the mononuclear tris(carbonato) complex [UO₂(CO₃)₃]⁴⁻ and the trinuclear hexa(carbonato) complex [(UO₂)₃(CO₃)₆]⁶⁻. Focusing on the structures in solution, we applied for the first time a full solvation treatment to these complexes. We approximated short-range effects by explicit aqua ligands and described long-range electrostatic interactions via a polarizable continuum model. Structures and vibrational frequencies of "gas-phase" models with explicit aqua ligands agree best with experiment. This is accidental because the continuum model of the solvent to some extent overestimates the electrostatic interactions of these highly anionic systems with the bulk solvent. The calculated free energy change when three mononuclear complexes associate to the trinuclear complex agrees well with experiment and supports the formation of the latter species upon acidification of a uranyl carbonate solution.
Quantum mechanical force field for water with explicit electronic polarization.
Han, Jaebeom; Mazack, Michael J M; Zhang, Peng; Truhlar, Donald G; Gao, Jiali
2013-08-07
A quantum mechanical force field (QMFF) for water is described. Unlike traditional approaches that use quantum mechanical results and experimental data to parameterize empirical potential energy functions, the present QMFF uses a quantum mechanical framework to represent intramolecular and intermolecular interactions in an entire condensed-phase system. In particular, the internal energy terms used in molecular mechanics are replaced by a quantum mechanical formalism that naturally includes electronic polarization due to intermolecular interactions and its effects on the force constants of the intramolecular force field. As a quantum mechanical force field, both intermolecular interactions and the Hamiltonian describing the individual molecular fragments can be parameterized to strive for accuracy and computational efficiency. In this work, we introduce a polarizable molecular orbital model Hamiltonian for water and for oxygen- and hydrogen-containing compounds, whereas the electrostatic potential responsible for intermolecular interactions in the liquid and in solution is modeled by a three-point charge representation that realistically reproduces the total molecular dipole moment and the local hybridization contributions. The present QMFF for water, which is called the XP3P (explicit polarization with three-point-charge potential) model, is suitable for modeling both gas-phase clusters and liquid water. The paper demonstrates the performance of the XP3P model for water and proton clusters and the properties of the pure liquid from about 900 × 10⁶ self-consistent-field calculations on a periodic system consisting of 267 water molecules. The unusual dipole derivative behavior of water, which is incorrectly modeled in molecular mechanics, is naturally reproduced as a result of an electronic structural treatment of chemical bonding by XP3P. We anticipate that the XP3P model will be useful for studying proton transport in solution and solid phases as well as across biological ion channels through membranes.
Frembgen-Kesner, Tamara; Andrews, Casey T; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A; Jain, Aakash; Olayiwola, Oluwatoni J; Weishaar, Mitch R; Elcock, Adrian H
2015-05-12
Recently, we reported the parametrization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral, and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral, and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downward in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multidomain proteins connected by flexible linkers.
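The snippet below shows the generic iterative Boltzmann inversion update underlying this kind of fitting: a tabulated potential is corrected by kT times the log-ratio of the current and target distributions. It is a schematic of the method, not the COFFDROP parameterization itself; the grid, target distribution, and scaling factor are illustrative.

```python
import numpy as np

def ibi_update(U, P_cg, P_target, kT=1.0, alpha=1.0, eps=1e-12):
    """One iterative Boltzmann inversion step on a tabulated potential U(r):
    U_new = U + alpha * kT * ln(P_cg / P_target), applied where both
    distributions have support."""
    mask = (P_cg > eps) & (P_target > eps)
    U_new = U.copy()
    U_new[mask] += alpha * kT * np.log(P_cg[mask] / P_target[mask])
    return U_new

# Toy example: push a flat trial potential toward a Gaussian target distribution.
r = np.linspace(0.1, 2.0, 100)
dr = r[1] - r[0]
P_target = np.exp(-(r - 1.0) ** 2 / 0.02)
P_target /= P_target.sum() * dr
P_cg = np.full_like(r, 1.0 / (r[-1] - r[0]))   # flat (unconverged) CG distribution
U = ibi_update(np.zeros_like(r), P_cg, P_target)
print(U[::20])
```

In practice the loop alternates such updates with new CG simulations until the angle, dihedral, and distance distributions match the all-atom targets, which is the scheme described in the abstract.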
2015-08-01
[Fragmentary report excerpt; only partial content is recoverable: a caption for Figure 4, "Data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment"; an acronym list (DDD: dichlorodiphenyldichloroethane; DDE: dichlorodiphenyldichloroethylene; DDT: dichlorodiphenyltrichloroethane; DoD: Department of Defense; ERM: truncated); and a truncated statement that the spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the ...]
Bergeron, Kim; Abdi, Samiya; DeCorby, Kara; Mensah, Gloria; Rempel, Benjamin; Manson, Heather
2017-11-28
There is limited research on capacity building interventions that include theoretical foundations. The purpose of this systematic review is to identify underlying theories, models and frameworks used to support capacity building interventions relevant to public health practice. The aim is to inform and improve capacity building practices and services offered by public health organizations. Four search strategies were used: 1) electronic database searching; 2) reference lists of included papers; 3) key informant consultation; and 4) grey literature searching. Inclusion and exclusion criteria are outlined; included papers focus on capacity building, learning plans, or professional development plans in combination with tools, resources, processes, procedures, steps, models, frameworks or guidelines; are described in a public health or healthcare setting, or in non-government, government, or community organizations as they relate to healthcare; and explicitly or implicitly mention a theory, model and/or framework that grounds the type of capacity building approach developed. Quality assessments were performed on all included articles. Data analysis included synthesizing, analyzing and presenting descriptive summaries and categorizing theoretical foundations according to which theory, model and/or framework was used and whether it was implied or explicitly identified. Nineteen articles were included in this review. A total of 28 theories, models and frameworks were identified. Of this number, two theories (Diffusion of Innovations and Transformational Learning), two models (Ecological and Interactive Systems Framework for Dissemination and Implementation) and one framework (Bloom's Taxonomy of Learning) were identified as the most frequently cited. This review identifies specific theories, models and frameworks to support capacity building interventions relevant to public health organizations. It provides public health practitioners with a menu of potentially usable theories, models and frameworks to support capacity building efforts. The findings also support the need for the use of theories, models or frameworks to be intentional, explicitly identified and referenced, and for how they were applied to the capacity building intervention to be clearly outlined.
NASA Astrophysics Data System (ADS)
Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.
2017-06-01
This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
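As a generic sketch of "feature selection by regularized regression and cross-validation", the snippet below fits a sparse linear operator to synthetic response data with scikit-learn's LassoCV. The feature construction and operators used by the authors are not reproduced; the data here are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical data: rows are DD-simulation snapshots, columns are candidate
# features (e.g. products of dislocation densities on slip-system pairs),
# y is the hardening-rate response to be explained.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
true_coef = np.zeros(30)
true_coef[[1, 4, 7]] = [0.8, -0.5, 0.3]
y = X @ true_coef + 0.05 * rng.standard_normal(200)

# L1 regularization performs the feature selection; the regularization
# strength is chosen by 5-fold cross-validation.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("selected features:", selected, "alpha:", model.alpha_)
```

Enforcing the crystal symmetries, as the paper does, would amount to additional constraints tying groups of coefficients together, which is not shown in this toy fit.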
NASA Technical Reports Server (NTRS)
Gibson, S. G.
1983-01-01
A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
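A minimal evaluation of one parametric bicubic patch is sketched below. The NASA programs represent surfaces as sets of parametric bicubic patches; the specific basis they use is not stated in this abstract, so a Bézier (Bernstein) basis is assumed purely for illustration.

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis evaluated at parameter t in [0, 1]."""
    return np.array([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])

def bicubic_patch(P, u, v):
    """Evaluate a bicubic Bezier patch.
    P: 4x4x3 array of control points; u, v: surface parameters in [0, 1]."""
    Bu, Bv = bernstein3(u), bernstein3(v)
    return np.einsum('i,ijk,j->k', Bu, P, Bv)

# Toy patch: a 4x4 control net lifted out of the plane z = 0.
P = np.zeros((4, 4, 3))
for i in range(4):
    for j in range(4):
        P[i, j] = (i / 3.0, j / 3.0,
                   0.25 * np.sin(np.pi * i / 3) * np.sin(np.pi * j / 3))
print(bicubic_patch(P, 0.5, 0.5))
```

Surface normals and mesh/surface intersections, which the programs also compute, follow from the parametric derivatives of the same patch representation.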
Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo
2015-01-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models
NASA Technical Reports Server (NTRS)
Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.
1996-01-01
An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
Fractional Stochastic Field Theory
NASA Astrophysics Data System (ADS)
Honkonen, Juha
2018-02-01
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
Sankar, Punnaivanam; Alain, Krief; Aghila, Gnanasekaran
2010-05-24
We have developed a model structure-editing tool, ChemEd, programmed in JAVA, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in fragment ontology to provide a conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.
An alternative model for a partially coherent elliptical dark hollow beam
NASA Astrophysics Data System (ADS)
Li, Xu; Wang, Fei; Cai, Yangjian
2011-04-01
An alternative theoretical model, named the partially coherent hollow elliptical Gaussian beam (HEGB), is proposed to describe a partially coherent beam with an elliptical dark hollow profile. An explicit expression for the propagation factors of a partially coherent HEGB is derived. Based on the generalized Collins formula, analytical formulae for the cross-spectral density and mean-squared beam width of a partially coherent HEGB, propagating through a paraxial ABCD optical system, are derived. Propagation properties of a partially coherent HEGB in free space are studied as a numerical example.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
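For the irreversibility measure mentioned here, the snippet below computes the information entropy production rate of a stationary Markov chain from its transition matrix, sigma = sum_ij pi_i P_ij ln[pi_i P_ij / (pi_j P_ji)], which vanishes exactly when detailed balance holds. The 3-state matrix is an arbitrary example, not an inferred maximum entropy model from spike data.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P with eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def entropy_production(P):
    """Entropy production rate of a stationary Markov chain:
    sum_ij pi_i P_ij ln( pi_i P_ij / (pi_j P_ji) ); zero iff detailed balance holds."""
    pi = stationary_distribution(P)
    flux = pi[:, None] * P
    mask = (flux > 0) & (flux.T > 0)
    return np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask]))

P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])
print(entropy_production(P))   # > 0 for this irreversible chain
```

The large deviation analysis in the paper then quantifies how such quantities fluctuate when the chain is estimated from finite samples.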
High Performance Programming Using Explicit Shared Memory Model on the Cray T3D
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)
1994-01-01
The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.
Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.
2015-01-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8 % of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the explicit/implicit thermodynamic cycle. PMID:26236174
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
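The bookkeeping of the thermodynamic cycle described above can be written in one line: the explicit-solvent free energy difference between two basins equals the implicit-solvent difference plus the localized implicit-to-explicit switching free energies at each basin. The numbers below are illustrative placeholders, not the alanine dipeptide results.

```python
# Thermodynamic-cycle arithmetic (illustrative numbers, kcal/mol):
#   dG_expl(A->B) = dG_impl(A->B) + dG_switch(B) - dG_switch(A),
# where dG_switch(X) is the localized implicit->explicit decoupling free
# energy computed with the system restrained near basin X.
dG_impl_AB  = 2.1          # implicit-solvent basin-to-basin difference
dG_switch_A = -5.4         # implicit -> explicit switching at basin A
dG_switch_B = -4.6         # implicit -> explicit switching at basin B

dG_expl_AB = dG_impl_AB + dG_switch_B - dG_switch_A
print(f"explicit-solvent free energy difference A->B: {dG_expl_AB:.2f} kcal/mol")
```

The barrier crossing happens only on the cheap implicit-solvent leg of the cycle, which is where the reported ~8% cost comes from.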
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (and then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.
2018-01-01
We review key mathematical models of the South African human immunodeficiency virus (HIV) epidemic from the early 1990s onwards. In our descriptions, we sometimes differentiate between the concepts of a model world and its mathematical or computational implementation. The model world is the conceptual realm in which we explicitly declare the rules – usually some simplification of ‘real world’ processes as we understand them. Computing details of informative scenarios in these model worlds is a task requiring specialist knowledge, but all other aspects of the modelling process, from describing the model world to identifying the scenarios and interpreting model outputs, should be understandable to anyone with an interest in the epidemic. PMID:29568647
2014-01-01
Pollinator decline has been linked to landscape change, through both habitat fragmentation and the loss of habitat suitable for the pollinators to live within. One method for exploring why landscape change should affect pollinator populations is to combine individual-level behavioural ecological techniques with larger-scale landscape ecology. A modelling framework is described that uses spatially-explicit individual-based models to explore the effects of individual behavioural rules within a landscape. The technique described gives a simple method for exploring the effects of the removal of wild corridors, and the creation of wild set-aside fields: interventions that are common to many national agricultural policies. The effects of these manipulations on central-place nesting pollinators are varied, and depend upon the behavioural rules that the pollinators are using to move through the environment. The value of this modelling framework is discussed, and future directions for exploration are identified. PMID:24795848
Rands, Sean A
2014-01-01
Pollinator decline has been linked to landscape change, through both habitat fragmentation and the loss of habitat suitable for the pollinators to live within. One method for exploring why landscape change should affect pollinator populations is to combine individual-level behavioural ecological techniques with larger-scale landscape ecology. A modelling framework is described that uses spatially-explicit individual-based models to explore the effects of individual behavioural rules within a landscape. The technique described gives a simple method for exploring the effects of the removal of wild corridors, and the creation of wild set-aside fields: interventions that are common to many national agricultural policies. The effects of these manipulations on central-place nesting pollinators are varied, and depend upon the behavioural rules that the pollinators are using to move through the environment. The value of this modelling framework is discussed, and future directions for exploration are identified.
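As a toy version of the kind of spatially explicit individual-based model described, the sketch below moves a single central-place forager by a simple rejection random walk over a gridded landscape, with and without a one-cell-wide wild corridor. The movement rule, grid, and parameters are my own minimal assumptions, not the behavioural rules explored in the paper.

```python
import numpy as np

def forage(landscape, nest, steps=200, rng=np.random.default_rng(0)):
    """Random walk of one central-place forager on a grid.
    landscape: 2-D array of 1 (wild habitat) / 0 (unsuitable); moves into
    unsuitable cells are rejected. Returns the number of wild cells visited."""
    pos, visited = np.array(nest), set()
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    for _ in range(steps):
        trial = pos + moves[rng.integers(4)]
        if (0 <= trial[0] < landscape.shape[0] and
                0 <= trial[1] < landscape.shape[1] and landscape[tuple(trial)]):
            pos = trial
            visited.add(tuple(pos))
    return len(visited)

field = np.zeros((40, 40), dtype=int)
field[:, :5] = 1                    # wild margin containing the nest
with_corridor = field.copy()
with_corridor[20, :] = 1            # add a one-cell-wide corridor across the field
print(forage(field, (20, 2)), forage(with_corridor, (20, 2)))
```

Swapping in different movement rules (e.g. biased or memory-based walks) and re-running the landscape manipulation is exactly the kind of comparison the framework is built for.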
A stochastic model for eye movements during fixation on a stationary target.
NASA Technical Reports Server (NTRS)
Vasudevan, R.; Phatak, A. V.; Smith, J. D.
1971-01-01
A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation taking into consideration the microsaccadic jumps pictured as being due to point processes and the high frequency muscle tremor, represented as a white noise. The transform of the probability density function for local area motion is obtained, leading to explicit expressions for their means and moments. Evaluation of these moments based on the model is comparable with experimental results.
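A minimal Euler-Maruyama sketch of the described local-area dynamics (drift toward the fixation point, white-noise tremor, and Poisson-timed microsaccadic jumps) is given below. The parameter values are illustrative and the Gaussian jump-size distribution is an assumption; the paper works with the analytical transform of the probability density rather than simulation.

```python
import numpy as np

def simulate_fixation(T=2.0, dt=1e-3, theta=20.0, sigma=0.05,
                      jump_rate=1.5, jump_scale=0.2, rng=np.random.default_rng(0)):
    """Langevin-like local-area eye position with microsaccadic jumps:
    dx = -theta*x*dt + sigma*dW + J*dN, with dN a Poisson process of rate jump_rate."""
    n = int(T / dt)
    x = np.zeros(n)
    for k in range(1, n):
        jump = 0.0
        if rng.random() < jump_rate * dt:                 # Poisson-timed microsaccade
            jump = jump_scale * rng.standard_normal()
        x[k] = (x[k-1] - theta * x[k-1] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal() + jump)
    return x

trace = simulate_fixation()
print(trace.mean(), trace.std())
```

The transitions between local areas of fixation, the higher level of the hierarchy, would be layered on top of this as a separate jump process.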
Fractional cable model for signal conduction in spiny neuronal dendrites
NASA Astrophysics Data System (ADS)
Vitali, Silvia; Mainardi, Francesco
2017-06-01
The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways introducing some fractional component into the classical cable model. The Cauchy problem associated with this kind of model has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we show how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable, which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
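Although the paper's solution is analytical (via the Efros theorem and Wright functions), a numerical feel for the Caputo operator of order α ∈ (0, 1) can be had from the Grünwald-Letnikov approximation sketched below, applied to f − f(0); a comparison against the known Caputo derivative of f(t) = t is included as a sanity check. The grid and the order α are illustrative.

```python
import numpy as np
from math import gamma

def caputo_gl(f, t, alpha):
    """Grunwald-Letnikov approximation of the Caputo derivative of order
    alpha in (0, 1) on a uniform grid t, using the GL derivative of
    (f - f(0)), which coincides with the Caputo derivative for this range."""
    h = t[1] - t[0]
    g = f - f[0]
    n = len(t)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                      # w_k = (-1)^k * binom(alpha, k)
        w[k] = w[k-1] * (1.0 - (alpha + 1.0) / k)
    return np.array([np.dot(w[:m+1], g[m::-1]) for m in range(n)]) / h**alpha

t = np.linspace(0.0, 1.0, 201)
alpha = 0.5
# compare against the exact Caputo derivative of f(t) = t:  t^(1-alpha)/Gamma(2-alpha)
exact = t[1:]**(1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(caputo_gl(t, t, alpha)[1:] - exact)))
```

This first-order scheme is only a check on intuition; the signalling-problem solutions themselves are given in closed integral form in the paper.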
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid medium exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as "Virtual Source" (VS) diffuse approximation (DA), to fit for low-albedo medium and short source-detector separation. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on close-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA approximation while considerably extending its validity in modeling the near-field photon migration in low-albedo medium. The superiority of the proposed VS-DA method to the established ones is demonstrated in comparison with Monte-Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
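The sketch below illustrates the "virtual source" idea in its simplest form: the reflectance of a semi-infinite medium is written as a weighted sum of standard diffusion-approximation contributions from isotropic point sources placed along the incident direction, each paired with an extrapolated-boundary image source. The dipole reflectance expression follows the commonly used Farrell-type form with no refractive-index mismatch, and the source depths and weights are arbitrary placeholders, not the fitted parameters of the 2VS-DA model.

```python
import numpy as np

def diffuse_reflectance(rho, z0, mua, musp):
    """Steady-state DA reflectance of one isotropic point source at depth z0
    in a semi-infinite medium (extrapolated-boundary image-source form)."""
    D = 1.0 / (3.0 * (mua + musp))
    mueff = np.sqrt(mua / D)
    zb = 2.0 * D                              # extrapolated boundary, no index mismatch
    r1 = np.sqrt(rho**2 + z0**2)
    r2 = np.sqrt(rho**2 + (z0 + 2*zb)**2)
    return (z0 * (mueff + 1/r1) * np.exp(-mueff*r1) / r1**2 +
            (z0 + 2*zb) * (mueff + 1/r2) * np.exp(-mueff*r2) / r2**2) / (4*np.pi)

def vs_da_reflectance(rho, sources, mua, musp):
    """'Virtual source' style superposition: weighted isotropic point sources
    distributed along the incident direction (weights/depths illustrative)."""
    return sum(w * diffuse_reflectance(rho, z, mua, musp) for w, z in sources)

mua, musp = 0.1, 1.0                          # 1/mm, a relatively low-albedo example
sources = [(0.7, 0.5 / musp), (0.3, 2.0 / musp)]
rho = np.linspace(0.2, 5.0, 5)                # source-detector separations [mm]
print(vs_da_reflectance(rho, sources, mua, musp))
```

The paper's contribution is precisely the fitting and parameterization of those weights and depths so that the superposition stays accurate in the near field; the fixed values above do not reproduce that optimization.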
Non-hydrostatic general circulation model of the Venus atmosphere
NASA Astrophysics Data System (ADS)
Rodin, Alexander V.; Mingalev, Igor; Orlov, Konstantin; Ignatiev, Nikolay
We present the first non-hydrostatic global circulation model of the Venus atmosphere based on the complete set of gas dynamics equations. The model employs a spatially uniform triangular mesh that avoids artificial damping of the dynamical processes in the polar regions, with altitude as the vertical coordinate. Energy conversion from the solar flux into atmospheric motion is described via explicitly specified heating and cooling rates or, alternatively, with the help of a radiation block based on a comprehensive treatment of Venus atmosphere spectroscopy, including line mixing effects in CO2 far-wing absorption. Momentum equations are integrated using a semi-Lagrangian explicit scheme that provides high accuracy of mass and energy conservation. Due to the high vertical grid resolution required by gas dynamics calculations, the model is integrated with a short time step of less than one second. The model reliably reproduces zonal superrotation, smoothly extending far below the cloud layer, tidal patterns at the cloud level and above, and a non-rotating, sun-synchronous global convective cell in the upper atmosphere. One of the most interesting features of the model is the development of polar vortices resembling those observed by Venus Express' VIRTIS instrument. Initial analysis of the simulation results confirms the hypothesis that thermal tides provide the main driver for the superrotation.
A Markov chain model for reliability growth and decay
NASA Technical Reports Server (NTRS)
Siegrist, K.
1982-01-01
A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure', and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
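A small numerical sketch of the special case follows: two states ('unrepaired', 'repaired'), three trial outcomes, outcome probabilities conditioned on the state, and a repair rule that moves the system to 'repaired' after an assignable-cause failure. The probabilities are illustrative, not from the paper; marginalizing over outcomes gives the state-to-state Markov chain, from which a steady state and long-run success probability follow.

```python
import numpy as np

states = ["unrepaired", "repaired"]
outcomes = ["inherent_failure", "assignable_failure", "success"]

# P(outcome | state)  (illustrative numbers)
p_out = np.array([[0.05, 0.30, 0.65],    # unrepaired
                  [0.05, 0.00, 0.95]])   # repaired (assignable cause removed)

# P(next state | state, outcome): an assignable-cause failure triggers redesign
p_next = np.zeros((2, 3, 2))
p_next[0, 0, 0] = 1.0   # unrepaired, inherent failure   -> stays unrepaired
p_next[0, 1, 1] = 1.0   # unrepaired, assignable failure -> repaired
p_next[0, 2, 0] = 1.0   # unrepaired, success            -> stays unrepaired
p_next[1, :, 1] = 1.0   # repaired stays repaired whatever the outcome

# Marginal state-to-state transition matrix and its steady state
P = np.einsum('so,son->sn', p_out, p_next)
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
long_run_reliability = p_out[:, 2] @ pi   # long-run probability of a successful trial
print(P, pi, long_run_reliability)
```

With these numbers the 'repaired' state is absorbing, so the chain illustrates pure reliability growth; allowing repairs to degrade would add decay in the same framework.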
A k-Omega Turbulence Model for Quasi-Three-Dimensional Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
1995-01-01
A two-equation k-omega turbulence model has been developed and applied to a quasi-three-dimensional viscous analysis code for blade-to-blade flows in turbomachinery. The code includes the effects of rotation, radius change, and variable stream sheet thickness. The flow equations are given and the explicit Runge-Kutta solution scheme is described. The k-omega model equations are also given and the upwind implicit approximate-factorization solution scheme is described. Three cases were calculated: transitional flow over a flat plate, a transonic compressor rotor, and a transonic turbine vane with heat transfer. Results were compared to theory, experimental data, and to results using the Baldwin-Lomax turbulence model. The two models compared reasonably well with the data and surprisingly well with each other. Although the k-omega model behaves well numerically and simulates effects of transition, freestream turbulence, and wall roughness, it was not decisively better than the Baldwin-Lomax model for the cases considered here.
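For reference, a standard incompressible Wilcox-type k-ω formulation is written out below; the quasi-three-dimensional code adds rotation, radius-change, and stream-sheet-thickness terms not shown here, and the exact constants and production term used in the paper may differ.

```latex
\nu_T = \frac{k}{\omega}, \qquad
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = P_k - \beta^* k\,\omega
  + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \sigma^*\nu_T\right)\frac{\partial k}{\partial x_j}\right],
\qquad
\frac{\partial \omega}{\partial t} + U_j \frac{\partial \omega}{\partial x_j}
  = \alpha\,\frac{\omega}{k}\,P_k - \beta\,\omega^2
  + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \sigma\nu_T\right)\frac{\partial \omega}{\partial x_j}\right],
```

with typical closure constants β* = 0.09, β = 3/40, α = 5/9, σ = σ* = 1/2 and production P_k ≈ ν_T S², where S is the mean strain-rate magnitude.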
Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien
2010-01-01
Background Chagas disease is a major parasitic disease in Latin America, prevented in part by vector control programs that reduce domestic populations of triatomines. However, the design of control strategies adapted to non-domiciliated vectors, such as Triatoma dimidiata, remains a challenge because it requires an accurate description of their spatio-temporal distributions, and a proper understanding of the underlying dispersal processes. Methodology/Principal Findings We combined extensive spatio-temporal data sets describing house infestation dynamics by T. dimidiata within a village, and spatially explicit population dynamics models in a selection model approach. Several models were implemented to provide theoretical predictions under different hypotheses on the origin of the dispersers and their dispersal characteristics, which we compared with the spatio-temporal pattern of infestation observed in the field. The best models fitted the dynamic of infestation described by a one year time-series, and also predicted with a very good accuracy the infestation process observed during a second replicate one year time-series. The parameterized models gave key insights into the dispersal of these vectors. i) About 55% of the triatomines infesting houses came from the peridomestic habitat, the rest corresponding to immigration from the sylvatic habitat, ii) dispersing triatomines were 5–15 times more attracted by houses than by peridomestic area, and iii) the moving individuals spread on average over rather small distances, typically 40–60 m/15 days. Conclusion/Significance Since these dispersal characteristics are associated with much higher abundance of insects in the periphery of the village, we discuss the possibility that spatially targeted interventions allow for optimizing the efficacy of vector control activities within villages. Such optimization could prove very useful in the context of limited resources devoted to vector control. PMID:20689823
NASA Astrophysics Data System (ADS)
Beneš, Michal; Pažanin, Igor
2018-03-01
This paper reports an analytical investigation of non-isothermal fluid flow in a thin (or long) vertical pipe filled with porous medium via asymptotic analysis. We assume that the fluid inside the pipe is cooled (or heated) by the surrounding medium and that the flow is governed by the prescribed pressure drop between pipe's ends. Starting from the dimensionless Darcy-Brinkman-Boussinesq system, we formally derive a macroscopic model describing the effective flow at small Brinkman-Darcy number. The asymptotic approximation is given by the explicit formulae for the velocity, pressure and temperature clearly acknowledging the effects of the cooling (heating) and porous structure. The theoretical error analysis is carried out to indicate the order of accuracy and to provide a rigorous justification of the effective model.
Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-06-10
In this paper, we develop a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigate the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example demonstrates the high accuracy of the pinhole ring diffraction model. This simulation method could be used for fast and accurate focusing analysis of a large photon sieve.
NASA Astrophysics Data System (ADS)
Braakhekke, Maarten; Rebel, Karin; Dekker, Stefan; Smith, Benjamin; Sutanudjaja, Edwin; van Beek, Rens; van Kampenhout, Leo; Wassen, Martin
2017-04-01
In up to 30% of the global land surface, ecosystems are potentially influenced by the presence of a shallow groundwater table. In these regions, upward water flux by capillary rise increases soil moisture availability in the root zone, which has a strong effect on evapotranspiration, vegetation dynamics, and fluxes of carbon and nitrogen. Most global hydrological models and several land surface models simulate groundwater table dynamics and their effects on land surface processes. However, these models typically have a relatively simplistic representation of vegetation and do not consider changes in vegetation type and structure. Dynamic global vegetation models (DGVMs) describe the land surface from an ecological perspective, combining a detailed description of vegetation dynamics and structure with biogeochemical processes, and are thus more appropriate for simulating the ecological and biogeochemical effects of groundwater interactions. However, currently virtually all DGVMs ignore these effects, assuming that water tables are too deep to affect soil moisture in the root zone. We have implemented a tight coupling between the dynamic global ecosystem model LPJ-GUESS and the global hydrological model PCR-GLOBWB, which explicitly simulates groundwater dynamics. This coupled model allows us to explicitly account for groundwater effects on terrestrial ecosystem processes at global scale. Results of global simulations indicate that groundwater strongly influences fluxes of water, carbon and nitrogen in many regions, adding up to a considerable effect at the global scale.
Application of State Analysis and Goal-based Operations to a MER Mission Scenario
NASA Technical Reports Server (NTRS)
Morris, John Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.
2006-01-01
State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the system behavior in terms of state variables and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper first describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.
Application of State Analysis and Goal-Based Operations to a MER Mission Scenario
NASA Technical Reports Server (NTRS)
Morris, J. Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.
2006-01-01
State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the behavior of states and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.
The Role of Aerosols on Precipitation Processes: Cloud Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Li, X.; Matsui, T.
2012-01-01
Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles [i.e. pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, the sub-tropics (Florida) and midlatitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for these cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collection growth, leading to a broader size spectrum in comparison to the high CCN case. Sensitivity tests were performed to identify the impact of ice processes, radiation and large-scale influence on cloud-aerosol interactive processes, especially regarding surface rainfall amounts and characteristics (i.e., heavy or convective versus light or stratiform types). In addition, an inert tracer was included to follow the vertical redistribution of aerosols by cloud processes. We also give a brief review of observational evidence on the role of aerosols in precipitation processes.
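As a small concrete aside, the snippet below builds the kind of 33-category mass-doubling bin grid commonly used in spectral-bin schemes of this family (each bin holds twice the particle mass of the previous one) and converts bin masses to drop diameters. The smallest-bin mass is an illustrative choice corresponding to roughly a 4-micrometre droplet, not necessarily the grid used in the GCE implementation.

```python
import numpy as np

RHO_W = 1000.0                   # liquid water density [kg/m^3]

def mass_doubling_bins(n_bins=33, m0=3.35e-14):
    """Bin-centre masses m_k = m0 * 2**k [kg] and the corresponding drop
    diameters, as in mass-doubling spectral-bin microphysics grids."""
    m = m0 * 2.0 ** np.arange(n_bins)
    d = (6.0 * m / (np.pi * RHO_W)) ** (1.0 / 3.0)
    return m, d

m, d = mass_doubling_bins()
print(f"smallest bin: {d[0]*1e6:.1f} um, largest bin: {d[-1]*1e3:.2f} mm")
```

A grid like this spans cloud droplets through raindrops in a single distribution, which is what lets the scheme track both mass and number concentration per size category as described above.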
InterSpread Plus: a spatial and stochastic simulation model of disease in animal populations.
Stevenson, M A; Sanson, R L; Stern, M W; O'Leary, B D; Sujau, M; Moles-Benfell, N; Morris, R S
2013-04-01
We describe the spatially explicit, stochastic simulation model of disease spread, InterSpread Plus, in terms of its epidemiological framework, operation, and mode of use. The input data required by the model, the method for simulating contact and infection spread, and methods for simulating disease control measures are described. Data and parameters that are essential for disease simulation modelling using InterSpread Plus are distinguished from those that are non-essential, and it is suggested that a rational approach to simulating disease epidemics using this tool is to start with core data and parameters, adding additional layers of complexity if and when the specific requirements of the simulation exercise require it. We recommend that simulation models of disease are best developed as part of epidemic contingency planning so decision makers are familiar with model outputs and assumptions and are well-positioned to evaluate their strengths and weaknesses to make informed decisions in times of crisis. Copyright © 2012 Elsevier B.V. All rights reserved.
Pentreath, R J; Woodhead, D S
2001-09-28
In order to demonstrate, explicitly, that the environment can be protected with respect to controlled sources of ionising radiation, it is essential to have a systematic framework within which dosimetry models for fauna and flora can be used. And because of the practical limitations on what could reasonably be modelled and the amount of information that could reasonably be obtained, it is also necessary to limit the application of such models to a 'set' of fauna and flora within a 'reference' context. This paper, therefore, outlines the factors that will need to be considered to select such 'reference' fauna and flora, and describes some of the factors and constraints necessary to develop the associated dosimetry models. It also describes some of the most basic environmental geometries within which the dose models could be set in order to make comparisons amongst different radiation sources.
van Tuijl, Lonneke A; de Jong, Peter J; Sportel, B Esther; de Hullu, Eva; Nauta, Maaike H
2014-03-01
A negative self-view is a prominent factor in most cognitive vulnerability models of depression and anxiety. Recently, there has been increased attention to differentiate between the implicit (automatic) and the explicit (reflective) processing of self-related evaluations. This longitudinal study aimed to test the association between implicit and explicit self-esteem and symptoms of adolescent depression and social anxiety disorder. Two complementary models were tested: the vulnerability model and the scarring effect model. Participants were 1641 first and second year pupils of secondary schools in the Netherlands. The Rosenberg Self-Esteem Scale, self-esteem Implicit Association Test and Revised Child Anxiety and Depression Scale were completed to measure explicit self-esteem, implicit self-esteem and symptoms of social anxiety disorder (SAD) and major depressive disorder (MDD), respectively, at baseline and two-year follow-up. Explicit self-esteem at baseline was associated with symptoms of MDD and SAD at follow-up. Symptomatology at baseline was not associated with explicit self-esteem at follow-up. Implicit self-esteem was not associated with symptoms of MDD or SAD in either direction. We relied on self-report measures of MDD and SAD symptomatology. Also, findings are based on a non-clinical sample. Our findings support the vulnerability model, and not the scarring effect model. The implications of these findings suggest support of an explicit self-esteem intervention to prevent increases in MDD and SAD symptomatology in non-clinical adolescents. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh
2009-05-01
Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical / molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is recently developed spatial distance sampling models that accommodate explicit models, known as Point Process models, describing the spatial distribution of individuals. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
Reber, Paul J
2013-08-01
Memory systems research has typically described the different types of long-term memory in the brain as either declarative versus non-declarative or implicit versus explicit. These descriptions reflect the difference between declarative, conscious, and explicit memory that is dependent on the medial temporal lobe (MTL) memory system, and all other expressions of learning and memory. The other type of memory is generally defined by an absence: either the lack of dependence on the MTL memory system (nondeclarative) or the lack of conscious awareness of the information acquired (implicit). However, definition by absence is inherently underspecified and leaves open questions of how this type of memory operates, its neural basis, and how it differs from explicit, declarative memory. Drawing on a variety of studies of implicit learning that have attempted to identify the neural correlates of implicit learning using functional neuroimaging and neuropsychology, a theory of implicit memory is presented that describes it as a form of general plasticity within processing networks that adaptively improve function via experience. Under this model, implicit memory will not appear as a single, coherent, alternative memory system but will instead be manifested as a principle of improvement from experience based on widespread mechanisms of cortical plasticity. The implications of this characterization for understanding the role of implicit learning in complex cognitive processes and the effects of interactions between types of memory will be discussed for examples within and outside the psychology laboratory. Copyright © 2013 Elsevier Ltd. All rights reserved.
Explicit filtering in large eddy simulation using a discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Brazell, Matthew J.
The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, where the solution is approximated using a polynomial basis so that the higher modes of the solution correspond to higher-order basis functions. By removing the high-order modes, the filtered solution contains low-order frequency content, much like the output of an explicit low-pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove energy accumulated in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant coefficient Smagorinsky model is overly dissipative; this is generally undesirable, although it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected. The dynamic Heinz model, which is based on an improved formulation, handles the laminar-turbulent transition region well while also showing additional robustness.
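The modal cut-off idea described above can be pictured with a few lines of linear algebra. The sketch below is an illustration only: the polynomial order, cut-off mode and test field are assumptions, not values from the thesis. It expands nodal data on one reference element in Legendre modes, zeroes the modes above the cut-off, and evaluates the filtered polynomial.

```python
import numpy as np
from numpy.polynomial import legendre

p = 8                                   # polynomial order within the element (assumed)
cutoff = 5                              # keep modes 0..cutoff-1 (assumed)

x = np.linspace(-1.0, 1.0, p + 1)       # nodal points on the reference element
u = np.tanh(5.0 * x) + 0.05 * np.sin(20.0 * x)   # smooth field plus high-frequency content

V = legendre.legvander(x, p)            # Vandermonde matrix of Legendre modes
coeffs = np.linalg.solve(V, u)          # nodal -> modal
coeffs[cutoff:] = 0.0                   # discard the high-order modes
u_filtered = legendre.legval(x, coeffs) # modal -> nodal (low-pass result)

print(np.round(u - u_filtered, 3))      # the content removed by the filter
```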
Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase
NASA Astrophysics Data System (ADS)
Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.
2012-09-01
The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (<2%) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.
Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase
NASA Astrophysics Data System (ADS)
Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.
2013-01-01
The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (yield less than 2% on carbon atom basis) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.
Bocedi, Greta; Reid, Jane M
2015-01-01
Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405
Molecular Dynamics based on a Generalized Born solvation model: application to protein folding
NASA Astrophysics Data System (ADS)
Onufriev, Alexey
2004-03-01
An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 Å (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
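For orientation, the polar solvation term in Generalized Born models is commonly written in the pairwise form introduced by Still and co-workers, with a Debye-Hückel factor approximating salt screening; the specific variant and parameterization used in the work above may differ from this textbook sketch.

```latex
% Canonical pairwise Generalized Born polar solvation energy with
% Debye-Hueckel salt screening (illustrative form only; the model described
% in the abstract may use a different parameterization of the Born radii R_i).
\Delta G_{\mathrm{pol}}
  = -\frac{1}{2}\sum_{i,j}
    \left(\frac{1}{\epsilon_{\mathrm{in}}}
          - \frac{e^{-\kappa f_{ij}}}{\epsilon_{\mathrm{w}}}\right)
    \frac{q_i q_j}{f_{ij}},
\qquad
f_{ij} = \sqrt{\,r_{ij}^{2} + R_i R_j
               \exp\!\left(-\frac{r_{ij}^{2}}{4 R_i R_j}\right)}
```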
Exploring healthcare communication models in private physiotherapy practice.
Hiller, Amy; Guillemin, Marilys; Delany, Clare
2015-10-01
This project explored whether models of healthcare communication are evident within patient-physiotherapist communication in the private practice setting. Using qualitative ethnographic methods, fifty-two patient-physiotherapist treatment sessions were observed and interviews with nine physiotherapists were undertaken. Data were analyzed using thematic analysis. In these clinical encounters physiotherapists led the communication. The communication was structured and focussed on physical aspects of the patient's presentation. These features were mediated via casual conversation and the use of touch to respond to the individual patient. Physiotherapists did not explicitly link their therapeutic communication style to established communication models. However, they described a purposeful approach to how they communicated within the treatment encounter. The communication occurring in the private practice physiotherapy treatment encounter is predominantly representative of a 'practitioner-centred' model. However, the subtle use of touch and casual conversation implicitly communicate competence and care, representative of a patient-centred model. Physiotherapists do not explicitly draw from theories of communication to inform their practice. Physiotherapists may benefit from further education to achieve patient-centred communication. Equally, the incorporation of casual conversation and the use of touch into theory of physiotherapy patient-centred communication would highlight these specific skills that physiotherapists already utilize in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Embedded-explicit emergent literacy intervention I: Background and description of approach.
Justice, Laura M; Kaderavek, Joan N
2004-07-01
This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.
Neal D. Niemuth; Michael E. Estey; Charles R. Loesch
2005-01-01
Conservation planning for birds is increasingly focused on landscapes. However, little spatially explicit information is available to guide landscape-level conservation planning for many species of birds. We used georeferenced 1995 Breeding Bird Survey (BBS) data in conjunction with land-cover information to develop a spatially explicit habitat model predicting the...
Explicit robust schemes for implementation of general principal value-based constitutive models
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.
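As a generic illustration of the principal-stretch setting (not the report's particular potential), a nonseparable polynomial strain energy and the standard principal Cauchy stresses of isotropic hyperelasticity can be written as follows; the coefficients and polynomial degree are placeholders.

```latex
% Illustrative form only: a nonseparable polynomial potential in the principal
% stretches and the standard principal Cauchy stresses; c_{pqr} and the
% polynomial degree are not those of the report.
W(\lambda_1,\lambda_2,\lambda_3)
   = \sum_{p,q,r} c_{pqr}\,\lambda_1^{\,p}\lambda_2^{\,q}\lambda_3^{\,r},
\qquad
\sigma_i = \frac{\lambda_i}{J}\,\frac{\partial W}{\partial \lambda_i},
\quad J = \lambda_1\lambda_2\lambda_3
```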
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-03-01
For the many-body-localized phase of random Majorana models, a general strong disorder real-space renormalization procedure known as RSRG-X (Pekker et al 2014 Phys. Rev. X 4 011052) is described to produce the whole set of excited states, via the iterative construction of the local integrals of motion (LIOMs). The RG rules are then explicitly derived for arbitrary quadratic Hamiltonians (free-fermion models) and for the Kitaev chain with local interactions involving even numbers of consecutive Majorana fermions. The emphasis is put on the advantages of the Majorana language over the usual quantum spin language to formulate unified RSRG-X rules.
Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.
2015-01-01
Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers. PMID:26574429
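The iterative Boltzmann inversion step at the heart of this parameterization has a compact form. The sketch below is a generic illustration, not COFFDROP code: the damping factor, temperature/units and the floor that guards the logarithm are assumptions.

```python
import numpy as np

KT = 2.494  # k_B*T in kJ/mol at ~300 K (assumed units; illustrative only)

def ibi_update(u_current, p_current, p_target, damping=0.2):
    """One iterative Boltzmann inversion step for a tabulated CG potential.

    u_current : current potential on a grid of the coordinate (distance/angle/dihedral)
    p_current : normalized distribution produced by CG simulation with u_current
    p_target  : normalized target distribution from all-atom explicit-solvent MD
    """
    eps = 1e-12                                   # avoids log(0) in empty bins
    correction = KT * np.log((p_current + eps) / (p_target + eps))
    return u_current + damping * correction

# usage: start from the Boltzmann-inverted target, then iterate until the CG
# distributions match the all-atom ones
# u0 = -KT * np.log(p_target + 1e-12)
# u1 = ibi_update(u0, p_from_cg_simulation, p_target)
```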
Spatially explicit multi-criteria decision analysis for managing vector-borne diseases
2011-01-01
The complex epidemiology of vector-borne diseases creates significant challenges in the design and delivery of prevention and control strategies, especially in light of rapid social and environmental changes. Spatial models for predicting disease risk based on environmental factors such as climate and landscape have been developed for a number of important vector-borne diseases. The resulting risk maps have proven value for highlighting areas for targeting public health programs. However, these methods generally only offer technical information on the spatial distribution of disease risk itself, which may be incomplete for making decisions in a complex situation. In prioritizing surveillance and intervention strategies, decision-makers often also need to consider spatially explicit information on other important dimensions, such as the regional specificity of public acceptance, population vulnerability, resource availability, intervention effectiveness, and land use. There is a need for a unified strategy for supporting public health decision making that integrates available data for assessing spatially explicit disease risk, with other criteria, to implement effective prevention and control strategies. Multi-criteria decision analysis (MCDA) is a decision support tool that allows for the consideration of diverse quantitative and qualitative criteria using both data-driven and qualitative indicators for evaluating alternative strategies with transparency and stakeholder participation. Here we propose a MCDA-based approach to the development of geospatial models and spatially explicit decision support tools for the management of vector-borne diseases. We describe the conceptual framework that MCDA offers as well as technical considerations, approaches to implementation and expected outcomes. We conclude that MCDA is a powerful tool that offers tremendous potential for use in public health decision-making in general and vector-borne disease management in particular. PMID:22206355
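A minimal weighted-sum aggregation, the simplest MCDA scoring rule, illustrates the kind of calculation such a framework performs per candidate strategy or per map cell; the criteria, weights and scores below are placeholders rather than values from the paper.

```python
import numpy as np

# Illustrative weighted-sum MCDA ranking of intervention strategies.
criteria = ["disease_risk", "population_vulnerability", "public_acceptance", "cost_effectiveness"]
weights = np.array([0.4, 0.25, 0.2, 0.15])          # stakeholder-derived in practice; must sum to 1

# rows = candidate strategies, columns = criterion scores normalized to [0, 1]
scores = np.array([
    [0.9, 0.6, 0.5, 0.4],   # e.g., larviciding in high-risk cells (hypothetical)
    [0.7, 0.8, 0.7, 0.6],   # e.g., targeted bed-net distribution (hypothetical)
    [0.5, 0.4, 0.9, 0.8],   # e.g., community education campaign (hypothetical)
])

overall = scores @ weights                           # composite score per strategy
for i in np.argsort(overall)[::-1]:
    print(f"strategy {i}: composite score {overall[i]:.2f}")
```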
Hamiltonian structure of the guiding center plasma model
NASA Astrophysics Data System (ADS)
Burby, J. W.; Sengupta, W.
2018-02-01
The guiding center plasma model (also known as kinetic MHD) is a rigorous sub-cyclotron-frequency closure of the Vlasov-Maxwell system. While the model has been known for decades and it plays a fundamental role in describing the physics of strongly magnetized collisionless plasmas, its Hamiltonian structure has never been found. We provide explicit expressions for the model's Poisson bracket and Hamiltonian and thereby prove that the model is an infinite-dimensional Hamiltonian system. The bracket is derived in a manner which ensures that it satisfies the Jacobi identity. We also report on several previously unknown circulation theorems satisfied by the guiding center plasma model. Without knowledge of the Hamiltonian structure, these circulation theorems would be difficult to guess.
Water solvent effects using continuum and discrete models: The nitromethane molecule, CH3NO2.
Modesto-Costa, Lucas; Uhl, Elmar; Borges, Itamar
2015-11-15
The first three valence transitions of the two nitromethane conformers (CH3NO2) are two dark n → π* transitions and a very intense π → π* transition. In this work, these transitions in gas-phase and solvated in water of both conformers were investigated theoretically. The polarizable continuum model (PCM), two conductor-like screening (COSMO) models, and the discrete sequential quantum mechanics/molecular mechanics (S-QM/MM) method were used to describe the solvation effect on the electronic spectra. Time dependent density functional theory (TDDFT), configuration interaction including all single substitutions and perturbed double excitations (CIS(D)), the symmetry-adapted-cluster CI (SAC-CI), the multistate complete active space second order perturbation theory (CASPT2), and the algebraic-diagrammatic construction (ADC(2)) electronic structure methods were used. Gas-phase CASPT2, SAC-CI, and ADC(2) results are in very good agreement with published experimental and theoretical spectra. Among the continuum models, PCM combined either with CASPT2, SAC-CI, or B3LYP provided good agreement with available experimental data. COSMO combined with ADC(2) described the overall trends of the transition energy shifts. The effect of increasing the number of explicit water molecules in the S-QM/MM approach was discussed and the formation of hydrogen bonds was clearly established. By including explicitly 24 water molecules corresponding to the complete first solvation shell in the S-QM/MM approach, the ADC(2) method gives more accurate results as compared to the TDDFT approach and with similar computational demands. The ADC(2) with S-QM/MM model is, therefore, the best compromise for accurate solvent calculations in a polar environment. © 2015 Wiley Periodicals, Inc.
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
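The simplest of the three models, the event-based binomial model, already shows how maximum likelihood yields both point estimates and confidence limits. The counts in the sketch below are invented for illustration, and the profile-likelihood interval is one common choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.stats import chi2

# k of n tracked dispersers arrive; the rest die (hypothetical counts).
n, k = 40, 26

def nll(p):
    """Binomial negative log-likelihood for dispersal survival probability p."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(k * np.log(p) + (n - k) * np.log(1 - p))

p_hat = k / n                                         # closed-form MLE
# profile-likelihood 95% limits: where the NLL rises by chi2(1, 0.95)/2 above its minimum
target = nll(p_hat) + 0.5 * chi2.ppf(0.95, df=1)
grid = np.linspace(1e-3, 1 - 1e-3, 2000)
inside = grid[np.array([nll(p) for p in grid]) <= target]
print(f"survival MLE = {p_hat:.3f}, 95% CI ~ ({inside.min():.3f}, {inside.max():.3f})")
```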
VR-CoDES and patient-centeredness. The intersection points between a measure and a concept.
Del Piccolo, Lidia
2017-11-01
The Verona Coding Definitions of Emotional Sequences (VR-CoDES) system has been applied in a wide range of studies; in some of these, because of its attention to the healthcare provider's ability to respond to patient emotions, it has been used as a proxy for patient-centeredness. This paper discusses how the VR-CoDES can contribute to the broader concept of patient-centeredness, and its limitations. The VR-CoDES and the patient-centeredness concept are briefly described in order to identify commonalities and distinctions. The VR-CoDES dimensions of Explicit/non-explicit responding and Providing or Reducing Space are analysed in relation to relevant aspects of patient-centred communication. Emotional aspects are encompassed within the patient-centeredness model, but they represent only one of the numerous dimensions that contribute to defining patient-centeredness, and Explicit/non-explicit responding and Providing or Reducing Space serve different functions during communication. The VR-CoDES can help operationalize the description of emotional aspects emerging in a consultation by inducing coders to adopt a factual attitude in assessing how health providers react to patients' expressions of emotions. Further empirical work is needed to establish which affective aspects and which dimensions of health provider responses are relevant and may contribute to patient-centeredness in different clinical settings. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
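The time-step arithmetic behind this argument can be summarized as follows. The RKL2 stage-count factor is quoted from the Meyer, Balsara & Aslam (2012) scheme cited above; consult that reference before relying on the exact constant.

```latex
% Explicit stability limits for the hyperbolic and parabolic parts, and the
% enlarged stable step of an s-stage RKL2 super-time-stepping cycle.
\Delta t_{\mathrm{hyp}} \lesssim \frac{\Delta x}{|v_{\max}|},
\qquad
\Delta t_{\mathrm{par}} \lesssim \frac{(\Delta x)^{2}}{2\,N_{\mathrm{dim}}\,D},
\qquad
\Delta t_{\mathrm{RKL2}} \le \frac{s^{2}+s-2}{4}\,\Delta t_{\mathrm{par}}
```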
Modeling Quantum Dynamics in Multidimensional Systems
NASA Astrophysics Data System (ADS)
Liss, Kyle; Weinacht, Thomas; Pearson, Brett
2017-04-01
Coupling between different degrees-of-freedom is an inherent aspect of dynamics in multidimensional quantum systems. As experiments and theory begin to tackle larger molecular structures and environments, models that account for vibrational and/or electronic couplings are essential for interpretation. Relevant processes include intramolecular vibrational relaxation, conical intersections, and system-bath coupling. We describe a set of simulations designed to model coupling processes in multidimensional molecular systems, focusing on models that provide insight and allow visualization of the dynamics. Undergraduates carried out much of the work as part of a senior research project. In addition to the pedagogical value, the simulations allow for comparison between both explicit and implicit treatments of a system's many degrees-of-freedom.
geophylobuilder 1.0: an arcgis extension for creating 'geophylogenies'.
Kidd, David M; Liu, Xianhua
2008-01-01
Evolution is inherently a spatiotemporal process; however, despite this, phylogenetic and geographical data and models remain largely isolated from one another. Geographical information systems provide a ready-made spatial modelling, analysis and dissemination environment within which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other georeferenced data sets describing the biotic and abiotic environment. geophylobuilder 1.0 is an extension for the arcgis geographical information system that builds a 'geophylogenetic' data model from a phylogenetic tree and associated geographical data. Geophylogenetic database objects can subsequently be queried, spatially analysed and visualized in both 2D and 3D within a geographical information system. © 2007 The Authors.
Scattering from Colloid-Polymer Conjugates with Excluded Volume Effect
Li, Xin; Sanchez-Diaz, Luis E.; Smith, Gregory Scott; ...
2015-01-13
This work presents scattering functions of conjugates consisting of a colloid particle and a self-avoiding polymer chain as a model for protein-polymer conjugates and nanoparticle-polymer conjugates in solution. The model is directly derived from the two-point correlation function with the inclusion of excluded volume effects. The dependence of the calculated scattering function on the geometric shape of the colloid and polymer stiffness is investigated. The model is able to describe the experimental scattering signature of the solutions of suspending hard particle-polymer conjugates and provide additional conformational information. This model explicitly elucidates the link between the global conformation of a conjugate and the microstructure of its constituent components.
Effective Reading and Writing Instruction: A Focus on Modeling
ERIC Educational Resources Information Center
Regan, Kelley; Berkeley, Sheri
2012-01-01
When providing effective reading and writing instruction, teachers need to provide explicit modeling. Modeling is particularly important when teaching students to use cognitive learning strategies. Examples of how teachers can provide specific, explicit, and flexible instructional modeling are presented in the context of two evidence-based…
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.
From Asking to Answering: Making Questions Explicit
ERIC Educational Resources Information Center
Washington, Gene
2006-01-01
"From Asking To Answering: Making Questions Explicit" describes a pedagogical procedure the author has used in writing classes (expository, technical and creative) to help students better understand the purpose, and effect, of text-questions. It accomplishes this by means of thirteen discrete categories (e.g., CLAIMS, COMMITMENT, ANAPHORA, or…
Numerical and Physical Modelling of Bubbly Flow Phenomena - Final Report to the Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrea Prosperetti
This report describes the main features of the results obtained in the course of this project. A new approach to the systematic development of closure relations for the averaged equations of disperse multiphase flow is outlined. The focus of the project is on spatially non-uniform systems and several aspects in which such systems differ from uniform ones are described. Then, the procedure used in deriving the closure relations is given and some explicit results shown. The report also contains a list of publications supported by this grant and a list of the persons involved in the work.
Development of a grid-independent approximate Riemann solver. Ph.D. Thesis - Michigan Univ.
NASA Technical Reports Server (NTRS)
Rumsey, Christopher Lockwood
1991-01-01
A grid-independent approximate Riemann solver for use with the Euler and Navier-Stokes equations was introduced and explored. The two-dimensional Euler and Navier-Stokes equations are described in Cartesian and generalized coordinates, as well as the traveling wave form of the Euler equations. The spatial and temporal discretizations are described for both explicit and implicit time-marching schemes. The grid-aligned flux function of Roe is outlined, while the 5-wave grid-independent flux function is derived. The stability and monotonicity analyses of the 5-wave model are presented. Two-dimensional results are provided and extended to three dimensions. The corresponding results are presented.
NASA Astrophysics Data System (ADS)
Hamzah, Afiq; Hamid, Fatimah A.; Ismail, Razali
2016-12-01
An explicit solution for long-channel surrounding-gate (SRG) MOSFETs is presented for bodies ranging from intrinsic to heavily doped, including the effects of interface traps and fixed oxide charges. The solution is based on the core SRG MOSFET Unified Charge Control Model (UCCM) for heavily doped conditions. The UCCM model of highly doped SRG MOSFETs is derived so as to obtain exactly the equivalent expression as in the undoped case. Taking advantage of the undoped explicit charge-based expression, the asymptotic limits below and above threshold have been redefined to include the effect of trap states for heavily doped cases. After solving the asymptotic limits, an explicit mobile charge expression is obtained which includes the trap state effects. The explicit mobile charge model shows very good agreement with numerical simulation over practical terminal voltages, doping concentrations, geometry effects, and trap state effects due to the fixed oxide charges and interface traps. The drain current is then obtained using Pao-Sah's dual integral, which is expressed as a function of the inversion charge densities at the source/drain ends. The drain current agrees well with the implicit solution and numerical simulation for all regions of operation without employing any empirical parameters. A comparison with previous explicit models at a doping concentration of 1×10^19 cm^-3 confirms the competency of the proposed model, which offers better simplicity and accuracy at higher doping concentrations.
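For orientation, the Pao-Sah-type drift-diffusion drain current that the explicit charge model feeds into can be written generically as an integral over the channel quasi-Fermi potential, with the gate width taken as the cylinder circumference for a surrounding-gate device; the explicit expression for q_inv(V) is what the model above supplies.

```latex
% Generic drift-diffusion (Pao-Sah-type) drain current for a surrounding-gate
% device of radius R and channel length L; q_inv(V) is the inversion-charge
% density per unit gate area delivered by the explicit charge model.
I_D \;=\; \mu\,\frac{2\pi R}{L}\int_{V_S}^{V_D} q_{\mathrm{inv}}(V)\,\mathrm{d}V
```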
From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction
NASA Astrophysics Data System (ADS)
de Tilière, Béatrice
2013-04-01
Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version 𝒢 of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition only depends on the respective fundamental domains, and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain 𝒢_1. Our main result consists in explicitly constructing CRSFs of 𝒢_1 counted by the dimer characteristic polynomial, from CRSFs of G_1, where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002); thus proving a relation on the level of configurations between two well known 2-dimensional critical models.
Guenot, J.; Kollman, P. A.
1992-01-01
Although aqueous simulations with periodic boundary conditions more accurately describe protein dynamics than in vacuo simulations, these are computationally intensive for most proteins. Trp repressor dynamic simulations with a small water shell surrounding the starting model yield protein trajectories that are markedly improved over gas phase, yet computationally efficient. Explicit water in molecular dynamics simulations maintains surface exposure of protein hydrophilic atoms and burial of hydrophobic atoms by opposing the otherwise asymmetric protein-protein forces. This properly orients protein surface side chains, reduces protein fluctuations, and lowers the overall root mean square deviation from the crystal structure. For simulations with crystallographic waters only, a linear or sigmoidal distance-dependent dielectric yields a much better trajectory than does a constant dielectric model. As more water is added to the starting model, the differences between using distance-dependent and constant dielectric models become smaller, although the linear distance-dependent dielectric yields an average structure closer to the crystal structure than does a constant dielectric model. Multiplicative constants greater than one, for the linear distance-dependent dielectric simulations, produced trajectories that are progressively worse in describing trp repressor dynamics. Simulations of bovine pancreatic trypsin inhibitor were used to ensure that the trp repressor results were not protein dependent and to explore the effect of the nonbonded cutoff on the distance-dependent and constant dielectric simulation models. The nonbonded cutoff markedly affected the constant but not distance-dependent dielectric bovine pancreatic trypsin inhibitor simulations. As with trp repressor, the distance-dependent dielectric model with a shell of water surrounding the protein produced a trajectory in better agreement with the crystal structure than a constant dielectric model, and the physical properties of the trajectory average structure, both with and without a nonbonded cutoff, were comparable. PMID:1304396
Diffusion on an Ising chain with kinks
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Mansour, Toufik; Severini, Simone
2009-07-01
We count the number of histories between the two degenerate minimum energy configurations of the Ising model on a chain, as a function of the length n and the number d of kinks that appear above the critical temperature. This is equivalent to counting permutations of length n avoiding certain subsequences depending on d. We give explicit generating functions and compute the asymptotics. The setting considered has a role when describing dynamics induced by quantum Hamiltonians with deconfined quasi-particles.
Hidden sector behind the CKM matrix
NASA Astrophysics Data System (ADS)
Okawa, Shohei; Omura, Yuji
2017-08-01
The small quark mixing, described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix in the standard model, may be a clue to reveal new physics around the TeV scale. We consider a simple scenario that extra particles in a hidden sector radiatively mediate the flavor violation to the quark sector around the TeV scale and effectively realize the observed CKM matrix. The lightest particle in the hidden sector, whose contribution to the CKM matrix is expected to be dominant, is a good dark matter (DM) candidate. There are many possible setups to describe this scenario, so that we investigate some universal predictions of this kind of model, focusing on the contribution of DM to the quark mixing and flavor physics. In this scenario, there is an explicit relation between the CKM matrix and flavor violating couplings, such as four-quark couplings, because both are radiatively induced by the particles in the hidden sector. Then, we can explicitly find the DM mass region and the size of Yukawa couplings between the DM and quarks, based on the study of flavor physics and DM physics. In conclusion, we show that DM mass in our scenario is around the TeV scale, and the Yukawa couplings are between O (0.01 ) and O (1 ). The spin-independent DM scattering cross section is estimated as O (10-9) [pb]. An extra colored particle is also predicted at the O (10 ) TeV scale.
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
Localized mRNA translation and protein association
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2014-08-01
Recent direct observations of localization of mRNAs and proteins both in prokaryotic and eukaryotic cells can be related to slowdown of diffusion of these species due to macromolecular crowding and their ability to aggregate and form immobile or slowly mobile complexes. Here, a generic kinetic model describing both these factors is presented and comprehensively analyzed. Although the model is non-linear, an accurate self-consistent analytical solution of the corresponding reaction-diffusion equation has been constructed, the types of localized protein distributions have been explicitly shown, and the predicted kinetic regimes of gene expression have been classified.
A model structure for an EBM program in a multihospital system.
Schumacher, Dale N; Stock, Joseph R; Richards, Joan K
2003-01-01
Evidence-based medicine (EBM) offers a great opportunity to translate advances in medical science into advances in clinical practice. We describe the structure of a comprehensive EBM program in a multihospital community teaching system. This EBM model is distinct and separate from the peer review process and has achieved substantial physician involvement. The program emanates from the Board of Directors Quality of Care Committee and has strong administrative support. The approach relies extensively on physician involvement and expert physician panels to enhance existing EBM practice guidelines, with an explicit strategy of performance reports and feedback.
NASA Astrophysics Data System (ADS)
Eltahir, E. A.
2011-12-01
A mechanistic and spatially-explicit model of hydrological and entomological processes that lead to malaria transmission is developed and tested against field observations. HYDREMATS (HYDRology, Entomology, and MAlaria Transmission Simulator) is described in (Bomblies and Eltahir, WRR, 44,2008). HYDREMATS is suitable for low cost screening of environmental management interventions, and for studying the impact of climate change on malaria transmission. Examples of specific applications will be presented from Niger in Africa. The potential for using HYDREMATS to study the impact of water reservoirs on malaria transmission will be discussed.
Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full scale testing, which has become impractical.
CONSTRUCTING, PERTURBATION ANALYSIS AND TESTING OF A MULTI-HABITAT PERIODIC MATRIX POPULATION MODEL
We present a matrix model that explicitly incorporates spatial habitat structure and seasonality and discuss preliminary results from a landscape level experimental test. Ecological risk to populations is often modeled without explicit treatment of spatially or temporally distri...
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
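The implicit relations being approximated are the classical Green-Ampt equations for cumulative infiltration and infiltration capacity, shown here in standard textbook form; the paper's notation may differ slightly.

```latex
% Implicit Green-Ampt relations: cumulative infiltration F(t) and infiltration
% capacity f(t), with K the saturated hydraulic conductivity, psi the
% wetting-front suction head and Delta-theta the moisture deficit.
K\,t \;=\; F \;-\; \psi\,\Delta\theta\,
           \ln\!\left(1+\frac{F}{\psi\,\Delta\theta}\right),
\qquad
f \;=\; K\left(1+\frac{\psi\,\Delta\theta}{F}\right)
```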
ERIC Educational Resources Information Center
Dang, Trang Thi Doan; Nguyen, Huong Thu
2013-01-01
Two approaches to grammar instruction are often discussed in the ESL literature: direct explicit grammar instruction (DEGI) (deduction) and indirect explicit grammar instruction (IEGI) (induction). This study aims to explore the effects of indirect explicit grammar instruction on EFL learners' mastery of English tenses. Ninety-four…
Spatial capture-recapture models allowing Markovian transience or dispersal
Royle, J. Andrew; Fuller, Angela K.; Sutherland, Chris
2016-01-01
Spatial capture–recapture (SCR) models are a relatively recent development in quantitative ecology, and they are becoming widely used to model density in studies of animal populations using camera traps, DNA sampling and other methods which produce spatially explicit individual encounter information. One of the core assumptions of SCR models is that individuals possess home ranges that are spatially stationary during the sampling period. For many species, this assumption is unlikely to be met and, even for species that are typically territorial, individuals may disperse or exhibit transience at some life stages. In this paper we first conduct a simulation study to evaluate the robustness of estimators of density under ordinary SCR models when dispersal or transience is present in the population. Then, using both simulated and real data, we demonstrate that such models can easily be described in the BUGS language providing a practical framework for their analysis, which allows us to evaluate movement dynamics of species using capture–recapture data. We find that while estimators of density are extremely robust, even to pathological levels of movement (e.g., complete transience), the estimator of the spatial scale parameter of the encounter probability model is confounded with the dispersal/transience scale parameter. Thus, use of ordinary SCR models to make inferences about density is feasible, but interpretation of SCR model parameters in relation to movement should be avoided. Instead, when movement dynamics are of interest, such dynamics should be parameterized explicitly in the model.
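A minimal simulation sketch of the encounter model described above: half-normal detection around an activity center, with an optional Gaussian random-walk step per occasion to mimic Markovian transience. The trap grid, detection parameters and dispersal scales are hypothetical, and this is not the authors' BUGS code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: a 10 x 10 grid of traps, 5 sampling occasions.
traps = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
K_occ, p0, sigma = 5, 0.3, 0.8

def simulate_histories(n_ind=50, tau=0.0):
    """Bernoulli encounter histories under a half-normal detection model,
    with activity centers taking a Gaussian random-walk step (scale tau) each occasion."""
    y = np.zeros((n_ind, len(traps), K_occ), dtype=int)
    s = rng.uniform(-2, 11, size=(n_ind, 2))          # initial activity centers
    for k in range(K_occ):
        d2 = ((s[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
        p = p0 * np.exp(-d2 / (2 * sigma ** 2))
        y[:, :, k] = rng.random(p.shape) < p
        s = s + rng.normal(0, tau, size=s.shape)      # Markovian transience step
    return y

for tau in (0.0, 1.0, 5.0):
    y = simulate_histories(tau=tau)
    print(f"tau={tau:3.1f}: detected {np.any(y.sum(2) > 0, axis=1).sum()} of 50 individuals, "
          f"{y.sum()} total detections")
```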
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
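The size-biased sampling mechanism described above is easy to see in a quick simulation: if detection probability increases with cluster size, the sample mean cluster size overstates the population mean. The cluster-size distribution and detection curve below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of clusters with Poisson(3) + 1 sizes.
sizes = rng.poisson(3.0, size=10000) + 1

# Detection probability increases with cluster size (logistic in log-size),
# which is the mechanism behind cluster-size bias described above.
p_detect = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * np.log(sizes))))
detected = rng.random(sizes.shape) < p_detect

print(f"true mean cluster size     : {sizes.mean():.2f}")
print(f"observed mean cluster size : {sizes[detected].mean():.2f}")
print(f"inflation of mean cluster size: "
      f"{100 * (sizes[detected].mean() / sizes.mean() - 1):.1f}%")
```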
Modeling of Non-isothermal Austenite Formation in Spring Steel
NASA Astrophysics Data System (ADS)
Huang, He; Wang, Baoyu; Tang, Xuefeng; Li, Junling
2017-12-01
The austenitization kinetics description of spring steel 60Si2CrA plays an important role in providing guidelines for industrial production. The dilatometric curves of 60Si2CrA steel were measured using a DIL805A dilatometer at heating rates of 0.3 K/s to 50 K/s (0.3 °C/s to 50 °C/s). Based on the dilatometric curves, a unified kinetics model using the internal state variable (ISV) method was derived to describe the non-isothermal austenitization kinetics of 60Si2CrA; the model covers both the incubation and transition periods. The material constants in the model were determined using a genetic algorithm-based optimization technique. Good agreement between predicted and experimental volume fractions of transformed austenite was obtained, indicating that the model is effective for describing the austenitization kinetics of 60Si2CrA steel. Compared with other modeling methods for austenitization kinetics, this ISV-based model has some advantages, such as a simple formulation and explicit physical meaning, and can probably be used in engineering practice.
Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium
NASA Astrophysics Data System (ADS)
González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César
2018-01-01
This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
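For readers unfamiliar with the θ-method, the sketch below applies it to the plain 1-D diffusion equation (a much simpler problem than the telegraphic dual-porosity model above) and shows how θ = 0 (explicit), θ = 1/2 (Crank-Nicolson) and θ = 1 (fully implicit) behave when the time step violates the explicit stability bound. The grid and step sizes are arbitrary.

```python
import numpy as np

def theta_step(u, r, theta):
    """One theta-method step for u_t = u_xx on the unit interval with u = 0 at both ends.
    r = dt/dx^2; theta = 0 (explicit), 0.5 (Crank-Nicolson), 1 (fully implicit)."""
    n = len(u)
    A = np.eye(n) * (1 + 2 * r * theta)
    B = np.eye(n) * (1 - 2 * r * (1 - theta))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r * theta
        B[i, i + 1] = B[i + 1, i] = r * (1 - theta)
    return np.linalg.solve(A, B @ u)

nx, dt, t_end = 49, 2.0e-3, 0.05
x = np.linspace(0, 1, nx + 2)[1:-1]      # interior nodes only
dx = x[1] - x[0]
r = dt / dx**2                           # r = 5 > 0.5 here, so the explicit scheme is unstable
exact = np.exp(-np.pi**2 * t_end) * np.sin(np.pi * x)

for theta in (0.0, 0.5, 1.0):
    u = np.sin(np.pi * x)
    for _ in range(int(t_end / dt)):
        u = theta_step(u, r, theta)
    print(f"theta={theta:.1f}: max error vs exact = {np.abs(u - exact).max():.3e}")
```

The explicit run blows up through round-off amplification, while both implicit variants stay bounded at the same time step, which mirrors the scheme comparison made in the study.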
Radon transport model into a porous ground layer of finite capacity
NASA Astrophysics Data System (ADS)
Parovik, Roman
2017-10-01
The model of radon transfer in a porous ground layer of finite capacity (thickness) is considered. With the help of the Laplace integral transform, a numerical solution of this model is obtained, based on the construction of a generalized quadrature formula of the highest degree of accuracy for inverting the transform back to the original function. The calculated curves are constructed and investigated as functions of the diffusion and advection coefficients. The work also presents a mathematical model that describes the stick-slip effect with hereditarity (memory) taken into account; this model can be regarded as a mechanical model of earthquake preparation. For this model an explicit finite-difference scheme was proposed, from which the waveforms and phase trajectories of the hereditary stick-slip effect were constructed.
Song, Yun S; Steinrücken, Matthias
2012-03-01
The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
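A crude numerical analogue of the spectral idea above, assuming mutation only (the paper handles arbitrary diploid selection with a polynomial spectral method; this is just an illustrative sketch): discretize the diffusion generator as a Markov rate matrix, read the rate of convergence to stationarity off its spectral gap, and obtain an approximate transition distribution from the matrix exponential. The grid size and mutation rates are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

theta1, theta2 = 0.5, 0.5            # hypothetical scaled mutation rates
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
drift = 0.5 * (theta1 * (1 - x) - theta2 * x)
diff = 0.5 * x * (1 - x)

# Build a rate-matrix (Markov-chain) approximation Q of the diffusion generator.
Q = np.zeros((n, n))
for i in range(n):
    up = (diff[i] / h**2 + drift[i] / (2 * h)) if i < n - 1 else 0.0
    dn = (diff[i] / h**2 - drift[i] / (2 * h)) if i > 0 else 0.0
    up, dn = max(up, 0.0), max(dn, 0.0)
    if i < n - 1:
        Q[i, i + 1] = up
    if i > 0:
        Q[i, i - 1] = dn
    Q[i, i] = -(up + dn)

# Spectral view: eigenvalues of Q approximate those of the generator;
# the spectral gap gives the rate of convergence to the stationary distribution.
ev = np.sort(np.linalg.eigvals(Q).real)[::-1]
print("leading eigenvalues:", np.round(ev[:4], 3))
print("spectral gap (convergence rate):", round(-ev[1], 3))

# Approximate transition distribution from x0 = 0.2 after time t.
t, i0 = 0.5, np.argmin(np.abs(x - 0.2))
p = expm(Q * t)[i0]                  # row = P(X_t near grid point | X_0 = 0.2)
print("total probability:", round(p.sum(), 6), " mean allele freq:", round((p * x).sum(), 3))
```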
Solvent effects on the properties of hyperbranched polythiophenes.
Torras, Juan; Zanuy, David; Aradilla, David; Alemán, Carlos
2016-09-21
The structural and electronic properties of all-thiophene dendrimers and dendrons in solution have been evaluated using very different theoretical approaches based on quantum mechanical (QM) and hybrid QM/molecular mechanics (MM) methodologies: (i) calculations on minimum energy conformations using an implicit solvation model in combination with density functional theory (DFT) or time-dependent DFT (TD-DFT) methods; (ii) hybrid QM/MM calculations, in which the solute and solvent molecules are represented at the DFT level and as point charges, respectively, on snapshots extracted from classical molecular dynamics (MD) simulations using explicit solvent molecules, and (iii) QM/MM-MD trajectories in which the solute is described at the DFT or TD-DFT level and the explicit solvent molecules are represented using classical force-fields. Calculations have been performed in dichloromethane, tetrahydrofuran and dimethylformamide. A comparison of the results obtained using the different approaches with the available experimental data indicates that the incorporation of effects associated with both the conformational dynamics of the dendrimer and the explicit solvent molecules is strictly necessary to satisfactorily reproduce the properties of the investigated systems. Accordingly, QM/MM-MD simulations are able to capture such effects providing a reliable description of electronic properties-conformational flexibility relationships in all-Th dendrimers.
We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
NASA Astrophysics Data System (ADS)
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, which is a novel particle method in computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we get the average values of Young’s modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores is included via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to get the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
Automorphic Forms and Mock Modular Forms in String Theory
NASA Astrophysics Data System (ADS)
Nazaroglu, Caner
We study a variety of modular invariant objects in relation to string theory. First, we focus on Jacobi forms over lattices of generic rank and Siegel forms that appear in N = 2, D = 4 compactifications of the heterotic string with Wilson lines. Constraints from the low-energy spectrum and modularity are employed to deduce the relevant supersymmetric partition functions entirely. This procedure is applied to models that lead to Jacobi forms of index 3, 4, 5 as well as Jacobi forms over the root lattices A2 and A3. These computations are then checked against an explicit orbifold model which can be Higgsed to the models in question. Models with a single Wilson line are then studied in detail, with their relation to the paramodular group Gamma_m as T-duality group made explicit. These results on the heterotic side are then turned into predictions for geometric invariants using Type II-heterotic duality. Secondly, we study theta functions for indefinite-signature lattices of generic signature. Building on results in the literature for signature (n-1,1) and (n-2,2) lattices, we work out the properties of generalized error functions which we call r-tuple error functions. We then use these functions to build such indefinite theta functions and describe their modular completions.
Foy, Robbie; Ovretveit, John; Shekelle, Paul G; Pronovost, Peter J; Taylor, Stephanie L; Dy, Sydney; Hempel, Susanne; McDonald, Kathryn M; Rubenstein, Lisa V; Wachter, Robert M
2011-05-01
Theories provide a way of understanding and predicting the effects of patient safety practices (PSPs), interventions intended to prevent or mitigate harm caused by healthcare or risks of such harm. Yet most published evaluations make little or no explicit reference to theory, thereby hindering efforts to generalise findings from one context to another. Theories from a wide range of disciplines are potentially relevant to research on PSPs. Theory can be used in research to explain clinical and organisational behaviour, to guide the development and selection of PSPs, and in evaluating their implementation and mechanisms of action. One key recommendation from an expert consensus process is that researchers should describe the theoretical basis for chosen intervention components or provide an explicit logic model for 'why this PSP should work.' Future theory-driven evaluations would enhance generalisability and help build a cumulative understanding of the nature of change.
Memory for music in Alzheimer's disease: unforgettable?
Baird, Amee; Samson, Séverine
2009-03-01
The notion that memory for music can be preserved in patients with Alzheimer's Disease (AD) has been raised by a number of case studies. In this paper, we review the current research examining musical memory in patients with AD. In keeping with models of memory described in the non-musical domain, we propose that various forms of musical memory exist, and may be differentially impaired in AD, reflecting the pattern of neuropathological changes associated with the condition. Our synthesis of this literature reveals a dissociation between explicit and implicit musical memory functions. Implicit, specifically procedural musical memory, or the ability to play a musical instrument, can be spared in musicians with AD. In contrast, explicit musical memory, or the recognition of familiar or unfamiliar melodies, is typically impaired. Thus, the notion that music is unforgettable in AD is not wholly supported. Rather, it appears that the ability to play a musical instrument may be unforgettable in some musicians with AD.
Moving from Explicit to Implicit: A Case Study of Improving Inferential Comprehension
ERIC Educational Resources Information Center
Yeh, Yi-Fen; McTigue, Erin M.; Joshi, R. Malatesha
2012-01-01
The article describes a successful intervention program in developing inferential comprehension in a sixth grader. Steve (pseudonym) was proficient in word reading, was able to detect explicit information while reading, but struggled with linking textual information to yield integral ideas. After 10 weeks of working with Steve on word analogies,…
From deep TLS validation to ensembles of atomic models built from elemental motions
Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; ...
2015-07-28
The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project.
Taylor, Mark J; Taylor, Natasha
2014-12-01
England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Bertuzzo, E.; Mari, L.; Righetto, L.; Gatto, M.; Casagrandi, R.; Rodriguez-Iturbe, I.
2010-12-01
A recently proposed model for cholera epidemics is examined. The model accounts for local communities of susceptibles and infectives in a spatially explicit arrangement of nodes linked by networks having different topologies. The vehicle of infection (Vibrio cholerae) is transported through the network links which are thought of as hydrological connections among susceptible communities. The mathematical tools used are borrowed from general schemes of reactive transport on river networks acting as the environmental matrix for the circulation and mixing of water-borne pathogens. The results of a large-scale application to the KwaZulu-Natal epidemics of 2001-2002 will be discussed. Useful theoretical results derived in the spatially-explicit context will also be reviewed (e.g., the exact derivation of the speed of propagation for traveling fronts of epidemics on regular lattices endowed with uniform population density). Network effects will be discussed. The analysis of the limit case of uniformly distributed population density proves instrumental in establishing the overall conditions for the relevance of spatially explicit models. To that extent, it is shown that the ratio between spreading and disease outbreak timescales proves the crucial parameter. The relevance of our results lies in the major differences potentially arising between the predictions of spatially explicit models and traditional compartmental models of the SIR-like type. Our results suggest that in many cases of real-life epidemiological interest timescales of disease dynamics may trigger outbreaks that significantly depart from the predictions of compartmental models. Finally, a view on further developments includes: hydrologically improved aquatic reservoir models for pathogens; human mobility patterns affecting disease propagation; double-peak emergence and seasonality in the spatially explicit epidemic context.
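A toy version of the kind of spatially explicit model sketched above: susceptible-infected-bacteria (SIB) dynamics at a handful of nodes, with pathogens exported downstream along hypothetical hydrological links and the whole system integrated with explicit Euler steps. The network, rates and populations are invented for illustration and are not the calibrated model used in the study.

```python
import numpy as np

# Hypothetical 4-node river network: W[i, j] = fraction of node i's bacteria
# exported to downstream node j per day (hydrological connectivity).
W = np.array([[0.0, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.3],
              [0.0, 0.0, 0.0, 0.0]])
H = np.array([2e4, 5e4, 1e5, 3e5], dtype=float)               # node populations

beta, gamma, mu_B, p_shed, K_half = 0.4, 0.1, 0.25, 5.0, 1e4  # illustrative rates

def step(S, I, B, dt=0.1):
    """One explicit Euler step of a minimal SIB metacommunity model."""
    force = beta * B / (K_half + B)                  # dose-response force of infection
    new_inf = force * S
    dS = -new_inf
    dI = new_inf - gamma * I
    dB = p_shed * I - mu_B * B - W.sum(1) * B + W.T @ B   # shedding, decay, transport
    return S + dt * dS, I + dt * dI, B + dt * dB

S, I, B = H.copy(), np.zeros(4), np.zeros(4)
I[0], S[0] = 10.0, S[0] - 10.0                       # seed the outbreak upstream
for day in range(120):
    for _ in range(10):
        S, I, B = step(S, I, B)
    if day % 30 == 0:
        print(f"day {day:3d}  infected per node: {np.round(I).astype(int)}")
```

Even in this crude sketch, the downstream nodes light up with a delay set by the hydrological transport rate, which is the spatial effect a compartmental (non-spatial) model cannot reproduce.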
Environmental decision-making and the influences of various stressors, such as landscape and climate changes on water quantity and quality, requires the application of environmental modeling. Spatially explicit environmental and watershed-scale models using GIS as a base framewor...
HexSim - A general purpose framework for spatially-explicit, individual-based modeling
HexSim is a framework for constructing spatially-explicit, individual-based computer models designed for simulating terrestrial wildlife population dynamics and interactions. HexSim is useful for a broad set of modeling applications. This talk will focus on a subset of those ap...
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
Fuggle, Peter; Bevington, Dickon; Cracknell, Liz; Hanley, James; Hare, Suzanne; Lincoln, John; Richardson, Garry; Stevens, Nina; Tovey, Heather; Zlotowitz, Sally
2015-07-01
AMBIT (Adolescent Mentalization-Based Integrative Treatment) is a developing team approach to working with hard-to-reach adolescents. The approach applies the principle of mentalization to relationships with clients, team relationships and working across agencies. It places a high priority on the need for locally developed evidence-based practice, and proposes that outcome evaluation needs to be explicitly linked with processes of team learning using a learning organization framework. A number of innovative methods of team learning are incorporated into the AMBIT approach, particularly a system of web-based wiki-formatted AMBIT manuals individualized for each participating team. The paper describes early development work of the model and illustrates ways of establishing explicit links between outcome evaluation, team learning and manualization by describing these methods as applied to two AMBIT-trained teams; one team working with young people on the edge of care (AMASS - the Adolescent Multi-Agency Support Service) and another working with substance use (CASUS - Child and Adolescent Substance Use Service in Cambridgeshire). Measurement of the primary outcomes for each team (which were generally very positive) facilitated team learning and adaptations of methods of practice that were consolidated through manualization. © The Author(s) 2014.
Open Cascades as Simple Solutions to Providing Ultrasensitivity and Adaptation in Cellular Signaling
Srividhya, Jeyaraman; Li, Yongfeng; Pomerening, Joseph R.
2011-01-01
Cell signaling is achieved predominantly by reversible phosphorylation-dephosphorylation reaction cascades. Up until now, circuits conferring adaptation have all required the presence of a cascade with some type of closed topology: negative–feedback loop with a buffering node, or incoherent feedforward loop with a proportioner node. In this paper—using Goldbeter and Koshland-type expressions—we propose a differential equation model to describe a generic, open signaling cascade that elicits an adaptation response. This is accomplished by coupling N phosphorylation–dephosphorylation cycles unidirectionally, without any explicit feedback loops. Using this model, we show that as the length of the cascade grows, the steady states of the downstream cycles reach a limiting value. In other words, our model indicates that there are a minimum number of cycles required to achieve a maximum in sensitivity and amplitude in the response of a signaling cascade. We also describe for the first time that the phenomenon of ultrasensitivity can be further subdivided into three sub–regimes, separated by sharp stimulus threshold values: OFF, OFF-ON-OFF, and ON. In the OFF-ON-OFF regime, an interesting property emerges. In the presence of a basal amount of activity, the temporal evolution of early cycles yields damped peak responses. On the other hand, the downstream cycles switch rapidly to a higher activity state for an extended period of time, prior to settling to an OFF state (OFF-ON-OFF). This response arises from the changing dynamics between a feed–forward activation module and dephosphorylation reactions. In conclusion, our model gives the new perspective that open signaling cascades embedded in complex biochemical circuits may possess the ability to show a switch–like adaptation response, without the need for any explicit feedback circuitry. PMID:21566270
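A minimal sketch of an open cascade of Goldbeter-Koshland-type cycles coupled unidirectionally, as described above: each layer's active fraction drives the next layer, with no feedback. The number of layers and the rate constants are hypothetical; the code only illustrates how downstream steady states saturate and how small Michaelis constants give switch-like (ultrasensitive) responses.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x[i] is the active fraction of cycle i; each active layer acts as the
# kinase for the next layer. All rate constants below are hypothetical.
N, k_act, V_deact, Km = 6, 2.0, 0.5, 0.05

def cascade(t, x, stimulus):
    drive = np.concatenate(([stimulus], x[:-1]))      # upstream activity for each cycle
    act = k_act * drive * (1 - x) / (Km + (1 - x))    # Goldbeter-Koshland activation
    deact = V_deact * x / (Km + x)                    # constitutive deactivation
    return act - deact

for stimulus in (0.05, 0.2, 1.0):
    sol = solve_ivp(cascade, (0, 200), np.zeros(N), args=(stimulus,), rtol=1e-8)
    print(f"stimulus={stimulus:4.2f}  steady-state active fractions:",
          np.round(sol.y[:, -1], 3))
```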
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models, as well as anisotropic eddy viscosity models, is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
A novel description of FDG excretion in the renal system: application to metformin-treated models
NASA Astrophysics Data System (ADS)
Garbarino, S.; Caviglia, G.; Sambuceti, G.; Benvenuto, F.; Piana, M.
2014-05-01
This paper introduces a novel compartmental model describing the excretion of 18F-fluoro-deoxyglucose (FDG) in the renal system and a numerical method based on the maximum likelihood for its reduction. This approach accounts for variations in FDG concentration due to water re-absorption in renal tubules and the increase of the bladder’s volume during the FDG excretion process. From the computational viewpoint, the reconstruction of the tracer kinetic parameters is obtained by solving the maximum likelihood problem iteratively, using a non-stationary, steepest descent approach that explicitly accounts for the Poisson nature of nuclear medicine data. The reliability of the method is validated against two sets of synthetic data realized according to realistic conditions. Finally we applied this model to describe FDG excretion in the case of animal models treated with metformin. In particular we show that our approach allows the quantitative estimation of the reduction of FDG de-phosphorylation induced by metformin.
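The sketch below is only a generic stand-in for the estimation step described above: it fits nonnegative parameters of a linear surrogate forward model to synthetic Poisson counts by projected steepest descent on the Poisson negative log-likelihood, with a crudely adaptive (non-stationary) step size. The renal compartmental model itself is nonlinear and is not reproduced here; the matrix A, the true parameters and the data are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic linear surrogate: expected counts lam = A @ k, data ~ Poisson(lam).
n_t, n_k = 40, 3
A = np.abs(rng.normal(1.0, 0.5, size=(n_t, n_k))) * 50.0
k_true = np.array([0.8, 0.3, 1.5])
data = rng.poisson(A @ k_true)

def neg_log_lik(k):
    lam = A @ k
    return np.sum(lam - data * np.log(lam + 1e-12))

k, step = np.ones(n_k), 1e-4
for it in range(2000):
    lam = A @ k
    grad = A.T @ (1.0 - data / (lam + 1e-12))    # gradient of the Poisson NLL
    k_new = np.maximum(k - step * grad, 1e-8)    # projected steepest-descent step
    if neg_log_lik(k_new) > neg_log_lik(k):
        step *= 0.5                              # crude non-stationary step control
    else:
        k, step = k_new, step * 1.1
print("estimated parameters:", np.round(k, 3), " true:", k_true)
```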
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
GRAIN-SCALE FAILURE IN THERMAL SPALLATION DRILLING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, S C; Lomov, I; Roberts, J J
2012-01-19
Geothermal power promises clean, renewable, reliable and potentially widely-available energy, but is limited by high initial capital costs. New drilling technologies are required to make geothermal power financially competitive with other energy sources. One potential solution is offered by Thermal Spallation Drilling (TSD) - a novel drilling technique in which small particles (spalls) are released from the rock surface by rapid heating. While TSD has the potential to improve drilling rates of brittle granitic rocks, the coupled thermomechanical processes involved in TSD are poorly described, making system control and optimization difficult for this drilling technology. In this paper, we discuss results from a new modeling effort investigating thermal spallation drilling. In particular, we describe an explicit model that simulates the grain-scale mechanics of thermal spallation and use this model to examine existing theories concerning spalling mechanisms. We will report how borehole conditions influence spall production, and discuss implications for macro-scale models of drilling systems.
1976-07-01
rupture (right) to represent a bilateral rupture is described in the text ... Far-field radiation patterns for the bilateral rupture ... particularly effective for detecting, isolating and timing the various seismic phases (Pg, P*, Pn, Sg, S*, Sn, etc.) that are recorded on event seismograms ... of the stress field during rupture. 5. A criterion allowing the rupture to heal. All earthquake models must, implicitly or explicitly, deal with
SATware: A Semantic Approach for Building Sentient Spaces
NASA Astrophysics Data System (ADS)
Massaguer, Daniel; Mehrotra, Sharad; Vaisenberg, Ronen; Venkatasubramanian, Nalini
This chapter describes the architecture of a semantic-based middleware environment for building sensor-driven sentient spaces. The proposed middleware explicitly models sentient space semantics (i.e., entities, spaces, activities) and supports mechanisms to map sensor observations to the state of the sentient space. We argue that such a semantic approach provides a powerful programming environment for building sensor spaces. In addition, the approach provides natural ways to exploit semantics for a variety of purposes including scheduling under resource constraints and sensor recalibration.
A strategy for understanding noise-induced annoyance
NASA Astrophysics Data System (ADS)
Fidell, S.; Green, D. M.; Schultz, T. J.; Pearsons, K. S.
1988-08-01
This report provides a rationale for development of a systematic approach to understanding noise-induced annoyance. Two quantitative models are developed to explain: (1) the prevalence of annoyance due to residential exposure to community noise sources; and (2) the intrusiveness of individual noise events. Both models deal explicitly with the probabilistic nature of annoyance, and assign clear roles to acoustic and nonacoustic determinants of annoyance. The former model provides a theoretical foundation for empirical dosage-effect relationships between noise exposure and community response, while the latter model differentiates between the direct and immediate annoyance of noise intrusions and response bias factors that influence the reporting of annoyance. The assumptions of both models are identified, and the nature of the experimentation necessary to test hypotheses derived from the models is described.
Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.
2015-01-01
This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection-mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Moreover, presented in figure 1 is the evolution of the diffusion profiles of a containment granuloma over time.
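The stability constraint and the steady-state alternative mentioned above can be illustrated on a toy 1-D reaction-diffusion problem: the explicit finite-difference update is limited to dt <= dx^2/(2D), whereas the steady-state field comes from a single (here tridiagonal) linear solve. The geometry, coefficients and units below are invented and are unrelated to the granuloma code.

```python
import numpy as np

# Toy 1-D oxygen field with uptake: D*u'' - k*u + s = 0 on [0, L], u = u0 at both ends.
D, k_up, s, u0, L, n = 2.0e3, 1.0, 50.0, 100.0, 200.0, 201   # hypothetical units (um, s)
x = np.linspace(0, L, n)
dx = x[1] - x[0]

# Explicit (FTCS) time stepping: dt is capped by the stability condition.
dt = 0.9 * dx**2 / (2 * D)
u = np.full(n, u0)
for _ in range(20000):
    lap = np.zeros(n)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * (D * lap[1:-1] - k_up * u[1:-1] + s)

# Steady-state alternative: assemble the tridiagonal system and solve it once.
A = np.zeros((n, n))
b = np.full(n, -s)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = u0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = D / dx**2
    A[i, i] = -2 * D / dx**2 - k_up
u_ss = np.linalg.solve(A, b)

print(f"dt allowed by the stability bound: {dt:.4f}")
print(f"max |explicit - steady solve| after 20000 steps: {np.abs(u - u_ss).max():.3e}")
```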
QSAR modeling based on structure-information for properties of interest in human health.
Hall, L H; Hall, L M
2005-01-01
The development of QSAR models based on topological structure description is presented for problems in human health. These models are based on the structure-information approach to quantitative biological modeling and prediction, in contrast to the mechanism-based approach. The structure-information approach is outlined, starting with basic structure information developed from the chemical graph (connection table). Information explicit in the connection table (element identity and skeletal connections) leads to significant (implicit) structure information that is useful for establishing sound models of a wide range of properties of interest in drug design. Valence state definition leads to relationships for valence state electronegativity and atom/group molar volume. Based on these important aspects of molecules, together with skeletal branching patterns, both the electrotopological state (E-state) and molecular connectivity (chi indices) structure descriptors are developed and described. A summary of four QSAR models indicates the wide range of applicability of these structure descriptors and the predictive quality of QSAR models based on them: aqueous solubility (5535 chemically diverse compounds, 938 in external validation), percent oral absorption (%OA, 417 therapeutic drugs, 195 drugs in external validation testing), AMES mutagenicity (2963 compounds including 290 therapeutic drugs, 400 in external validation), fish toxicity (92 substituted phenols, anilines and substituted aromatics). These models are established independent of explicit three-dimensional (3-D) structure information and are directly interpretable in terms of the implicit structure information useful to the drug design process.
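As a concrete example of the kind of topological descriptor referred to above, the sketch below computes the first-order molecular connectivity index, chi1 = sum over skeletal bonds of 1/sqrt(delta_i*delta_j), directly from a hand-written hydrogen-suppressed adjacency list; E-state indices additionally involve valence-state electronegativities and are not reproduced here. The example molecules are chosen arbitrarily and no cheminformatics toolkit is assumed.

```python
from math import sqrt

def chi1(bonds):
    """First-order molecular connectivity index from a hydrogen-suppressed graph:
    chi1 = sum over bonds of 1/sqrt(delta_i * delta_j), delta = skeletal degree."""
    degree = {}
    for a, b in bonds:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sum(1.0 / sqrt(degree[a] * degree[b]) for a, b in bonds)

# Hand-coded skeletons (atom indices only, hydrogens suppressed).
n_butane = [(0, 1), (1, 2), (2, 3)]                  # C-C-C-C
isobutane = [(0, 1), (1, 2), (1, 3)]                 # branched C4
cyclohexane = [(i, (i + 1) % 6) for i in range(6)]   # 6-ring

for name, bonds in [("n-butane", n_butane), ("isobutane", isobutane),
                    ("cyclohexane", cyclohexane)]:
    print(f"{name:12s} chi1 = {chi1(bonds):.3f}")
```

The branched isomer gets a smaller chi1 than the straight chain, which is how branching patterns enter such structure-information models without any 3-D geometry.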
Aerodynamic stability analysis of NASA J85-13/planar pressure pulse generator installation
NASA Technical Reports Server (NTRS)
Chung, K.; Hosny, W. M.; Steenken, W. G.
1980-01-01
A digital computer simulation model for the J85-13/Planar Pressure Pulse Generator (P3G) test installation was developed by modifying an existing General Electric compression system model. This modification included the incorporation of a novel method for describing the unsteady blade lift force. This approach significantly enhanced the capability of the model to handle unsteady flows. In addition, the frequency response characteristics of the J85-13/P3G test installation were analyzed in support of selecting instrumentation locations to avoid standing wave nodes within the test apparatus and thus, low signal levels. The feasibility of employing an explicit analytical expression for surge prediction was also studied.
Improvements to the RADIOM non-LTE model
NASA Astrophysics Data System (ADS)
Busquet, M.; Colombant, D.; Klapisch, M.; Fyfe, D.; Gardner, J.
2009-12-01
In 1993, we proposed the RADIOM model [M. Busquet, Phys. Fluids 85 (1993) 4191], where an ionization temperature Tz is used to derive non-LTE properties from LTE data. Tz is obtained from an "extended Saha equation" in which unbalanced transitions, like radiative decay, give the non-LTE behavior. Since then, major improvements have been made. Tz has been shown to be more than a heuristic value: it describes the actual distribution of excited and ionized states and can be understood as an "effective temperature". Therefore we complement the extended Saha equation by explicitly introducing auto-ionization/dielectronic capture. We also use the SCROLL model to benchmark the computed values of Tz.
NASA Technical Reports Server (NTRS)
Pereira, J. M.; Revilock, D. M.
2004-01-01
Under the Federal Aviation Administration's Airworthiness Assurance Center of Excellence and the Aircraft Catastrophic Failure Prevention Program, National Aeronautics and Space Administration Glenn Research Center collaborated with Arizona State University, Honeywell Engines, Systems and Services, and SRI International to develop improved computational models for designing fabric-based engine containment systems. In the study described in this report, ballistic impact tests were conducted on layered dry fabric rings to provide impact response data for calibrating and verifying the improved numerical models. This report provides data on projectile velocity, impact and residual energy, and fabric deformation for a number of different test conditions.
Current Status and Challenges of Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Gelaro, R.
2016-12-01
The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
Global asymptotic stability and hopf bifurcation for a blood cell production model.
Crauste, Fabien
2006-04-01
We analyze the asymptotic stability of a nonlinear system of two differential equations with delay, describing the dynamics of blood cell production. This process takes place in the bone marrow, where stem cells differentiate through division into blood cells. Taking into account an explicit role of the total population of hematopoietic stem cells in the introduction of cells into the cycle, we are led to study a characteristic equation with delay-dependent coefficients. We determine a necessary and sufficient condition for the global stability of the first steady state of our model, which describes the population's dying out, and we obtain the existence of a Hopf bifurcation for the only nontrivial positive steady state, leading to the existence of periodic solutions. These latter are related to dynamical diseases affecting blood cells known for their cyclic nature.
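To see delay-induced (Hopf-type) oscillations in the simplest possible setting, the sketch below integrates a classical Mackey-Glass delayed-feedback equation for a single cell population with a fixed-step Euler scheme and a stored history. It is not the two-equation model analyzed in the paper, and the parameter values are standard textbook choices rather than anything fitted.

```python
import numpy as np

# Illustrative only: dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t).
# The delayed production term destabilizes the positive steady state for
# large delays, which is the Hopf-bifurcation mechanism behind cyclic blood diseases.
beta, gamma, n_hill = 0.2, 0.1, 10.0

def integrate(tau, t_end=1000.0, dt=0.01, x0=0.5):
    steps, lag = int(t_end / dt), int(tau / dt)
    x = np.full(steps + 1, x0)                 # constant history on [-tau, 0]
    for i in range(steps):
        x_tau = x[i - lag] if i >= lag else x0
        dx = beta * x_tau / (1.0 + x_tau ** n_hill) - gamma * x[i]
        x[i + 1] = x[i] + dt * dx
    return x

for tau in (2.0, 17.0):
    x = integrate(tau)
    tail = x[-20000:]                          # last 200 time units
    print(f"tau={tau:5.1f}: min={tail.min():.3f}  max={tail.max():.3f}  "
          f"({'oscillatory' if tail.max() - tail.min() > 0.05 else 'steady'})")
```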
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
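One hedged way to picture the idea is the modulating-function variant below (a sketch under hypothetical parameters, not necessarily the paper's exact formulation): for a second-order model x'' + a*x' + b*x = c*u, multiplying by test functions that vanish together with their first derivatives at both ends of the record and integrating by parts moves all derivatives onto known functions, so the parameters follow from an ordinary linear least-squares solve with no knowledge of initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Hypothetical system used only to generate "measured" data.
a_true, b_true, c_true = 0.6, 4.0, 2.0
T = 20.0
u = lambda t: np.sin(1.3 * t) + 0.5 * np.cos(0.4 * t)

sol = solve_ivp(lambda t, y: [y[1], -a_true * y[1] - b_true * y[0] + c_true * u(t)],
                (0, T), [1.0, 0.0], dense_output=True, rtol=1e-9)
t = np.linspace(0, T, 2001)
x, ut = sol.sol(t)[0], u(t)

rows, rhs = [], []
for k in range(1, 9):                          # 8 test functions -> overdetermined system
    w = k * np.pi / T
    phi = np.sin(w * t) ** 2                   # phi and phi' vanish at both ends
    dphi = w * np.sin(2 * w * t)
    ddphi = 2 * w**2 * np.cos(2 * w * t)
    # Integration by parts: int(phi*x'') = int(ddphi*x), int(phi*x') = -int(dphi*x)
    rows.append([-trapezoid(dphi * x, t), trapezoid(phi * x, t), -trapezoid(phi * ut, t)])
    rhs.append(-trapezoid(ddphi * x, t))
est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print("estimated (a, b, c):", np.round(est, 4), " true:", (a_true, b_true, c_true))
```

Note that the initial conditions used to generate the data never enter the estimator, which is the point of integrating the equation error exactly.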
Generalized reproduction numbers and the prediction of patterns in waterborne disease
Gatto, Marino; Mari, Lorenzo; Bertuzzo, Enrico; Casagrandi, Renato; Righetto, Lorenzo; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2012-01-01
Understanding, predicting, and controlling outbreaks of waterborne diseases are crucial goals of public health policies, but pose challenging problems because infection patterns are influenced by spatial structure and temporal asynchrony. Although explicit spatial modeling is made possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is still lacking. Here we show that the requirement that all the local reproduction numbers be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix, explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number (the dominant eigenvalue of the reproduction matrix) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of the reproduction matrix. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of the reproduction matrix provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology, and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections. PMID:23150538
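The linear-algebra core of the onset condition above can be shown in a few lines: assemble a reproduction matrix whose entry (i, j) couples community j to community i, check whether its dominant eigenvalue exceeds one, and read the dominant eigenvector as the expected geography of the outbreak. The 4-by-4 matrix below is entirely made up (every local, diagonal entry is below one) and is not one of the paper's derived expressions.

```python
import numpy as np

# Hypothetical 4-community reproduction matrix: entry (i, j) is the expected
# number of new infections in community i generated by one infective in
# community j (local term on the diagonal, hydrological/mobility coupling off it).
G = np.array([[0.70, 0.20, 0.00, 0.00],
              [0.25, 0.60, 0.15, 0.00],
              [0.00, 0.30, 0.80, 0.10],
              [0.00, 0.00, 0.35, 0.65]])

eigvals, eigvecs = np.linalg.eig(G)
i = np.argmax(eigvals.real)
R0_gen = eigvals.real[i]
v = np.abs(eigvecs[:, i].real)
v /= v.sum()

print(f"generalized reproduction number: {R0_gen:.3f} "
      f"({'outbreak' if R0_gen > 1 else 'no outbreak'} expected)")
print("dominant eigenvector (expected spatial pattern):", np.round(v, 3))
```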
Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints
NASA Astrophysics Data System (ADS)
Dongnan, L.; Wan, W.; Zhao, J.
2017-12-01
Multi-objective reservoir operation is increasingly asked to consider environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, which makes it difficult to use them to direct reservoir releases. In an attempt to weigh the benefit of environmental flow against economic objectives, a two-objective reservoir optimization model is developed in which all 33 hydrologic parameters of IHA are explicitly formulated as constraints. The economic benefit is defined by hydropower production (HP), while the environmental flow benefit is expressed as an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by principal component analysis. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located in the upper Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, which uses IHA parameters as constraints, obtains the best combined economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of utilizable reservoir capacity reaches 0.8, both HP and EI reach acceptable values. Finally, to make the model easier to apply in everyday practice, a simplified operation rule curve is extracted.
Awareness-based game-theoretic space resource management
NASA Astrophysics Data System (ADS)
Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.
2009-05-01
Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations by (a) accommodating awareness modeling and updating and (b) collaborative search and tracking of space objects. The basic approach is described as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon and avoid risks. Fourth, if all explicitly specified requirements are satisfied and there are still sensing resources available, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
A new approach for describing glass transition kinetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasin, N. M.; Shchelkachev, M. G.; Vinokur, V. M.
2010-04-01
We use a functional integral technique generalizing the Keldysh diagram technique to describe glass transition kinetics. We show that the Keldysh functional approach takes the dynamical determinant arising in the glass dynamics into account exactly and generalizes the traditional approach based on using the supersymmetric dynamic generating functional method. In contrast to the supersymmetric method, this approach allows avoiding additional Grassmannian fields and tracking the violation of the fluctuation-dissipation theorem explicitly. We use this method to describe the dynamics of an Edwards-Anderson soft spin-glass-type model near the paramagnet-glass transition. We show that a Vogel-Fulcher-type dynamics arises in the fluctuation region only if the fluctuation-dissipation theorem is violated in the process of dynamical renormalization of the Keldysh action in the replica space.
Common Data Model for Neuroscience Data and Data Model Exchange
Gardner, Daniel; Knuth, Kevin H.; Abato, Michael; Erde, Steven M.; White, Thomas; DeBellis, Robert; Gardner, Esther P.
2001-01-01
Objective: Generalizing the data models underlying two prototype neurophysiology databases, the authors describe and propose the Common Data Model (CDM) as a framework for federating a broad spectrum of disparate neuroscience information resources. Design: Each component of the CDM derives from one of five superclasses—data, site, method, model, and reference—or from relations defined between them. A hierarchic attribute-value scheme for metadata enables interoperability with variable tree depth to serve specific intra- or broad inter-domain queries. To mediate data exchange between disparate systems, the authors propose a set of XML-derived schema for describing not only data sets but data models. These include biophysical description markup language (BDML), which mediates interoperability between data resources by providing a meta-description for the CDM. Results: The set of superclasses potentially spans data needs of contemporary neuroscience. Data elements abstracted from neurophysiology time series and histogram data represent data sets that differ in dimension and concordance. Site elements transcend neurons to describe subcellular compartments, circuits, regions, or slices; non-neuroanatomic sites include sequences to patients. Methods and models are highly domain-dependent. Conclusions: True federation of data resources requires explicit public description, in a metalanguage, of the contents, query methods, data formats, and data models of each data resource. Any data model that can be derived from the defined superclasses is potentially conformant and interoperability can be enabled by recognition of BDML-described compatibilities. Such metadescriptions can buffer technologic changes. PMID:11141510
Flexible explicit but rigid implicit learning in a visuomotor adaptation task
Bond, Krista M.
2015-01-01
There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690
Concepts and Methods of Explicit Marital Negotiation Training with the Marriage Contract Game.
ERIC Educational Resources Information Center
Blechman, Elaine A.; Rabin, Claire
1982-01-01
Describes the Marriage Contract Game, designed to help couples negotiate relationship and task problems in an explicit, rational manner. Discusses the game's conceptual ties to modes of behavioral family intervention and to the social psychology of bargaining. Concludes with an example of the game's application to a distressed couple. (Author/JAC)
Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.
Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J
2009-01-01
The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for use in finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show that the model accurately represents the impedance behavior of the skin system, with errors generally less than one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system with appropriate current exposure constraints, and exploring non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).
Hydrologic controls on equilibrium soil depths
NASA Astrophysics Data System (ADS)
Nicótina, L.; Tarboton, D. G.; Tesfa, T. K.; Rinaldo, A.
2011-04-01
This paper deals with modeling the mutual feedbacks between runoff production and geomorphological processes and attributes that lead to patterns of equilibrium soil depth. Our primary goal is an attempt to describe spatial patterns of soil depth resulting from long-term interactions between hydrologic forcings and soil production, erosion, and sediment transport processes under the framework of landscape dynamic equilibrium. Another goal is to set the premises for exploiting the role of soil depths in shaping the hydrologic response of a catchment. The relevance of the study stems from the massive improvement in hydrologic predictions for ungauged basins that would be achieved by using directly soil depths derived from geomorphic features remotely measured and objectively manipulated. Hydrological processes are here described by explicitly accounting for local soil depths and detailed catchment topography. Geomorphological processes are described by means of well-studied geomorphic transport laws. The modeling approach is applied to the semiarid Dry Creek Experimental Watershed, located near Boise, Idaho. Modeled soil depths are compared with field data obtained from an extensive survey of the catchment. Our results show the ability of the model to describe properly the mean soil depth and the broad features of the distribution of measured data. However, local comparisons show significant scatter whose origins are discussed.
Bayesian inference in camera trapping studies for a class of spatial capture-recapture models
Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba
2009-01-01
We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
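The abstract states only that capture probability is a function of distance between home-range centres and traps; the sketch below assumes the half-normal detection function commonly used in spatial capture-recapture work and simulates camera-trap capture histories with partially operational cameras, in the spirit of the study design described (all numbers are illustrative).

```python
# Illustrative sketch (not the authors' code): simulate spatial capture-recapture
# data with an assumed half-normal detection function, p = p0*exp(-d^2/(2*sigma^2)).
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 60, 120, 48                 # animals, camera traps, sampling nights
p0, sigma = 0.1, 0.6                  # baseline detection, spatial scale (km)

centers = rng.uniform(0, 10, size=(N, 2))     # latent home-range centres
traps = rng.uniform(0, 10, size=(J, 2))       # camera locations
# only ~30 of the 120 cameras operational on any given occasion
active = np.zeros((J, K), dtype=bool)
for k in range(K):
    active[rng.choice(J, size=30, replace=False), k] = True

d = np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=2)  # N x J
p = p0 * np.exp(-d**2 / (2 * sigma**2))                              # N x J

# capture history y[i, j, k]: Bernoulli(p) but only when trap j is active
y = rng.random((N, J, K)) < (p[:, :, None] * active[None, :, :])
print("animals detected at least once:", int(y.any(axis=(1, 2)).sum()))
```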
NASA Astrophysics Data System (ADS)
Chakraborty, Arup
No medical procedure has saved more lives than vaccination. But, today, some pathogens have evolved which have defied successful vaccination using the empirical paradigms pioneered by Pasteur and Jenner. One characteristic of many pathogens for which successful vaccines do not exist is that they present themselves in various guises. HIV is an extreme example because of its high mutability. This highly mutable virus can evade natural or vaccine induced immune responses, often by mutating at multiple sites linked by compensatory interactions. I will describe first how by bringing to bear ideas from statistical physics (e.g., maximum entropy models, Hopfield models, Feynman variational theory) together with in vitro experiments and clinical data, the fitness landscape of HIV is beginning to be defined with explicit account for collective mutational pathways. I will describe how this knowledge can be harnessed for vaccine design. Finally, I will describe how ideas at the intersection of evolutionary biology, immunology, and statistical physics can help guide the design of strategies that may be able to induce broadly neutralizing antibodies.
EPRL/FK asymptotics and the flatness problem
NASA Astrophysics Data System (ADS)
Oliveira, José Ricardo
2018-05-01
Spin foam models are an approach to quantum gravity based on the concept of sum over states, which aims to describe quantum spacetime dynamics in a way that its parent framework, loop quantum gravity, has not yet succeeded. Since these models’ relation to classical Einstein gravity is not explicit, an important test of their viability is the study of asymptotics—the classical theory should be obtained in a limit where quantum effects are negligible, taken to be the limit of large triangle areas in a triangulated manifold with boundary. In this paper we will briefly introduce the EPRL/FK spin foam model and known results about its asymptotics, proceeding then to describe a practical computation of spin foam and semiclassical geometric data for a simple triangulation with only one interior triangle. The results are used to comment on the ‘flatness problem’—a hypothesis raised by Bonzom (2009 Phys. Rev. D 80 064028) suggesting that EPRL/FK’s classical limit only describes flat geometries in vacuum.
Thickness-shear mode quartz crystal resonators in viscoelastic fluid media
NASA Astrophysics Data System (ADS)
Arnau, A.; Jiménez, Y.; Sogorb, T.
2000-10-01
An extended Butterworth-Van Dyke (EBVD) model to characterize a thickness-shear mode quartz crystal resonator in a semi-infinite viscoelastic medium is derived by means of analysis of the lumped elements model described by Cernosek et al. [R. W. Cernosek, S. J. Martin, A. R. Hillman, and H. L. Bandey, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 45, 1399 (1998)]. The EBVD model parameters are related to the viscoelastic properties of the medium. A capacitance added to the motional branch of the EBVD model has to be included when the elastic properties of the fluid are considered. From this model, an explicit expression for the frequency shift of a quartz crystal sensor in viscoelastic media is obtained. By combining the expressions for shifts in the motional series resonant frequency and in the motional resistance, a simple equation that relates only one unknown (the loss factor of the fluid) to those measurable quantities, and two simple explicit expressions for determining the viscoelastic properties of semi-infinite fluid media have been derived. The proposed expression for the parameter Δf/ΔR is compared with the corresponding ratio obtained with data computed from the complete admittance model. Relative errors below 4.5%, 3%, and 1.2% (for the ratios of the load surface mechanical impedance to the quartz shear characteristic impedance of 0.3, 0.25, and 0.1, respectively), are obtained in the range of the cases analyzed. Experimental data from the literature are used to validate the model.
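For readers unfamiliar with the equivalent-circuit language used here, the sketch below evaluates the admittance of a standard Butterworth-Van Dyke circuit (static capacitance in parallel with a motional R-L-C branch) and adds an extra series capacitance in the motional branch as a generic stand-in for the elastic-fluid contribution discussed above; the component values are placeholders, not the paper's.

```python
# Minimal sketch of a Butterworth-Van Dyke-type equivalent circuit admittance:
# C0 in parallel with a motional R-L-C branch, plus an optional extra series
# capacitance representing a viscoelastic load (all values are placeholders).
import numpy as np

def bvd_admittance(freq_hz, C0, Rm, Lm, Cm, C_load=None):
    w = 2 * np.pi * freq_hz
    Zm = Rm + 1j * w * Lm + 1 / (1j * w * Cm)   # motional branch impedance
    if C_load is not None:                      # elastic-fluid contribution
        Zm = Zm + 1 / (1j * w * C_load)
    return 1j * w * C0 + 1 / Zm

f = np.linspace(9.8e6, 10.5e6, 2001)            # sweep around ~10 MHz
Y = bvd_admittance(f, C0=5e-12, Rm=20.0, Lm=7.5e-3, Cm=33e-15, C_load=1e-12)
f_res = f[np.argmax(np.abs(Y))]                 # near the motional series resonance
print("approximate resonance frequency: %.4f MHz" % (f_res / 1e6))
```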
NASA Astrophysics Data System (ADS)
Finger, Flavio; Knox, Allyn; Bertuzzo, Enrico; Mari, Lorenzo; Bompangue, Didier; Gatto, Marino; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2014-07-01
Mathematical models of cholera dynamics can not only help in identifying environmental drivers and processes that influence disease transmission, but may also represent valuable tools for the prediction of the epidemiological patterns in time and space as well as for the allocation of health care resources. Cholera outbreaks have been reported in the Democratic Republic of the Congo since the 1970s. They have been ravaging the shore of Lake Kivu in the east of the country repeatedly during the last decades. Here we employ a spatially explicit, inhomogeneous Markov chain model to describe cholera incidence in eight health zones on the shore of the lake. Remotely sensed data sets of chlorophyll a concentration in the lake, precipitation and indices of global climate anomalies are used as environmental drivers in addition to baseline seasonality. The effect of human mobility is also modelled mechanistically. We test several models on a multiyear data set of reported cholera cases. The best fourteen models, accounting for different environmental drivers, and selected using the Akaike information criterion, are formally compared via proper cross validation. Among these, the one accounting for seasonality, El Niño Southern Oscillation, precipitation and human mobility outperforms the others in cross validation. Some drivers (such as human mobility and rainfall) are retained only by a few models, possibly indicating that the mechanisms through which they influence cholera dynamics in the area will have to be investigated further.
Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary
2015-01-01
Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic, and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis. PMID:25806784
Baker, Nathan A.; McCammon, J. Andrew
2008-01-01
The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kbTec−1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217
NASA Astrophysics Data System (ADS)
Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew
2007-10-01
The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kbTec-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
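The paper's conceptual rainfall-runoff models are not reproduced here, but the scheme families it compares can be illustrated on a toy single-storage equation dS/dt = P - k S^a: a fixed-step explicit Euler method, a fixed-step implicit Euler method solved by Newton iteration, and an adaptive explicit Heun method with a simple embedded error estimate. All parameter values below are assumptions for illustration only.

```python
# Toy illustration (not the paper's hydrological models): integrate a single
# nonlinear storage dS/dt = P - k*S**a with three scheme families discussed in
# the paper: fixed-step explicit Euler, fixed-step implicit Euler, adaptive Heun.
import numpy as np

P, k, a = 2.0, 0.3, 1.5          # forcing and reservoir parameters (arbitrary)
f = lambda S: P - k * S**a       # right-hand side

def explicit_euler(S0, dt, T):
    S, t = S0, 0.0
    while t < T:
        S, t = S + dt * f(S), t + dt
    return S

def implicit_euler(S0, dt, T):
    S, t = S0, 0.0
    while t < T:
        x = S                                   # Newton iteration for S_new
        for _ in range(20):
            g = x - S - dt * f(x)
            dg = 1.0 + dt * k * a * x**(a - 1.0)
            x -= g / dg
        S, t = x, t + dt
    return S

def adaptive_heun(S0, T, tol=1e-6, dt=1.0):
    S, t = S0, 0.0
    while t < T:
        dt = min(dt, T - t)
        k1 = f(S)
        k2 = f(S + dt * k1)
        S_heun = S + 0.5 * dt * (k1 + k2)       # 2nd-order estimate
        err = abs(S_heun - (S + dt * k1))       # explicit Euler as embedded 1st order
        if err <= tol:                          # accept the step
            S, t = S_heun, t + dt
        dt *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16))**0.5))
    return S

print(explicit_euler(1.0, 1.0, 100.0), implicit_euler(1.0, 1.0, 100.0),
      adaptive_heun(1.0, 100.0))
```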
Skinner, Harvey A; Maley, Oonagh; Norman, Cameron D
2006-10-01
Health education and health promotion have a tradition of using information and communication technology (ICT). In recent years, the rapid growth of the Internet has created innovative opportunities for Web-based health education and behavior change applications, termed eHealth promotion. However, many eHealth promotion applications are developed without an explicit model to guide the design, evaluation, and ongoing improvement of the program. The spiral technology action research (STAR) model was developed to address this need. The model comprises five cycles (listen, plan, do, study, act) that weave together technological development, community involvement, and continuous improvement. The model is illustrated by a case study describing the development of the Smoking Zine (www.SmokingZine.org), a youth smoking prevention and cessation Web site.
Habitat suitability index models: Black crappie
Edwards, Elizabeth A.; Krieger, Douglas A.; Bacteller, Mary; Maughan, O. Eugene
1982-01-01
Characteristics and habitat requirements of the black crappie (Pomoxis nigromaculatus) are described in a review of Habitat Suitability Index models. This is one in a series of publications to provide information on the habitat requirements of selected fish and wildlife species. Numerous literature sources have been consulted in an effort to consolidate scientific data on species-habitat relationships. These data have subsequently been synthesized into explicit Habitat Suitability Index (HSI) models. The models are based on suitability indices indicating habitat preferences. Indices have been formulated for variables found to affect the life cycle and survival of each species. Habitat Suitability Index (HSI) models are designed to provide information for use in impact assessment and habitat management activities. The HSI technique is a corollary to the U.S. Fish and Wildlife Service's Habitat Evaluation Procedures.
Empirical methods for modeling landscape change, ecosystem services, and biodiversity
David Lewis; Ralph Alig
2009-01-01
The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...
Fourier-Legendre spectral methods for incompressible channel flow
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1984-01-01
An iterative collocation technique is described for the implicit treatment of viscosity in three-dimensional incompressible wall-bounded shear flow. The viscosity can vary temporally and in the vertical direction. Channel flow is modeled with a Fourier-Legendre approximation and the mean streamwise advection is treated implicitly. Explicit terms are handled with an Adams-Bashforth method to increase the allowable time-step for calculation of the implicit terms. The algorithm is applied to low amplitude unstable waves in a plane Poiseuille flow at an Re of 7500. Comparisons are made between results using the Legendre method and results using Chebyshev polynomials. Comparable accuracy is obtained for the perturbation kinetic energy predicted using both discretizations.
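The implicit/explicit splitting mentioned above (stiff linear terms implicit, remaining terms advanced with Adams-Bashforth) can be illustrated on a much simpler problem than the paper's channel flow. The sketch below applies it to 1D viscous Burgers with Fourier collocation: diffusion is treated with backward Euler in spectral space, the nonlinear advection with second-order Adams-Bashforth; this is a generic IMEX sketch, not the Fourier-Legendre algorithm of the paper, and no dealiasing is applied.

```python
# Minimal IMEX sketch in the same spirit as the paper's splitting (not the NTRS
# algorithm itself): 1D viscous Burgers, Fourier collocation in x, implicit
# (backward Euler) diffusion and explicit Adams-Bashforth-2 advection.
import numpy as np

N, nu, dt, nsteps = 128, 0.1, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N) * 1j                  # i*k wavenumbers

def nonlinear(u_hat):
    u = np.fft.ifft(u_hat).real
    ux = np.fft.ifft(k * u_hat).real
    return np.fft.fft(-u * ux)                          # -u*u_x in spectral space

u_hat = np.fft.fft(np.sin(x))
n_old = nonlinear(u_hat)
for step in range(nsteps):
    n_new = nonlinear(u_hat)
    if step == 0:
        expl = n_new                                     # Euler start-up step
    else:
        expl = 1.5 * n_new - 0.5 * n_old                 # Adams-Bashforth 2
    # implicit diffusion: (1 - dt*nu*k^2) u^{n+1} = u^n + dt*expl  (k^2 < 0 here)
    u_hat = (u_hat + dt * expl) / (1.0 - dt * nu * k**2)
    n_old = n_new

print("max |u| after integration:", np.abs(np.fft.ifft(u_hat).real).max())
```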
A Verification System for Distributed Objects with Asynchronous Method Calls
NASA Astrophysics Data System (ADS)
Ahrendt, Wolfgang; Dylla, Maximilian
We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY characteristic concepts, like dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus heavily operates on communication histories which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.
NASA Astrophysics Data System (ADS)
Romanova, V.; Balokhonov, R.; Batukhtina, E.; Zinovieva, O.; Bezmozgiy, I.
2015-10-01
The results of a numerical analysis of the mesoscale surface roughening in a polycrystalline aluminum alloy exposed to uniaxial tension are presented. A 3D finite-element model taking an explicit account of grain structure is developed. The model describes a constitutive behavior of the material on the grain scale, using anisotropic elasticity and crystal plasticity theory. The effects of the grain shape and texture on the deformation-induced roughening are investigated. Calculation results have shown that surface roughness is much higher and develops at the highest rate in a polycrystal with equiaxed grains where both the micro- and mesoscale surface displacements are observed.
Integrable models of quantum optics
NASA Astrophysics Data System (ADS)
Yudson, Vladimir; Makarov, Aleksander
2017-10-01
We give an overview of exactly solvable many-body models of quantum optics. Among them is a system of two-level atoms which interact with photons propagating in a one-dimensional (1D) chiral waveguide; exact eigenstates of this system can be explicitly constructed. This approach is used also for a system of closely located atoms in the usual (non-chiral) waveguide or in 3D space. Moreover, it is shown that for an arbitrary atomic system with a cascade spontaneous radiative decay, the fluorescence spectrum can be described by an exact analytic expression which accounts for interference of emitted photons. Open questions related with broken integrability are discussed.
A Galilean Invariant Explicit Algebraic Reynolds Stress Model for Curved Flows
NASA Technical Reports Server (NTRS)
Girimaji, Sharath
1996-01-01
A Galilean invariant weak-equilibrium hypothesis that is sensitive to streamline curvature is proposed. The hypothesis leads to an algebraic Reynolds stress model for curved flows that is fully explicit and self-consistent. The model is tested in curved homogeneous shear flow: the agreement is excellent with the Reynolds stress closure model and adequate with available experimental data.
A functional-dynamic reflection on participatory processes in modeling projects.
Seidl, Roman
2015-12-01
The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it is not well investigated if and how explicitly these functions and the dynamics of a participatory process are reflected by modeling projects in particular. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation; most often, more than one per project can be identified, along with the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling covering diverse approaches and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and perfectly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus quality management.
Need for speed: An optimized gridding approach for spatially explicit disease simulations.
Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom
2018-04-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
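One way to read the cell-level filtering described above is as a two-stage thinning scheme: bound the per-node transmission probability within each grid cell using the shortest possible distance from the infectious node to that cell, draw a binomial number of candidate nodes with that bound, and evaluate the exact pairwise probability only for the candidates. The sketch below implements that reading with an assumed spatial kernel and illustrative parameters; it paraphrases the idea and is not the authors' code.

```python
# Illustrative sketch of cell-level filtering for a spatial SEIR-type kernel model:
# per-cell upper bound + binomial draw + thinning, so most susceptible nodes are
# never inspected individually (all other nodes treated as susceptible here).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, cell_size, beta = 20000, 10.0, 0.002
xy = rng.uniform(0, 100, size=(n_nodes, 2))               # farm coordinates (km)
kernel = lambda d: 1.0 / (1.0 + (d / 5.0) ** 2)           # assumed distance kernel

# assign nodes to grid cells
cell_id = (xy // cell_size).astype(int)
cells = {}
for i, c in enumerate(map(tuple, cell_id)):
    cells.setdefault(c, []).append(i)

def transmissions_from(source_idx):
    """Indices of nodes infected by one infectious node in one time step."""
    infected = []
    sx, sy = xy[source_idx]
    for (cx, cy), members in cells.items():
        # shortest distance from the source to the cell's bounding box
        dx = max(cx * cell_size - sx, 0.0, sx - (cx + 1) * cell_size)
        dy = max(cy * cell_size - sy, 0.0, sy - (cy + 1) * cell_size)
        p_upper = 1.0 - np.exp(-beta * kernel(np.hypot(dx, dy)))
        n_candidates = rng.binomial(len(members), p_upper)
        if n_candidates == 0:
            continue                                       # whole cell filtered out
        for j in rng.choice(members, size=n_candidates, replace=False):
            if j == source_idx:
                continue
            d = np.linalg.norm(xy[j] - xy[source_idx])
            p_exact = 1.0 - np.exp(-beta * kernel(d))
            if rng.random() < p_exact / p_upper:           # thinning: accept with p/p_upper
                infected.append(j)
    return infected

print("new infections from node 0:", len(transmissions_from(0)))
```

Because p_exact never exceeds the cell-level bound, the thinning step reproduces the exact pairwise infection probabilities while letting distant, sparsely affected cells be skipped after a single binomial draw.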
Need for speed: An optimized gridding approach for spatially explicit disease simulations
Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom
2018-01-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power. PMID:29624574
Observations and Modeling of the Green Ocean Amazon 2014/15. CHUVA Field Campaign Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machado, L. A. T.
2016-03-01
The physical processes inside clouds are one of the most unknown components of weather and climate systems. A description of cloud processes through the use of standard meteorological parameters in numerical models has to be strongly improved to accurately describe the characteristics of hydrometeors, latent heating profiles, radiative balance, air entrainment, and cloud updrafts and downdrafts. Numerical models have been improved to run at higher spatial resolutions where it is necessary to explicitly describe these cloud processes. For instance, to analyze the effects of global warming in a given region it is necessary to perform simulations taking into account all of the cloud processes described above. Another important application that requires this knowledge is satellite precipitation estimation. The analysis will be performed focusing on the microphysical evolution and cloud life cycle, different precipitation estimation algorithms, the development of thunderstorms and lightning formation, processes in the boundary layer, and cloud microphysical modeling. This project intends to extend the knowledge of these cloud processes to reduce the uncertainties in precipitation estimation, mainly from warm clouds, and, consequently, improve knowledge of the water and energy budget and cloud microphysics.
In-flight simulation investigation of rotorcraft pitch-roll cross coupling
NASA Technical Reports Server (NTRS)
Watson, Douglas C.; Hindson, William S.
1988-01-01
An in-flight simulation experiment investigating the handling qualities effects of the pitch-roll cross-coupling characteristic of single-main-rotor helicopters is described. The experiment was conducted using the NASA/Army CH-47B variable stability helicopter with an explicit-model-following control system. The research is an extension of an earlier ground-based investigation conducted on the NASA Ames Research Center's Vertical Motion Simulator. The model developed for the experiment is for an unaugmented helicopter with cross-coupling implemented using physical rotor parameters. The details of converting the model from the simulation to use in flight are described. A frequency-domain comparison of the model and actual aircraft responses showing the fidelity of the in-flight simulation is described. The evaluation task was representative of nap-of-the-Earth maneuvering flight. The results indicate that task demands are important in determining allowable levels of coupling. In addition, on-axis damping characteristics influence the frequency-dependent characteristics of coupling and affect the handling qualities. Pilot technique, in terms of learned control crossfeeds, can improve performance and lower workload for particular types of coupling. The results obtained in flight corroborated the simulation results.
Similarity in form and function of the hippocampus in rodents, monkeys, and humans.
Clark, Robert E; Squire, Larry R
2013-06-18
We begin by describing an historical scientific debate in which the fundamental idea that species are related by evolutionary descent was challenged. The challenge was based on supposed neuroanatomical differences between humans and other primates with respect to a structure known then as the hippocampus minor. The debate took place in the early 1860s, just after the publication of Darwin's famous book. We then recount the difficult road that was traveled to develop an animal model of human memory impairment, a matter that also turned on questions about similarities and differences between humans and other primates. We then describe how the insight that there are multiple memory systems helped to secure the animal model and how the animal model was ultimately used to identify the neuroanatomy of long-term declarative memory (sometimes termed explicit memory). Finally, we describe a challenge to the animal model and to cross-species comparisons by considering the case of the concurrent discrimination task, drawing on findings from humans and monkeys. We suggest that analysis of such cases, based on the understanding that there are multiple memory systems with different properties, has served to emphasize the similarities in memory function across mammalian species.
High Performance Programming Using Explicit Shared Memory Model on Cray T3D1
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and explicit shared memory model) are available to the users. However, at this time data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD on the CM-5 is also about 4 to 5 times less than using data parallel methods. The issues involved (such as barriers, synchronization, invalidating data cache, aligning data cache, etc.) while programming in the explicit shared memory model are discussed. Comparative performance of NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.
An explicit plate kinematic model for the orogeny in the southern Uralides
NASA Astrophysics Data System (ADS)
Görz, Ines; Hielscher, Peggy
2010-10-01
The Palaeozoic Uralides formed in a three-plate constellation between Europe, Siberia and Kazakhstan-Tarim. Starting from the first plate tectonic concepts, it was controversially discussed whether the Uralide orogeny was the result of a relative plate motion between Europe and Siberia or between Europe and Kazakhstan. In this study, we use a new approach to address this problem. We perform a structural analysis on the sphere, reconstruct the positions of the Euler poles of the relative plate rotation Siberia-Europe and Tarim-Europe and describe Uralide structures by their relation to small circles about the two Euler poles. Using this method, changes in the strike of tectonic elements that are caused by the spherical geometry of the Earth's surface are eliminated and structures that are compatible with one of the relative plate motions can be identified. We show that only two Euler poles controlled the Palaeozoic tectonic evolution in the whole West Siberian region, but that they acted diachronously in different regions. We provide an explicit model describing the tectonism in West Siberia by an Euler pole, a sense of rotation and an approximate rotation angle. In the southern Uralides, Devonian structures resulted from a plate rotation of Siberia with respect to Europe, while the Permian structures were caused by a relative plate motion of Kazakhstan-Tarim with respect to Europe. The tectonic pause in the Carboniferous period correlates with a reorganization of the plate kinematics.
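The spherical machinery behind such reconstructions, rotating points about an Euler pole and comparing structural trends with small circles about that pole, can be sketched with Rodrigues' rotation formula. The pole position, test point, and rotation angles below are placeholders, not the reconstructed Siberia-Europe or Tarim-Europe poles.

```python
# Generic sketch: rotate a point on the unit sphere about an Euler pole by a
# finite angle (Rodrigues' formula). Points at a fixed angular distance from
# the pole trace the small circles against which structural trends are compared.
import numpy as np

def unit_vector(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate_about_pole(point, pole, angle_deg):
    """Rodrigues' formula: rotate 'point' about the axis 'pole' by angle_deg."""
    a = np.radians(angle_deg)
    return (point * np.cos(a)
            + np.cross(pole, point) * np.sin(a)
            + pole * np.dot(pole, point) * (1 - np.cos(a)))

def to_lat_lon(v):
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

pole = unit_vector(55.0, 90.0)            # hypothetical Euler pole
p = unit_vector(52.0, 58.0)               # a point in the southern Urals region
for angle in (10.0, 20.0, 30.0):          # successive finite rotations
    print(angle, to_lat_lon(rotate_about_pole(p, pole, angle)))
```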
Batch-mode Reinforcement Learning for improved hydro-environmental systems management
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.
2010-12-01
Despite the great progresses made in the last decades, the optimal management of hydro-environmental systems still remains a very active and challenging research area. The combination of multiple, often conflicting interests, high non-linearities of the physical processes and the management objectives, strong uncertainties in the inputs, and high dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) Computational requirement grows exponentially with state and control dimension (Bellman's curse of dimensionality), so that SDP can not be used with water systems where the state vector includes more than few (2-3) units. (ii) An explicit model of each system's component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included into the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with the associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear distributed process based models, making them suitable for being included into optimization schemes. Results from real world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
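To make the batch-mode idea concrete, the sketch below runs fitted Q-iteration, the standard batch-mode RL algorithm, on a toy one-reservoir release problem using a tree-based regressor as the value-function approximator. The dynamics, cost function, action set, and regressor choice are all illustrative assumptions, not the systems studied by the authors.

```python
# Minimal fitted Q-iteration sketch on a toy one-reservoir problem; everything
# below (dynamics, cost, regressor) is an assumption used only for illustration.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
actions = np.linspace(0.0, 1.0, 5)          # discretised release decisions

def step(s, a):
    """Toy storage dynamics and cost for one decision step."""
    inflow = rng.uniform(0.0, 0.3)
    s_next = np.clip(s + inflow - a * 0.4, 0.0, 1.0)
    cost = (s_next - 0.5) ** 2 + 0.1 * a    # deviation from target + release cost
    return cost, s_next

# batch of historical/simulated tuples (s, a, cost, s_next)
S = rng.uniform(0, 1, 5000)
A = rng.choice(actions, 5000)
C, S1 = zip(*(step(s, a) for s, a in zip(S, A)))
C, S1 = np.array(C), np.array(S1)

gamma, Q = 0.95, None
for _ in range(30):                          # fitted Q-iteration sweeps
    if Q is None:
        target = C
    else:
        q_next = np.column_stack(
            [Q.predict(np.column_stack([S1, np.full_like(S1, a)])) for a in actions])
        target = C + gamma * q_next.min(axis=1)
    Q = ExtraTreesRegressor(n_estimators=50, random_state=0)
    Q.fit(np.column_stack([S, A]), target)

greedy = lambda s: actions[np.argmin([Q.predict([[s, a]])[0] for a in actions])]
print("greedy release at storage 0.8:", greedy(0.8))
```

The key point mirrored from the abstract is that only observed or simulated tuples are needed: no explicit dynamic model of each information source enters the learning step, which is what relaxes the curse of modelling.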
Identification of walking human model using agent-based modelling
NASA Astrophysics Data System (ADS)
Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir
2018-03-01
The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with mean μ = 2.85 Hz and standard deviation σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to the previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
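The reported distributions are enough to sample a population of walker SDOF models. In the sketch below the natural frequency and damping ratio come from the distributions quoted in the abstract, while the 75 kg modal mass and the amplification calculation are assumptions added only to complete the example.

```python
# Sketch: sample walking-human SDOF parameters using the distributions reported
# above (fn ~ N(2.85, 0.34) Hz, zeta ~ N(0.295, 0.047)); the 75 kg modal mass is
# NOT from the abstract and is only a placeholder to build the full model.
import numpy as np

rng = np.random.default_rng(42)

def sample_walker(mass_kg=75.0):
    fn = rng.normal(2.85, 0.34)          # natural frequency [Hz]
    zeta = rng.normal(0.295, 0.047)      # damping ratio [-]
    wn = 2 * np.pi * fn
    k = mass_kg * wn**2                  # stiffness [N/m]
    c = 2 * zeta * mass_kg * wn          # damping [N s/m]
    return {"m": mass_kg, "k": k, "c": c, "fn": fn, "zeta": zeta}

def dynamic_amplification(walker, f_hz):
    """Steady-state amplification of a walker's SDOF at excitation frequency f_hz."""
    r = f_hz / walker["fn"]
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * walker["zeta"] * r) ** 2)

crowd = [sample_walker() for _ in range(50)]
print("mean sampled fn: %.2f Hz" % np.mean([w["fn"] for w in crowd]))
print("amplification of walker 0 at a 2.0 Hz floor mode: %.2f"
      % dynamic_amplification(crowd[0], 2.0))
```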
The importance of explicitly mapping instructional analogies in science education
NASA Astrophysics Data System (ADS)
Asay, Loretta Johnson
Analogies are ubiquitous during instruction in science classrooms, yet research about the effectiveness of using analogies has produced mixed results. An aspect seldom studied is a model of instruction when using analogies. The few existing models for instruction with analogies have not often been examined quantitatively. The Teaching With Analogies (TWA) model (Glynn, 1991) is one of the models frequently cited in the variety of research about analogies. The TWA model outlines steps for instruction, including the step of explicitly mapping the features of the source to the target. An experimental study was conducted to examine the effects of explicitly mapping the features of the source and target in an analogy during computer-based instruction about electrical circuits. Explicit mapping was compared to no mapping and to a control with no analogy. Participants were ninth- and tenth-grade biology students who were each randomly assigned to one of three conditions (no analogy module, analogy module, or explicitly mapped analogy module) for computer-based instruction. Subjects took a pre-test before the instruction, which was used to assign them to a level of previous knowledge about electrical circuits for analysis of any differential effects. After the instruction modules, students took a post-test about electrical circuits. Two weeks later, they took a delayed post-test. No advantage was found for explicitly mapping the analogy. Learning patterns were the same, regardless of the type of instruction. Those who knew the least about electrical circuits, based on the pre-test, made the most gains. After the two-week delay, this group maintained the largest amount of their gain. Implications exist for science education classrooms, as analogy use should be based on research about effective practices. Further studies are suggested to foster the building of research-based models for classroom instruction with analogies.
Jeff Jenness; J. Judson Wynne
2005-01-01
In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics have been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
Moderators of the Relationship between Implicit and Explicit Evaluation
Nosek, Brian A.
2005-01-01
Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship – self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292
Perez-Sanchez, German; Chien, Szu -Chia; Gomes, Jose R. B.; ...
2016-04-04
A detailed theoretical understanding of the synthesis mechanism of periodic mesoporous silica has not yet been achieved. We present results of a multiscale simulation strategy that, for the first time, describes the molecular-level processes behind the formation of silica/surfactant mesophases in the synthesis of templated MCM-41 materials. The parameters of a new coarse-grained explicit-solvent model for the synthesis solution are calibrated with reference to a detailed atomistic model, which itself is based on quantum mechanical calculations. This approach allows us to reach the necessary time and length scales to explicitly simulate the spontaneous formation of mesophase structures while maintaining a level of realism that allows for direct comparison with experimental systems. Our model shows that silica oligomers are a necessary component in the formation of hexagonal liquid crystals from low-concentration surfactant solutions. Because they are multiply charged, silica oligomers are able to bridge adjacent micelles, thus allowing them to overcome their mutual repulsion and form aggregates. This leads the system to phase separate into a dilute solution and a silica/surfactant-rich mesophase, which leads to MCM-41 formation. Before extensive silica condensation takes place, the mesophase structure can be controlled by manipulation of the synthesis conditions. Our modeling results are in close agreement with experimental observations and strongly support a cooperative mechanism for synthesis of this class of materials. Furthermore, this work paves the way for tailored design of nanoporous materials using computational models.
Gene-centric approach to integrating environmental genomics and biogeochemical models.
Reed, Daniel C; Algar, Christopher K; Huber, Julie A; Dick, Gregory J
2014-02-04
Rapid advances in molecular microbial ecology have yielded an unprecedented amount of data about the evolutionary relationships and functional traits of microbial communities that regulate global geochemical cycles. Biogeochemical models, however, are trailing in the wake of the environmental genomics revolution, and such models rarely incorporate explicit representations of bacteria and archaea, nor are they compatible with nucleic acid or protein sequence data. Here, we present a functional gene-based framework for describing microbial communities in biogeochemical models by incorporating genomics data to provide predictions that are readily testable. To demonstrate the approach in practice, nitrogen cycling in the Arabian Sea oxygen minimum zone (OMZ) was modeled to examine key questions about cryptic sulfur cycling and dinitrogen production pathways in OMZs. Simulations support previous assertions that denitrification dominates over anammox in the central Arabian Sea, which has important implications for the loss of fixed nitrogen from the oceans. Furthermore, cryptic sulfur cycling was shown to attenuate the secondary nitrite maximum often observed in OMZs owing to changes in the composition of the chemolithoautotrophic community and dominant metabolic pathways. Results underscore the need to explicitly integrate microbes into biogeochemical models rather than just the metabolisms they mediate. By directly linking geochemical dynamics to the genetic composition of microbial communities, the method provides a framework for achieving mechanistic insights into patterns and biogeochemical consequences of marine microbes. Such an approach is critical for informing our understanding of the key role microbes play in modulating Earth's biogeochemistry.
Towards a physically-based multi-scale ecohydrological simulator for semi-arid regions
NASA Astrophysics Data System (ADS)
Caviedes-Voullième, Daniel; Josefik, Zoltan; Hinz, Christoph
2017-04-01
The use of numerical models as tools for describing and understanding complex ecohydrological systems has made it possible to test hypotheses and to propose fundamental, process-based explanations of the system behaviour as a whole as well as of its internal dynamics. Reaction-diffusion equations have been used to describe and generate organized patterns such as bands, spots, and labyrinths using simple feedback mechanisms and boundary conditions. Alternatively, pattern-matching cellular automaton models have been used to generate vegetation self-organization in arid and semi-arid regions, also using simple descriptions of surface hydrological processes. A key question is: How much physical realism is needed in order to adequately capture the pattern formation processes in semi-arid regions while reliably representing the water balance dynamics at the relevant time scales? In fact, redistribution of water by surface runoff at the hillslope scale occurs at a temporal resolution of minutes, while vegetation development requires much lower temporal resolution and longer time spans. This generates a fundamental spatio-temporal multi-scale problem to be solved, for which high resolution rainfall and surface topography are required. Accordingly, the objective of this contribution is to provide proof-of-concept that governing processes can be described numerically at those multiple scales. The requirements for simulating ecohydrological processes and pattern formation with increased physical realism are, amongst others:
i. high resolution rainfall that adequately captures the triggers of growth, as vegetation dynamics of arid regions respond as pulsed systems;
ii. complex, natural topography in order to accurately model drainage patterns, as surface water redistribution is highly sensitive to topographic features;
iii. microtopography and hydraulic roughness, as small-scale variations impact large-scale hillslope behaviour;
iv. moisture-dependent infiltration, as the temporal dynamics of infiltration affect water storage under vegetation and in bare soil.
Despite the volume of research in this field, fundamental limitations still exist in the models regarding the aforementioned issues. Topography and hydrodynamics have been strongly simplified. Infiltration has been modelled as dependent on depth but independent of soil moisture. Temporal rainfall variability has only been addressed for seasonal rain. Spatial heterogeneity of the topography, as well as of roughness and infiltration properties, has not been fully and explicitly represented. We hypothesize that physical processes must be robustly modelled and the drivers of complexity must be present with as much resolution as possible in order to provide the necessary realism to improve transient simulations, perhaps leading the way to virtual laboratories and, arguably, predictive tools. This work provides a first approach towards a model with explicit hydrological processes represented by physically-based hydrodynamic models, coupled with well-accepted vegetation models. The model aims to enable new possibilities relating to spatiotemporal variability, arbitrary topography and representation of spatial heterogeneity, including sub-daily (in fact, arbitrary) temporal variability of rain as the main forcing of the model, explicit representation of infiltration processes, and various feedback mechanisms between the hydrodynamics and the vegetation.
Preliminary testing strongly suggests that the model is viable, has the potential of producing new information of internal dynamics of the system, and allows to successfully aggregate many of the sources of complexity. Initial benchmarking of the model also reveals strengths to be exploited, thus providing an interesting research outlook, as well as weaknesses to be addressed in the immediate future.
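For context on the reaction-diffusion class of vegetation models mentioned at the start of this abstract, the sketch below integrates a one-dimensional Klausmeier-type water-biomass system with explicit finite differences (upwind advection of water, diffusion of biomass). The parameter values are illustrative textbook-style choices, not taken from this work, and whether bands actually form depends on the chosen regime.

```python
# One-dimensional Klausmeier-type water-biomass sketch on a periodic domain:
# dw/dt = a - w - w*n^2 + v*dw/dx,   dn/dt = w*n^2 - m*n + D*d2n/dx2
# Explicit finite differences; parameters are illustrative only.
import numpy as np

N, dx, dt, nsteps = 200, 1.0, 0.01, 40000
a, m, v, D = 2.0, 0.45, 10.0, 1.0          # rainfall, mortality, advection, diffusion

rng = np.random.default_rng(3)
w = np.full(N, a)                           # water
n = 1.0 + 0.1 * rng.standard_normal(N)      # biomass with small initial noise

for _ in range(nsteps):
    uptake = w * n**2
    adv = v * (np.roll(w, -1) - w) / dx     # upwind difference for v*dw/dx
    lap_n = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    w = np.clip(w + dt * (a - w - uptake + adv), 0.0, None)
    n = np.clip(n + dt * (uptake - m * n + D * lap_n), 0.0, None)

print("biomass range after integration: %.2f .. %.2f" % (n.min(), n.max()))
```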
Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Intelligence
NASA Astrophysics Data System (ADS)
Pissanetzky, Sergio; Lanzalaco, Felix
2013-12-01
Awareness of the possible existence of a yet-unknown principle of Physics that explains cognition and intelligence does exist in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently discovered theory of Physics known as Causal Mathematical Logic (CML) that links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadway to understanding more complex biophysical signals, and to explain the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where it is being processed. The substrate can be the human brain, a part of it, a worm's brain, a motor protein that self-locomotes in response to its environment, a computer. Here, we propose to extend the causal theory to systems in Neuroscience, because of its ability to model complex systems without heuristic approximations, and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or "signals"), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach is aiming for a universal and predictive language for Neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence. Experiments are focused on a black-box as one of the devices described above of which both the input and the output are precisely known, but not the internal implementation. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Thermodynamic Modeling of Gas Transport in Glassy Polymeric Membranes.
Minelli, Matteo; Sarti, Giulio Cesare
2017-08-19
Solubility and permeability of gases in glassy polymers have been considered with the aim of illustrating the applicability of thermodynamically-based models for their description and prediction. The solubility isotherms are described by using the nonequilibrium lattice fluid (NELF) model, already known to be appropriate for nonequilibrium glassy polymers, while the permeability isotherms are described through a general transport model in which diffusivity is the product of a purely kinetic factor, the mobility coefficient, and a thermodynamic factor. The latter is calculated from the NELF model and mobility is considered concentration-dependent through an exponential relationship containing two parameters only. The models are tested explicitly considering solubility and permeability data of various penetrants in three glassy polymers, PSf, PPh and 6FDA-6FpDA, selected as the reference for different behaviors. It is shown that the models are able to calculate the different behaviors observed, and in particular the permeability dependence on upstream pressure, both when it is decreasing as well as when it is increasing, with no need to invoke the onset of additional plasticization phenomena. The correlations found between polymer and penetrant properties with the two parameters of the mobility coefficient also lead to the predictive ability of the transport model.
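The transport decomposition described above (diffusivity as mobility times a thermodynamic factor, with a two-parameter exponential mobility) can be sketched numerically as follows. The NELF calculation itself is not reproduced; a dual-mode-like sorption isotherm and the thermodynamic factor derived from it are used only as stand-ins, and all parameter values and units are illustrative assumptions.

```python
# Sketch of permeability from a concentration-dependent diffusivity:
# D(c) = L0*exp(beta*c) * alpha(c), with alpha = dln(p)/dln(c) computed from a
# stand-in sorption isotherm (NOT the NELF model of the paper).
import numpy as np
from scipy.integrate import quad

L0, beta = 1.0e-8, 0.08        # mobility parameters (illustrative values/units)
kD, CH, b = 0.7, 30.0, 0.25    # stand-in sorption isotherm parameters

def concentration(p):                       # sorbed concentration c(p)
    return kD * p + CH * b * p / (1 + b * p)

def thermodynamic_factor(p):                # alpha = d ln(p) / d ln(c)
    dc_dp = kD + CH * b / (1 + b * p) ** 2
    return concentration(p) / (p * dc_dp)

def diffusivity(p):
    return L0 * np.exp(beta * concentration(p)) * thermodynamic_factor(p)

def permeability(p_up):
    # steady state, negligible downstream pressure:
    # permeability = (1/p_up) * integral of D(c) dc from 0 to c(p_up)
    integrand = lambda p: diffusivity(p) * (kD + CH * b / (1 + b * p) ** 2)
    return quad(integrand, 1e-6, p_up)[0] / p_up

for p in (1.0, 5.0, 10.0, 20.0):
    print("upstream pressure %5.1f -> permeability %.3e" % (p, permeability(p)))
```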
Thermodynamic Modeling of Gas Transport in Glassy Polymeric Membranes
Minelli, Matteo; Sarti, Giulio Cesare
2017-01-01
Solubility and permeability of gases in glassy polymers have been considered with the aim of illustrating the applicability of thermodynamically-based models for their description and prediction. The solubility isotherms are described by using the nonequilibrium lattice fluid (NELF) model, already known to be appropriate for nonequilibrium glassy polymers, while the permeability isotherms are described through a general transport model in which diffusivity is the product of a purely kinetic factor, the mobility coefficient, and a thermodynamic factor. The latter is calculated from the NELF model and mobility is considered concentration-dependent through an exponential relationship containing two parameters only. The models are tested explicitly considering solubility and permeability data of various penetrants in three glassy polymers, PSf, PPh and 6FDA-6FpDA, selected as the reference for different behaviors. It is shown that the models are able to calculate the different behaviors observed, and in particular the permeability dependence on upstream pressure, both when it is decreasing as well as when it is increasing, with no need to invoke the onset of additional plasticization phenomena. The correlations found between polymer and penetrant properties with the two parameters of the mobility coefficient also lead to the predictive ability of the transport model. PMID:28825619
Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir
2017-01-01
In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization is reduced to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566
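For reference, the conventional Bloch-Torrey equation that the Fokker-Planck derivation above generalizes can be written for the complex transverse magnetization as follows (notation assumed here: D is the diffusion coefficient, gamma the gyromagnetic ratio, g(t) the applied gradient, T2 the transverse relaxation time):

```latex
% Conventional Bloch-Torrey equation for m(r,t) = M_x + i M_y; the abstract's
% generalization allows a spatially varying D(r) and an additional term in
% certain scenarios.
\[
  \frac{\partial m(\mathbf{r},t)}{\partial t}
  = \nabla \cdot \bigl( D(\mathbf{r}) \nabla m(\mathbf{r},t) \bigr)
  - i \gamma\, \mathbf{g}(t)\cdot\mathbf{r}\; m(\mathbf{r},t)
  - \frac{m(\mathbf{r},t)}{T_2} .
\]
```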
Implementation and application of a gradient enhanced crystal plasticity model
NASA Astrophysics Data System (ADS)
Soyarslan, C.; Perdahcıoǧlu, E. S.; Aşık, E. E.; van den Boogaard, A. H.; Bargmann, S.
2017-10-01
A rate-independent crystal plasticity model is implemented in which the hardening of the material is described as a function of the total dislocation density. The evolution of statistically stored dislocations (SSDs) is described using a saturating type evolution law. The evolution of geometrically necessary dislocations (GNDs) on the other hand is described using the gradient of the plastic strain tensor in a non-local manner. The gradient of the incremental plastic strain tensor is computed explicitly during an implicit FE simulation after each converged step. Using the plastic strain tensor stored as state variables at each integration point and an efficient numerical algorithm to find the gradients, the GND density is obtained. This results in a weak coupling of the equilibrium solution and the gradient enhancement. The algorithm is applied to an academic test problem which considers growth of a cylindrical void in a single crystal matrix.
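The sketch below illustrates the post-processing step described above: estimating a GND-density measure from spatial gradients of a stored plastic strain field. It is only a rough scalar estimate on a regular grid (gradient norm divided by an assumed Burgers vector), not the paper's Nye-tensor-based algorithm or its FE implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): estimate a scalar GND density
# from spatial gradients of a plastic strain field stored on a regular grid.
# eps_p is an (nx, ny, 3, 3) array of plastic strain tensors at grid points;
# b is an assumed Burgers vector magnitude [m].

def gnd_density(eps_p, dx, dy, b=2.5e-10):
    nx, ny = eps_p.shape[:2]
    rho = np.zeros((nx, ny))
    # gradient of each tensor component, then a norm of the resulting
    # strain-gradient field as a rough GND measure: rho ~ |grad eps_p| / b
    for i in range(3):
        for j in range(3):
            d_dx, d_dy = np.gradient(eps_p[:, :, i, j], dx, dy)
            rho += d_dx**2 + d_dy**2
    return np.sqrt(rho) / b

# toy field: plastic shear strain concentrated around a "void" at the grid centre
nx = ny = 64
x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny), indexing="ij")
eps_p = np.zeros((nx, ny, 3, 3))
eps_p[:, :, 0, 1] = eps_p[:, :, 1, 0] = 0.02 * np.exp(-(x**2 + y**2) / 0.1)

rho_gnd = gnd_density(eps_p, dx=2.0 / nx, dy=2.0 / ny)
print("max GND density ~ %.2e 1/m^2" % rho_gnd.max())
```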
NASA Technical Reports Server (NTRS)
Verstraete, Michel M.
1987-01-01
Understanding the details of the interaction between the radiation field and plant structures is important climatically because of the influence of vegetation on the surface water and energy balance, but also biologically, since solar radiation provides the energy necessary for photosynthesis. The problem is complex because of the extreme variety of vegetation forms in space and time, as well as within and across plant species. This one-dimensional vertical multilayer model describes the transfer of direct solar radiation through a leaf canopy, accounting explicitly for the vertical inhomogeneities of a plant stand and leaf orientation, as well as heliotropic plant behavior. This model reproduces observational results on homogeneous canopies, but it is also well adapted to describe vertically inhomogeneous canopies. Some of the implications of leaf orientation and plant structure as far as light collection is concerned are briefly reviewed.
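A minimal sketch of the layered direct-beam attenuation idea described above: Beer-Lambert extinction through vertically inhomogeneous layers with layer-specific leaf area and a leaf-angle projection factor. The layer values and projection factors are assumptions for illustration, not the model's parameters.

```python
import numpy as np

# Minimal sketch (assumed parameters, not the paper's model): direct-beam
# attenuation through N canopy layers with layer-specific leaf area index and
# leaf-angle projection factor G, allowing vertical inhomogeneity.

def direct_beam_profile(lai_layers, g_layers, sun_zenith_deg):
    """Fraction of the direct solar beam reaching the bottom of each layer."""
    mu = np.cos(np.radians(sun_zenith_deg))
    # cumulative optical depth from the top of the canopy downwards
    tau = np.cumsum(np.asarray(g_layers) * np.asarray(lai_layers)) / mu
    return np.exp(-tau)

# inhomogeneous canopy: dense, planophile top; sparse, erectophile bottom
lai = [1.2, 0.8, 0.4, 0.2]         # LAI per layer, top to bottom
g   = [0.8, 0.6, 0.5, 0.4]         # projection factor per layer (leaf orientation)
for zen in (0.0, 30.0, 60.0):
    prof = direct_beam_profile(lai, g, zen)
    print(f"zenith {zen:4.1f} deg -> beam fraction below each layer: {np.round(prof, 3)}")
```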
The Layer-Oriented Approach to Declarative Languages for Biological Modeling
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
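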
The layer-oriented approach to declarative languages for biological modeling.
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language.
SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.
Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee
2009-10-01
This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.
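To illustrate the spherical-averaging step that solution scattering requires, the Debye formula below gives the orientationally averaged intensity from a set of point scatterers. This is a simplified stand-in (constant form factors, no excluded volume or hydration layer), not the SoftWAXS numerical procedure itself; the coordinates are invented.

```python
import numpy as np

# Debye formula: orientationally (spherically) averaged scattering intensity
# from a set of scatterers. Simplified illustration only.

def debye_intensity(coords, q_values, f=1.0):
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    intensity = []
    for q in q_values:
        qd = q * d
        sinc = np.where(qd > 1e-12, np.sin(qd) / np.where(qd > 1e-12, qd, 1.0), 1.0)
        intensity.append((f * f * sinc).sum())
    return np.array(intensity)

# toy "molecule": four points roughly on a tetrahedron, edge ~1.5 Angstrom
pts = [(0, 0, 0), (1.5, 0, 0), (0.75, 1.3, 0), (0.75, 0.43, 1.22)]
q = np.linspace(0.05, 2.0, 5)      # scattering vector magnitudes, 1/Angstrom
print(np.round(debye_intensity(pts, q), 2))
```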
Modeling SOA formation from the oxidation of intermediate volatility n-alkanes
NASA Astrophysics Data System (ADS)
Aumont, B.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.
2012-08-01
The chemical mechanism leading to SOA formation and ageing is expected to be a multigenerational process, i.e. a successive formation of organic compounds with higher oxidation degree and lower vapor pressure. This process is here investigated with the explicit oxidation model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere). Gas phase oxidation schemes are generated for the C8-C24 series of n-alkanes. Simulations are conducted to explore the time evolution of organic compounds and the behavior of secondary organic aerosol (SOA) formation for various preexisting organic aerosol concentrations (COA). As expected, simulation results show that (i) SOA yield increases with the carbon chain length of the parent hydrocarbon, (ii) SOA yield decreases with decreasing COA, (iii) SOA production rates increase with increasing COA and (iv) the number of oxidation steps (i.e. generations) needed to describe SOA formation and evolution grows when COA decreases. The simulated oxidative trajectories are examined in a two dimensional space defined by the mean carbon oxidation state and the volatility. Most SOA contributors are not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA), suggesting that OOA may underestimate SOA. Results show that the model is unable to produce highly oxygenated aerosols (OOA) with large yields. The limitations of the model are discussed.
Modeling SOA formation from the oxidation of intermediate volatility n-alkanes
NASA Astrophysics Data System (ADS)
Aumont, B.; Valorso, R.; Mouchel-Vallon, C.; Camredon, M.; Lee-Taylor, J.; Madronich, S.
2012-06-01
The chemical mechanism leading to SOA formation and ageing is expected to be a multigenerational process, i.e. a successive formation of organic compounds with higher oxidation degree and lower vapor pressure. This process is here investigated with the explicit oxidation model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere). Gas phase oxidation schemes are generated for the C8-C24 series of n-alkanes. Simulations are conducted to explore the time evolution of organic compounds and the behavior of secondary organic aerosol (SOA) formation for various preexisting organic aerosol concentrations (COA). As expected, simulation results show that (i) SOA yield increases with the carbon chain length of the parent hydrocarbon, (ii) SOA yield decreases with decreasing COA, (iii) SOA production rates increase with increasing COA and (iv) the number of oxidation steps (i.e. generations) needed to describe SOA formation and evolution grows when COA decreases. The simulated oxidative trajectories are examined in a two dimensional space defined by the mean carbon oxidation state and the volatility. Most SOA contributors are not oxidized enough to be categorized as highly oxygenated organic aerosols (OOA) but reduced enough to be categorized as hydrocarbon-like organic aerosols (HOA), suggesting that OOA may underestimate SOA. Results show that the model is unable to produce highly oxygenated aerosols (OOA) with large yields. The limitations of the model are discussed.
Nelson, Kimberly M.; Pantalone, David W.; Gamarel, Kristi E.; Simoni, Jane M.
2016-01-01
Men who have sex with men (MSM) frequently consume sexually explicit online media (SEOM), yet little is known about its influence on their sexual behaviors. We describe a sequence of four studies to develop and psychometrically validate a measure of the perceived influence of sexually explicit online media (PI-SEOM) on the sexual behaviors of MSM. Study 1 involved qualitative interviews (N = 28) and a quantitative survey (N = 100) to develop a preliminary measure. Using an Internet sample of MSM (N = 1,170), we assessed its factor structure and reliability in Studies 2-3 as well as convergent validity and associations with HIV-related sexual risk in Study 4. Based on findings the measure was divided into two subscales: influences on (1) self and (2) other MSM. Factor analyses confirmed a two-factor model for each subscale, measuring perceived influences on (a) general sexual scripts and (b) condomless sex scripts. Survey results indicated that the more men perceived SEOM influencing their own condomless sex scripts, the more likely they were to report engaging in sexual risk behaviors. The developed measure holds promise for assessing the influence of SEOM on the sexual behaviors of MSM and may prove useful for HIV prevention research. PMID:26479019
Nelson, Kimberly M; Pantalone, David W; Gamarel, Kristi E; Simoni, Jane M
2016-01-01
Men who have sex with men (MSM) frequently consume sexually explicit online media (SEOM), yet little is known about its influence on their sexual behaviors. We describe a sequence of four studies to develop and psychometrically validate a measure of the perceived influence of sexually explicit online media (PI-SEOM) on the sexual behaviors of MSM. Study 1 involved qualitative interviews (N = 28) and a quantitative survey (N = 100) to develop a preliminary measure. Using an Internet sample of MSM (N = 1,170), we assessed its factor structure and reliability in Studies 2 and 3 as well as convergent validity and associations with HIV-related sexual risk in Study 4. Based on findings the measure was divided into two subscales: influences on (1) self and (2) other MSM. Factor analyses confirmed a two-factor model for each subscale, measuring perceived influences on (a) general sexual scripts and (b) condomless sex scripts. Survey results indicated that the more men perceived SEOM influencing their own condomless sex scripts, the more likely they were to report engaging in sexual risk behaviors. The developed measure holds promise for assessing the influence of SEOM on the sexual behaviors of MSM and may prove useful for HIV-prevention research.
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
2018-06-08
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
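The analytical route from TLS matrices to per-atom ADPs referred to above is usually written as follows (one common convention; sign and origin conventions differ between programs, so treat this as a schematic rather than the exact PHENIX formula):

```latex
% For an atom at position r = (x, y, z) relative to the TLS origin, define the
% antisymmetric matrix A and obtain the anisotropic ADP from T, L and S:
\[
A =
\begin{pmatrix}
 0 &  z & -y \\
-z &  0 &  x \\
 y & -x &  0
\end{pmatrix},
\qquad
U_{\mathrm{TLS}} = T + A\,L\,A^{\mathsf{T}} + A\,S + S^{\mathsf{T}}A^{\mathsf{T}} .
\]
```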
Best Practices for Crash Modeling and Simulation
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jackson, Karen E.
2002-01-01
Aviation safety can be greatly enhanced by the expeditious use of computer simulations of crash impact. Unlike automotive impact testing, which is now routine, experimental crash tests of even small aircraft are expensive and complex due to the high cost of the aircraft and the myriad of crash impact conditions that must be considered. Ultimately, the goal is to utilize full-scale crash simulations of aircraft for design evaluation and certification. The objective of this publication is to describe "best practices" for modeling aircraft impact using explicit nonlinear dynamic finite element codes such as LS-DYNA, DYNA3D, and MSC.Dytran. Although "best practices" is somewhat relative, it is hoped that the authors' experience will help others to avoid some of the common pitfalls in modeling that are not documented in one single publication. In addition, a discussion of experimental data analysis, digital filtering, and test-analysis correlation is provided. Finally, some examples of aircraft crash simulations are described in several appendices following the main report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.
Cunningham, William A; Nezlek, John B; Banaji, Mahzarin R
2004-10-01
Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism.
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian; the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.
NASA Astrophysics Data System (ADS)
Harmon, Michael; Gamba, Irene M.; Ren, Kui
2016-12-01
This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.
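A toy sketch of the implicit-explicit (IMEX) time stepping mentioned above, applied to a 1D drift-diffusion-reaction equation: stiff diffusion is treated implicitly, drift and the nonlinear reaction explicitly. This only illustrates the time-stepping idea; it is not the paper's mixed finite element / local discontinuous Galerkin discretization, and the coefficients are assumptions.

```python
import numpy as np

# First-order IMEX step for the 1D toy problem  u_t = D u_xx - (v u)_x - k u^2,
# on a periodic domain: implicit diffusion, explicit upwind drift and reaction.

def imex_step(u, dt, dx, D, v, k):
    n = u.size
    # explicit part: upwind drift + nonlinear reaction
    flux = v * u
    dflux = (flux - np.roll(flux, 1)) / dx
    rhs = u + dt * (-dflux - k * u**2)
    # implicit part: (I - dt*D*Lap) u_new = rhs, with a periodic Laplacian
    lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
           + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
    return np.linalg.solve(np.eye(n) - dt * D * lap, rhs)

n, L = 128, 1.0
dx = L / n
u = np.exp(-((np.linspace(0, L, n, endpoint=False) - 0.5) ** 2) / 0.005)
for _ in range(200):
    u = imex_step(u, dt=1e-3, dx=dx, D=1e-2, v=0.3, k=1.0)
print("mass after 200 steps:", round(u.sum() * dx, 4))
```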
A thought construction of working perpetuum mobile of the second kind
NASA Astrophysics Data System (ADS)
Čápek, V.; Bok, J.
1999-12-01
The previously published model of the isothermal Maxwell demon, one of a class of models of open quantum systems endowed with the faculty of self-organization, is reconstructed here. It describes an open quantum system interacting with a single thermodynamic bath but otherwise not aided from outside. Its activity is given by the standard linear Liouville equation for the system and bath. Owing to its self-organization property, the model then yields cyclic conversion of heat from the bath into mechanical work without compensation. Hence, it provides an explicit thought construction of a perpetuum mobile of the second kind, thus contradicting the Thomson formulation of the second law of thermodynamics. No approximation is involved, as a special scaling procedure is used which makes the employed kinetic equations exact.
Low-energy effective action in two-dimensional SQED: a two-loop analysis
NASA Astrophysics Data System (ADS)
Samsonov, I. B.
2017-07-01
We study two-loop quantum corrections to the low-energy effective actions in N=(2,2) and N=(4,4) SQED on the Coulomb branch. In the latter model, the low-energy effective action is described by a generalized Kähler potential which depends on both chiral and twisted chiral superfields. We demonstrate that this generalized Kähler potential is one-loop exact and corresponds to the N=(4,4) sigma-model with torsion presented by Roček, Schoutens and Sevrin [1]. In the N=(2,2) SQED, the effective Kähler potential is not protected against higher-loop quantum corrections. The two-loop quantum corrections to this potential and the corresponding sigma-model metric are explicitly found.
Andrews, Casey T; Elcock, Adrian H
2014-11-11
We describe the derivation of a set of bonded and nonbonded coarse-grained (CG) potential functions for use in implicit-solvent Brownian dynamics (BD) simulations of proteins derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acids. Bonded potential functions were derived from 1 μs MD simulations of each of the 20 canonical amino acids, with histidine modeled in both its protonated and neutral forms; nonbonded potential functions were derived from 1 μs MD simulations of every possible pairing of the amino acids (231 different systems). The angle and dihedral probability distributions and radial distribution functions sampled during MD were used to optimize a set of CG potential functions through use of the iterative Boltzmann inversion (IBI) method. The optimized set of potential functions-which we term COFFDROP (COarse-grained Force Field for Dynamic Representation Of Proteins)-quantitatively reproduced all of the "target" MD distributions. In a first test of the force field, it was used to predict the clustering behavior of concentrated amino acid solutions; the predictions were directly compared with the results of corresponding all-atom explicit-solvent MD simulations and found to be in excellent agreement. In a second test, BD simulations of the small protein villin headpiece were carried out at concentrations that have recently been studied in all-atom explicit-solvent MD simulations by Petrov and Zagrovic ( PLoS Comput. Biol. 2014 , 5 , e1003638). The anomalously strong intermolecular interactions seen in the MD study were reproduced in the COFFDROP simulations; a simple scaling of COFFDROP's nonbonded parameters, however, produced results in better accordance with experiment. Overall, our results suggest that potential functions derived from simulations of pairwise amino acid interactions might be of quite broad applicability, with COFFDROP likely to be especially useful for modeling unfolded or intrinsically disordered proteins.
2015-01-01
We describe the derivation of a set of bonded and nonbonded coarse-grained (CG) potential functions for use in implicit-solvent Brownian dynamics (BD) simulations of proteins derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acids. Bonded potential functions were derived from 1 μs MD simulations of each of the 20 canonical amino acids, with histidine modeled in both its protonated and neutral forms; nonbonded potential functions were derived from 1 μs MD simulations of every possible pairing of the amino acids (231 different systems). The angle and dihedral probability distributions and radial distribution functions sampled during MD were used to optimize a set of CG potential functions through use of the iterative Boltzmann inversion (IBI) method. The optimized set of potential functions—which we term COFFDROP (COarse-grained Force Field for Dynamic Representation Of Proteins)—quantitatively reproduced all of the “target” MD distributions. In a first test of the force field, it was used to predict the clustering behavior of concentrated amino acid solutions; the predictions were directly compared with the results of corresponding all-atom explicit-solvent MD simulations and found to be in excellent agreement. In a second test, BD simulations of the small protein villin headpiece were carried out at concentrations that have recently been studied in all-atom explicit-solvent MD simulations by Petrov and Zagrovic (PLoS Comput. Biol.2014, 5, e1003638). The anomalously strong intermolecular interactions seen in the MD study were reproduced in the COFFDROP simulations; a simple scaling of COFFDROP’s nonbonded parameters, however, produced results in better accordance with experiment. Overall, our results suggest that potential functions derived from simulations of pairwise amino acid interactions might be of quite broad applicability, with COFFDROP likely to be especially useful for modeling unfolded or intrinsically disordered proteins. PMID:25400526
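The iterative Boltzmann inversion named in the abstracts above follows a simple textbook update rule, sketched below. The radial distribution functions and temperature are invented for the example; this is the generic IBI step, not the COFFDROP parameterization.

```python
import numpy as np

# Generic iterative Boltzmann inversion (IBI) update:
#   V_{k+1}(r) = V_k(r) + kB*T * ln( g_k(r) / g_target(r) )

KB_KCAL = 0.0019872041  # Boltzmann constant, kcal/(mol K)

def ibi_update(V_k, g_k, g_target, T=298.15, eps=1e-8):
    correction = KB_KCAL * T * np.log((g_k + eps) / (g_target + eps))
    return V_k + correction

# toy example: the current CG model over-structures the first peak of g(r)
r = np.linspace(2.0, 12.0, 101)
g_target = 1.0 + 0.6 * np.exp(-(r - 5.0) ** 2)      # "target" RDF from MD (assumed)
g_current = 1.0 + 0.9 * np.exp(-(r - 5.0) ** 2)     # RDF from the current CG potential
V = np.zeros_like(r)
V_new = ibi_update(V, g_current, g_target)
print("max potential correction: %.3f kcal/mol" % np.abs(V_new - V).max())
```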
NASA Astrophysics Data System (ADS)
Zilletti, Michele; Marker, Arthur; Elliott, Stephen John; Holland, Keith
2017-05-01
In this study model identification of the nonlinear dynamics of a micro-speaker is carried out by purely electrical measurements, avoiding any explicit vibration measurements. It is shown that a dynamic model of the micro-speaker, which takes into account the nonlinear damping characteristic of the device, can be identified by measuring the response between the voltage input and the current flowing into the coil. An analytical formulation of the quasi-linear model of the micro-speaker is first derived and an optimisation method is then used to identify a polynomial function which describes the mechanical damping behaviour of the micro-speaker. The analytical results of the quasi-linear model are compared with numerical results. This study potentially opens up the possibility of efficiently implementing nonlinear echo cancellers.
NASA Astrophysics Data System (ADS)
Thibes, Ronaldo
2017-02-01
We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pannala, Sreekanth; Turner, John A.; Allu, Srikanth
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
Pannala, Sreekanth; Turner, John A.; Allu, Srikanth; ...
2015-08-19
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.
Solvable Hydrodynamics of Quantum Integrable Systems
NASA Astrophysics Data System (ADS)
Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2017-12-01
The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.
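The "Bethe-Boltzmann" kinetic equation referred to above is commonly written in the following advective form (notation assumed here): rho(theta; x, t) is the local density of pseudomomenta theta and v^eff is the dressed effective velocity, itself a functional of rho.

```latex
% Kinetic (Bethe-Boltzmann) equation for the local pseudomomentum density,
% in the form commonly used in generalized hydrodynamics:
\[
  \partial_t \rho(\theta; x, t)
  + \partial_x \!\left[ v^{\mathrm{eff}}[\rho](\theta; x, t)\, \rho(\theta; x, t) \right] = 0 .
\]
```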
ERIC Educational Resources Information Center
Glock, Sabine; Beverborg, Arnoud Oude Groote; Müller, Barbara C. N.
2016-01-01
Obese children experience disadvantages in school and discrimination from their teachers. Teachers' implicit and explicit attitudes have been identified as contributing to these disadvantages. Drawing on dual process models, we investigated the nature of pre-service teachers' implicit and explicit attitudes, their motivation to respond without…
Stull, Laura G; McConnell, Haley; McGrew, John; Salyers, Michelle P
2017-01-01
While explicit negative stereotypes of mental illness are well established as barriers to recovery, implicit attitudes also may negatively impact outcomes. The current study is unique in its focus on both explicit and implicit stigma as predictors of recovery attitudes of mental health practitioners. Assertive Community Treatment practitioners (n = 154) from 55 teams completed online measures of stigma, recovery attitudes, and an Implicit Association Test (IAT). Three of four explicit stigma variables (perceptions of blameworthiness, helplessness, and dangerousness) and all three implicit stigma variables were associated with lower recovery attitudes. In a multivariate, hierarchical model, however, implicit stigma did not explain additional variance in recovery attitudes. In the overall model, perceptions of dangerousness and implicitly associating mental illness with "bad" were significant individual predictors of lower recovery attitudes. The current study demonstrates a need for interventions to lower explicit stigma, particularly perceptions of dangerousness, to increase mental health providers' expectations for recovery. The extent to which implicit and explicit stigma differentially predict outcomes, including recovery attitudes, needs further research.
Symmetry breaking in occupation number based slave-particle methods
NASA Astrophysics Data System (ADS)
Georgescu, Alexandru B.; Ismail-Beigi, Sohrab
2017-10-01
We describe a theoretical approach to finding spontaneously symmetry-broken electronic phases due to strong electronic interactions when using recently developed slave-particle (slave-boson) approaches based on occupation numbers. We describe why, to date, spontaneous symmetry breaking has proven difficult to achieve in such approaches. We then provide a total energy based approach for introducing auxiliary symmetry-breaking fields into the solution of the slave-particle problem that leads to lowered total energies for symmetry-broken phases. We point out that not all slave-particle approaches yield energy lowering: the slave-particle model being used must explicitly describe the degrees of freedom that break symmetry. Finally, our total energy approach permits us to greatly simplify the formalism used to achieve a self-consistent solution between spinon and slave modes while increasing the numerical stability and greatly speeding up the calculations.
Learning physical descriptors for materials science by compressed sensing
NASA Astrophysics Data System (ADS)
Ghiringhelli, Luca M.; Vybiral, Jan; Ahmetcik, Emre; Ouyang, Runhai; Levchenko, Sergey V.; Draxl, Claudia; Scheffler, Matthias
2017-02-01
The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible by standard tools and exploitation of the scientific information requires new and dedicated methodology based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.
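A generic compressed-sensing-style descriptor selection sketch, shown only to illustrate the idea of picking a sparse set of physical descriptors from a large candidate pool via an L1-regularized fit. The data are synthetic and the pipeline is far simpler than the paper's actual feature-construction and selection procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Fit a sparse linear model over many candidate features and keep those with
# nonzero coefficients. Synthetic example with 3 "truly relevant" descriptors.

rng = np.random.default_rng(0)
n_materials, n_candidates = 80, 200
X = rng.normal(size=(n_materials, n_candidates))        # candidate descriptors
true_idx = [3, 17, 42]                                   # assumed relevant features
y = X[:, true_idx] @ np.array([1.5, -2.0, 0.7]) + 0.05 * rng.normal(size=n_materials)

X_std = StandardScaler().fit_transform(X)
model = Lasso(alpha=0.1).fit(X_std, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("selected candidate descriptors:", selected)
```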
Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher; Napel, Sandy; Rubin, Daniel
2014-08-01
We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.
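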
A heuristic mathematical model for the dynamics of sensory conflict and motion sickness
NASA Technical Reports Server (NTRS)
Oman, C. M.
1982-01-01
By consideration of the information processing task faced by the central nervous system in estimating body spatial orientation and in controlling active body movement using an internal model referenced control strategy, a mathematical model for sensory conflict generation is developed. The model postulates a major dynamic functional role for sensory conflict signals in movement control, as well as in sensory-motor adaptation. It accounts for the role of active movement in creating motion sickness symptoms in some experimental circumstances, and in alleviating them in others. The relationship between motion sickness produced by sensory rearrangement and that resulting from external motion disturbances is explicitly defined. A nonlinear conflict averaging model is proposed which describes dynamic aspects of experimentally observed subjective discomfort sensation, and suggests resulting behaviours. The model admits several possibilities for adaptive mechanisms which do not involve internal model updating. Further systematic efforts to experimentally refine and validate the model are indicated.
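A deliberately minimal toy sketch of the internal-model/conflict idea described above (not Oman's model): sensory afference is compared with an internal-model prediction, and a slow leaky average of the conflict magnitude stands in for accumulated discomfort. All time constants and the stimulus are assumptions.

```python
import numpy as np

# Toy internal-model/conflict loop: conflict = afference - prediction, with a
# fast state update and a slow leaky average of |conflict| as a discomfort proxy.

def simulate(duration=660.0, dt=0.1, tau_fast=1.0, tau_slow=120.0):
    t = np.arange(0.0, duration, dt)
    stimulus = np.sin(2 * np.pi * 0.2 * t)              # assumed external motion
    internal_estimate, conflict_avg = 0.0, 0.0
    discomfort = np.zeros_like(t)
    for i, s in enumerate(stimulus):
        predicted = internal_estimate                    # internal-model prediction
        conflict = s - predicted                         # sensory conflict signal
        internal_estimate += dt / tau_fast * conflict    # fast state update
        conflict_avg += dt / tau_slow * (abs(conflict) - conflict_avg)
        discomfort[i] = conflict_avg                     # slowly averaged conflict
    return t, discomfort

t, d = simulate()
print("discomfort proxy at 1, 5, 10 min:",
      [round(d[int(m * 60 / 0.1) - 1], 3) for m in (1, 5, 10)])
```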
Martin, Guillaume; Roques, Lionel
2016-01-01
Various models describe asexual evolution by mutation, selection, and drift. Some focus directly on fitness, typically modeling drift but ignoring or simplifying both epistasis and the distribution of mutation effects (traveling wave models). Others follow the dynamics of quantitative traits determining fitness (Fisher’s geometric model), imposing a complex but fixed form of mutation effects and epistasis, and often ignoring drift. In all cases, predictions are typically obtained in high or low mutation rate limits and for long-term stationary regimes, thus losing information on transient behaviors and the effect of initial conditions. Here, we connect fitness-based and trait-based models into a single framework, and seek explicit solutions even away from stationarity. The expected fitness distribution is followed over time via its cumulant generating function, using a deterministic approximation that neglects drift. In several cases, explicit trajectories for the full fitness distribution are obtained for arbitrary mutation rates and standing variance. For nonepistatic mutations, especially with beneficial mutations, this approximation fails over the long term but captures the early dynamics, thus complementing stationary stochastic predictions. The approximation also handles several diminishing returns epistasis models (e.g., with an optimal genotype); it can be applied at and away from equilibrium. General results arise at equilibrium, where fitness distributions display a “phase transition” with mutation rate. Beyond this phase transition, in Fisher’s geometric model, the full trajectory of fitness and trait distributions takes a simple form; robust to the details of the mutant phenotype distribution. Analytical arguments are explored regarding why and when the deterministic approximation applies. PMID:27770037
NASA Astrophysics Data System (ADS)
Wang, Ken Kang-Hsin; Busch, Theresa M.; Finlay, Jarod C.; Zhu, Timothy C.
2009-02-01
Singlet oxygen (1O2) is generally believed to be the major cytotoxic agent during photodynamic therapy (PDT), and the reaction between 1O2 and tumor cells defines the treatment efficacy. From a complete set of the macroscopic kinetic equations which describe the photochemical processes of PDT, we can express the reacted 1O2 concentration, [1O2]rx, in a form related to time integration of the product of the 1O2 quantum yield and the PDT dose rate. The production of [1O2]rx involves physiological and photophysical parameters which need to be determined explicitly for the photosensitizer of interest. Once these parameters are determined, we expect the computed [1O2]rx to be an explicit dosimetric indicator for clinical PDT. Incorporating the diffusion equation governing the light transport in turbid medium, the spatially and temporally-resolved [1O2]rx described by the macroscopic kinetic equations can be numerically calculated. A sudden drop of the calculated [1O2]rx with distance, following the decrease of the light fluence rate, is observed. This suggests that a possible correlation between [1O2]rx and the necrosis boundary may occur in the tumor subject to PDT irradiation. In this study, we have theoretically examined the sensitivity of the physiological parameter under two clinically relevant conditions: (1) a collimated light source on a semi-infinite turbid medium and (2) a linear light source in a turbid medium. In order to accurately determine the parameter in a clinically relevant environment, the results of the computed [1O2]rx are expected to be used to fit the experimentally-measured necrosis data obtained from an in vivo animal model.
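Schematically, the reacted singlet-oxygen concentration described above can be written as the time integral of the product of the singlet-oxygen quantum yield and the PDT dose rate (notation assumed here; the paper's full macroscopic kinetic equations carry additional photophysical and physiological parameters):

```latex
% Phi_Delta(t): singlet-oxygen quantum yield; D'(t): local PDT dose rate
% (photons absorbed per unit time by the photosensitizer).
\[
  [\,{}^{1}\mathrm{O}_2\,]_{rx}(T)
  \;\propto\; \int_{0}^{T} \Phi_{\Delta}(t)\, D'(t)\, dt .
\]
```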
Suslow, Thomas; Donges, Uta-Susan
2017-01-01
Alexithymia represents a multifaceted personality construct defined by difficulties in recognizing and verbalizing emotions and externally oriented thinking. According to clinical observations, experience of negative affects is exacerbated and experience of positive affects is decreased in alexithymia. Findings from research based on self-report indicate that all alexithymia facets are negatively associated with the experience of positive affects, whereas difficulties identifying and describing feelings are related to heightened negative affect. Implicit affectivity, which can be measured using indirect assessment methods, relates to processes of the impulsive system. The aim of the present study was to examine, for the first time, the relations between alexithymia components and implicit and explicit positive and negative affectivity in healthy adults. The 20-item Toronto Alexithymia Scale, the Implicit Positive and Negative Affect Test and the Positive and Negative Affect Schedule (PANAS) were administered to two hundred and forty-one healthy individuals along with measures of depression and trait anxiety. Difficulties identifying feelings were correlated with explicit negative trait affect, depressive mood and trait anxiety. Difficulties describing feelings showed smaller but also significant correlations with depressive mood and trait anxiety but were not correlated with explicit state or trait affect as assessed by the PANAS. Externally oriented thinking was not significantly correlated with any of the implicit and explicit affect measures. According to our findings, an externally oriented, concrete way of thinking appears to be generally unrelated to dispositions to develop positive or negative affects. Difficulties identifying feelings seem to be associated with increased conscious negative affects but not with a heightened disposition to develop negative affects at an automatic response level.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
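A toy illustration of the trade-off behind offering several time-integration options, as listed above (this is not BATS-R-US code): for a stiff scalar ODE, explicit Euler is stability-limited while backward Euler is not.

```python
import numpy as np

# For du/dt = -lam*u with large lam, explicit Euler is stable only for
# dt < 2/lam, whereas implicit (backward) Euler is stable for any dt.

def explicit_euler(u0, lam, dt, n):
    u = u0
    for _ in range(n):
        u = u + dt * (-lam * u)
    return u

def implicit_euler(u0, lam, dt, n):
    u = u0
    for _ in range(n):
        u = u / (1.0 + dt * lam)       # solve (1 + dt*lam) * u_new = u_old
    return u

lam, dt, n = 1.0e3, 1.0e-2, 100        # dt far above the explicit limit 2/lam
print("explicit Euler:", explicit_euler(1.0, lam, dt, n))   # blows up
print("implicit Euler:", implicit_euler(1.0, lam, dt, n))   # decays as expected
```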
Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.
2015-01-01
Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632
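A generic sketch of the linearization and stability tools mentioned above: a finite-difference Jacobian of a state-derivative function about an equilibrium, followed by an eigenvalue check. Neuromechanic exploits its explicit state dependencies to obtain such derivatives; this purely numerical version with a toy pendulum-like system is only illustrative.

```python
import numpy as np

# Finite-difference Jacobian of xdot = f(x) about an equilibrium, then an
# eigenvalue-based stability check.

def jacobian(f, x0, eps=1e-6):
    n = x0.size
    J = np.zeros((n, n))
    f0 = f(x0)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x0 + dx) - f0) / eps
    return J

# toy musculoskeletal-like system: damped pendulum with a muscle-like stiffness
def f(x, k=5.0, c=0.8):
    theta, omega = x
    return np.array([omega, -k * np.sin(theta) - c * omega])

x_eq = np.array([0.0, 0.0])
eigvals = np.linalg.eigvals(jacobian(f, x_eq))
print("eigenvalues:", np.round(eigvals, 3),
      "-> stable" if np.all(eigvals.real < 0) else "-> unstable")
```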
Efficient nonparametric n-body force fields from machine learning
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro
2018-05-01
We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
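An unoptimized sketch of the n = 2 member of such a kernel series: a two-body kernel written as a double sum of squared-exponential comparisons over interatomic distances in two local environments. The distance values and length scale are assumptions; this is not the authors' code or the mapped (M-FF) form.

```python
import numpy as np

# Two-body kernel between two local atomic environments, as a double sum of
# squared-exponential comparisons over neighbour distances.

def two_body_kernel(env_a, env_b, sigma=0.5):
    """env_a, env_b: 1D arrays of neighbour distances around the central atom."""
    da = np.asarray(env_a)[:, None]
    db = np.asarray(env_b)[None, :]
    return np.exp(-((da - db) ** 2) / (2.0 * sigma**2)).sum()

env1 = [2.1, 2.3, 2.9, 3.5]     # neighbour distances (Angstrom), assumed values
env2 = [2.2, 2.4, 3.0, 3.6]
env3 = [1.8, 2.8, 3.9, 4.5]
print("k(env1, env2) =", round(two_body_kernel(env1, env2), 3))   # similar environments
print("k(env1, env3) =", round(two_body_kernel(env1, env3), 3))   # less similar
```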
A thermodynamically consistent discontinuous Galerkin formulation for interface separation
Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...
2015-07-31
Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size which is unaffected by the presence of stiff massless interfaces. This proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method are found to be dependent on the cohesive penalty stiffnesses. Because of this advantage, the proposed approach is shown to yield more accurate predictions pertaining to crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.
Fluctuations of the gluon distribution from the small-x effective action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitru, Adrian; Skokov, Vladimir
The computation of observables in high-energy QCD involves an average over stochastic semiclassical small-x gluon fields. The weight of various configurations is determined by the effective action. We introduce a method to study fluctuations of observables, functionals of the small-x fields, which does not explicitly involve dipoles. We integrate out those fluctuations of the semiclassical gluon field under which a given observable is invariant. Thereby we obtain the effective potential for that observable describing its fluctuations about the average. Here, we determine explicitly the effective potential for the covariant gauge gluon distribution both for the McLerran-Venugopalan (MV) model and for a (nonlocal) Gaussian approximation for the small-x effective action. This provides insight into the correlation of fluctuations of the number of hard gluons versus their typical transverse momentum. We find that the spectral shape of the fluctuations of the gluon distribution is fundamentally different in the MV model, where there is a pileup of gluons near the saturation scale, versus the solution of the small-x JIMWLK renormalization group, which generates essentially scale-invariant fluctuations above the absorptive boundary set by the saturation scale.
Bunderson, Nathan E; Bingham, Jeffrey T; Sohn, M Hongchul; Ting, Lena H; Burkholder, Thomas J
2012-10-01
Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states and muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization, and stability analysis tools to provide structural insights into the neural control of movement. Copyright © 2012 John Wiley & Sons, Ltd.
Cavanagh, Patrick
2011-01-01
Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719
Fluctuations of the gluon distribution from the small-x effective action
Dumitru, Adrian; Skokov, Vladimir
2017-09-29
The computation of observables in high-energy QCD involves an average over stochastic semiclassical small-x gluon fields. The weight of various configurations is determined by the effective action. We introduce a method to study fluctuations of observables, functionals of the small-x fields, which does not explicitly involve dipoles. We integrate out those fluctuations of the semiclassical gluon field under which a given observable is invariant. Thereby we obtain the effective potential for that observable describing its fluctuations about the average. Here, we determine explicitly the effective potential for the covariant gauge gluon distribution both for the McLerran-Venugopalan (MV) model and for a (nonlocal) Gaussian approximation for the small-x effective action. This provides insight into the correlation of fluctuations of the number of hard gluons versus their typical transverse momentum. We find that the spectral shape of the fluctuations of the gluon distribution is fundamentally different in the MV model, where there is a pileup of gluons near the saturation scale, versus the solution of the small-x JIMWLK renormalization group, which generates essentially scale-invariant fluctuations above the absorptive boundary set by the saturation scale.
NASA Astrophysics Data System (ADS)
Nashrulloh, Maulana Malik; Kurniawan, Nia; Rahardi, Brian
2017-11-01
The increasing availability of genetic sequence data associated with explicit geographic and environment (including biotic and abiotic components) information offers new opportunities to study the processes that shape biodiversity and its patterns. Developing phylogeography reconstruction, by integrating phylogenetic and biogeographic knowledge, provides richer and deeper visualization and information on diversification events than ever before. Geographical information systems such as QGIS provide an environment for spatial modeling, analysis, and dissemination by which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other related georeferenced datasets describing the biotic and abiotic environment. We introduce PHYLOGEOrec, a QGIS plugin for building spatial phylogeographic reconstructions constructed from phylogenetic tree and geographical information data based on QGIS2threejs. By using PHYLOGEOrec, researchers can integrate existing phylogeny and geographical information data, resulting in three-dimensional geographic visualizations of phylogenetic trees in the Keyhole Markup Language (KML) format. These files can be overlaid on a map and viewed spatially in QGIS by means of the QGIS2threejs engine for further analysis. KML can also be viewed in geobrowsers with KML support (e.g., Google Earth).
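As a minimal, hedged sketch of how a georeferenced tree could be written out in the KML format mentioned above (the plugin itself builds on QGIS2threejs inside QGIS; this standalone snippet only illustrates the KML structure, and the node names and coordinates are invented):

```python
# Minimal, hypothetical sketch: export tree branches as KML LineStrings.
# Node coordinates (lon, lat, elevation) and the parent->child edges are invented
# purely for illustration; PHYLOGEOrec derives them from real data inside QGIS.
nodes = {
    "root":   (112.0, -7.9, 0.0),
    "taxonA": (112.3, -8.1, 500.0),
    "taxonB": (111.8, -7.6, 800.0),
}
edges = [("root", "taxonA"), ("root", "taxonB")]

placemarks = []
for parent, child in edges:
    (lon1, lat1, z1), (lon2, lat2, z2) = nodes[parent], nodes[child]
    placemarks.append(
        f"<Placemark><name>{parent}-{child}</name><LineString>"
        f"<altitudeMode>relativeToGround</altitudeMode>"
        f"<coordinates>{lon1},{lat1},{z1} {lon2},{lat2},{z2}</coordinates>"
        f"</LineString></Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + "".join(placemarks) + "</Document></kml>"
)
with open("phylogeography.kml", "w") as fh:
    fh.write(kml)
```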
Impacts of an offshore wind farm on the lower marine atmosphere
NASA Astrophysics Data System (ADS)
Volker, P. J.; Huang, H.; Capps, S. B.; Badger, J.; Hahmann, A. N.; Hall, A. D.
2013-12-01
Due to a continuing increase in energy demand and heightened environmental consciousness, the State of California is seeking out more environmentally-friendly energy resources. Strong and persistent winds along California's coast can be harnessed effectively by current wind turbine technology, providing a promising source of alternative energy. Using an advanced wind farm parameterization implemented in the Weather Research and Forecasting (WRF) model, we investigate the potential impacts of a large offshore wind farm on the lower marine atmosphere. Located offshore of the Sonoma Coast in northern California, this theoretical wind farm includes 200 seven-megawatt wind turbines with 125 m hub heights, able to provide a total of 1.4 GW of power for use in neighboring cities. The wind turbine model (i.e., the Explicit Wake Parameterization originally developed at the Danish Technical University) acts as a source of drag where the sub-grid scale velocity deficit expansion is explicitly described. A swath consisting of hub-height velocity deficits and temperature and moisture anomalies extends more than 100 km downstream of the wind farm location. The presence of the large modern wind farm also creates flow distortion upstream in conjunction with an enhanced vertical momentum and scalar transport.
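As a quick arithmetic check on the installed capacity quoted above (assuming 200 turbines rated at 7 MW each):

```python
turbines = 200
rating_mw = 7.0                                  # assumed rated power per turbine, MW
capacity_gw = turbines * rating_mw / 1000.0
print(f"installed capacity: {capacity_gw:.1f} GW")  # 1.4 GW
```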
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F.; Atzeni, S.
Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic field and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
Regulatory T cell effects in antitumor laser immunotherapy: a mathematical model and analysis
NASA Astrophysics Data System (ADS)
Dawkins, Bryan A.; Laverty, Sean M.
2016-03-01
Regulatory T cells (Tregs) have tremendous influence on treatment outcomes in patients receiving immunotherapy for cancerous tumors. We present a mathematical model incorporating the primary cellular and molecular components of antitumor laser immunotherapy. We explicitly model developmental classes of dendritic cells (DCs), cytotoxic T cells (CTLs), primary and metastatic tumor cells, and tumor antigen. Regulatory T cells have been shown to kill antigen presenting cells, to influence dendritic cell maturation and migration, to kill activated killer CTLs in the tumor microenvironment, and to influence CTL proliferation. Since Tregs affect explicitly modeled cells, but we do not explicitly model dynamics of Treg themselves, we use model parameters to analyze effects of Treg immunosuppressive activity. We will outline a systematic method for assigning clinical outcomes to model simulations and use this condition to associate simulated patient treatment outcome with Treg activity.
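A minimal sketch of the kind of system the abstract describes is given below. The equations, parameter values, and the way Treg suppression enters (here as a single factor s scaling both CTL killing and recruitment) are invented for illustration and are not the authors' model; they only show how a suppression parameter can be swept to compare simulated treatment outcomes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical, simplified tumor/CTL dynamics with a Treg-suppression parameter s
# (0 = no suppression, 1 = complete suppression of CTL recruitment and killing).
# These equations and numbers are illustrative, not the authors' model.
def rhs(t, y, s):
    T, C = y                                   # T: tumor cells, C: cytotoxic T cells
    dT = 0.25 * T * (1 - T / 1e7) - (1 - s) * 2e-5 * C * T
    dC = (1 - s) * 5e3 * T / (1e5 + T) - 0.1 * C
    return [dT, dC]

for s in (0.0, 0.4, 0.8):                      # sweep Treg suppression strength
    sol = solve_ivp(rhs, (0, 300), [1e5, 1e3], args=(s,), rtol=1e-8)
    print(f"s = {s}: final tumor burden ~ {sol.y[0, -1]:.2e} cells")
# Stronger suppression (larger s) is expected to leave a larger residual tumor.
```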
Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo
2017-01-01
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some-but not all-models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
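The fixed-effect structure described above can be sketched as follows. For brevity the sketch uses an ordinary logistic regression from statsmodels on simulated data rather than the full GLMM with by-subject and by-item random effects, and it does not use the GridFix toolbox; all variable names are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000                                         # simulated grid cells x scenes x subjects
d = pd.DataFrame({
    "dist_center": rng.uniform(0, 1, n),         # normalized distance of a cell from image center
    "saliency": rng.uniform(0, 1, n),            # mean model saliency inside the cell
})
# Simulate fixations with a true central bias plus a weaker saliency effect.
logit = -1.0 - 2.0 * d["dist_center"] + 1.0 * d["saliency"]
d["fixated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The saliency effect is estimated above and beyond the central-bias predictor.
fit = smf.logit("fixated ~ dist_center + saliency", data=d).fit(disp=False)
print(fit.params)
```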
Multibody dynamics model building using graphical interfaces
NASA Technical Reports Server (NTRS)
Macala, Glenn A.
1989-01-01
In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustrations during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.
Rethinking the solar flare paradigm
NASA Astrophysics Data System (ADS)
Melrose, D. B.
2018-07-01
It is widely accepted that solar flares involve release of magnetic energy stored in the solar corona above an active region, but existing models do not include the explicitly time-dependent electrodynamics needed to describe such energy release. A flare paradigm is discussed that includes the electromotive force (EMF) as the driver of the flare, and the flare-associated current that links different regions where magnetic reconnection, electron acceleration, the acceleration of mass motions and current closure occur. The EMF becomes localized across regions where energy conversion occurs, and is involved in energy propagation between these regions.
Development and necessary norms of reasoning
Markovits, Henry
2014-01-01
The question of whether reasoning can, or should, be described by a single normative model is an important one. In the following, I combine epistemological considerations taken from Piaget’s notion of genetic epistemology, a hypothesis about the role of reasoning in communication and developmental data to argue that some basic logical principles are in fact highly normative. I argue here that explicit, analytic human reasoning, in contrast to intuitive reasoning, uniformly relies on a form of validity that allows distinguishing between valid and invalid arguments based on the existence of counterexamples to conclusions. PMID:24904501
The New Tropospheric Product of the International GNSS Service
NASA Technical Reports Server (NTRS)
Byun, Sung H.; Bar-Sever, Yoaz E.; Gendt, Gerd
2005-01-01
We compare this new approach for generating the IGS tropospheric products with the previous approach, which was based on explicit combination of total zenith delay contributions from the IGS ACs. The new approach enables the IGS to rapidly generate highly accurate and highly reliable total zenith delay time series for many hundreds of sites, thus increasing the utility of the products to weather modelers, climatologists, and GPS analysts. In this paper we describe this new method, and discuss issues of accuracy, quality control, utility of the new products and assess its benefits.
Energy consumption for shortcuts to adiabaticity
NASA Astrophysics Data System (ADS)
Torrontegui, E.; Lizuain, I.; González-Resines, S.; Tobalina, A.; Ruschhaupt, A.; Kosloff, R.; Muga, J. G.
2017-08-01
Shortcuts to adiabaticity let a system reach the results of a slow adiabatic process in a shorter time. We propose to quantify the "energy cost" of the shortcut by the energy consumption of the system enlarged by including the control device. A mechanical model where the dynamics of the system and control device can be explicitly described illustrates that a broad range of possible values for the consumption is possible, including zero (above the adiabatic energy increment) when friction is negligible and the energy given away as negative power is stored and reused by perfect regenerative braking.
The Development of a Physician Vitality Program: A Brief Report.
Hernandez, Barbara Couden; Thomas, Tamara L
2015-10-01
We describe the development of an innovative program to support physician vitality. We provide the context and process of program delivery which includes a number of experimental support programs. We discuss a model for intervention and methods used to enhance physician resilience, support work-life balance, and change the culture to one that explicitly addresses the physician's biopsychosocial-spiritual needs. Recommendations are given for marriage and family therapists (MFTs) who wish to develop similar support programs for healthcare providers. Video Abstract. © 2014 American Association for Marriage and Family Therapy.
A Unified Framework for Monetary Theory and Policy Analysis.
ERIC Educational Resources Information Center
Lagos, Ricardo; Wright, Randall
2005-01-01
Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…
A Naturalistic Inquiry into Praxis When Education Instructors Use Explicit Metacognitive Modeling
ERIC Educational Resources Information Center
Shannon, Nancy Gayle
2014-01-01
This naturalistic inquiry brought together six education instructors in one small teacher preparation program to explore what happens to educational instructors' praxis when the education instructors use explicit metacognitive modeling to reveal their thinking behind their pedagogical decision-making. The participants, while teaching an…
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
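A schematic of this kind of spatially explicit trend model, in illustrative notation rather than the exact BBS parameterization (which also includes observer and overdispersion effects), is:

```latex
% Schematic spatially explicit trend model with an intrinsic CAR prior on
% cell-level slopes (and, analogously, intercepts); \partial i denotes the
% neighbours of cell i and n_i their number:
y_{i,t} \sim \mathrm{Poisson}(\lambda_{i,t}), \qquad
\log \lambda_{i,t} = \alpha_i + \beta_i (t - t_0) + \epsilon_{i,t},
\qquad
\beta_i \mid \beta_{-i} \sim \mathrm{N}\!\left(\frac{1}{n_i} \sum_{j \in \partial i} \beta_j,\;
\frac{\sigma_\beta^2}{n_i}\right).
% The interval trend in cell i is then 100 (e^{\beta_i} - 1) percent per year.
```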
Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?
NASA Astrophysics Data System (ADS)
Zhou, Ruhong; Berne, Bruce J.
2002-10-01
The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
Can a continuum solvent model reproduce the free energy landscape of a β-hairpin folding in water?
Zhou, Ruhong; Berne, Bruce J.
2002-01-01
The folding free energy landscape of the C-terminal β-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the β-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native β-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this β-hairpin. Furthermore, the β-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and ≈80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields. PMID:12242327
Zhou, Ruhong; Berne, Bruce J
2002-10-01
The folding free energy landscape of the C-terminal beta-hairpin of protein G is explored using the surface-generalized Born (SGB) implicit solvent model, and the results are compared with the landscape from an earlier study with explicit solvent model. The OPLSAA force field is used for the beta-hairpin in both implicit and explicit solvent simulations, and the conformational space sampling is carried out with a highly parallel replica-exchange method. Surprisingly, we find from exhaustive conformation space sampling that the free energy landscape from the implicit solvent model is quite different from that of the explicit solvent model. In the implicit solvent model some nonnative states are heavily overweighted, and more importantly, the lowest free energy state is no longer the native beta-strand structure. An overly strong salt-bridge effect between charged residues (E42, D46, D47, E56, and K50) is found to be responsible for this behavior in the implicit solvent model. Despite this, we find that the OPLSAA/SGB energies of all the nonnative structures are higher than that of the native structure; thus the OPLSAA/SGB energy is still a good scoring function for structure prediction for this beta-hairpin. Furthermore, the beta-hairpin population at 282 K is found to be less than 40% from the implicit solvent model, which is much smaller than the 72% from the explicit solvent model and approximately equal 80% from experiment. On the other hand, both implicit and explicit solvent simulations with the OPLSAA force field exhibit no meaningful helical content during the folding process, which is in contrast to some very recent studies using other force fields.
Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits
NASA Technical Reports Server (NTRS)
Kopasakis, George
2015-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectral to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
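One common way to realize the "products of first order transfer functions" idea is to alternate real zeros and poles spaced geometrically across the frequency band of interest, so that the average magnitude slope matches 20α dB/decade. The sketch below uses that generic construction with scipy; it is not necessarily the specific formulation derived in the paper, and the band limits and exponent are illustrative.

```python
import numpy as np
from scipy.signal import freqs_zpk

def fractional_zpk(alpha, w_lo, w_hi, n):
    """Approximate s**alpha over [w_lo, w_hi] (rad/s) by n first-order zero/pole pairs.

    Zeros are placed every factor r across the band, each followed by a pole a
    factor r**alpha higher, so the average slope is +20*alpha dB/decade.
    (A common generic construction; not necessarily the paper's formulation.)
    """
    r = (w_hi / w_lo) ** (1.0 / n)
    zeros = w_lo * r ** np.arange(n)
    poles = zeros * r ** alpha
    gain = 1.0
    return -zeros, -poles, gain              # left-half-plane zeros/poles

alpha = 1.0 / 3.0                            # illustrative fractional exponent
z, p, k = fractional_zpk(alpha, w_lo=1e-2, w_hi=1e3, n=10)
w = np.logspace(-2, 3, 300)
_, h = freqs_zpk(z, p, k, worN=w)
inner = (w >= 1e-1) & (w <= 1e2)             # fit well inside the approximation band
slope = np.polyfit(np.log10(w[inner]), 20 * np.log10(np.abs(h[inner])), 1)[0]
print(f"fitted magnitude slope: {slope:.2f} dB/decade (target {20 * alpha:.2f})")
```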
Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits
NASA Technical Reports Server (NTRS)
Kopasakis, George
2010-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectral to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
Modeling a SI epidemic with stochastic transmission: hyperbolic incidence rate.
Christen, Alejandra; Maulén-Yañez, M Angélica; González-Olivares, Eduardo; Curé, Michel
2018-03-01
In this paper a stochastic susceptible-infectious (SI) epidemic model is analysed, which is based on the model proposed by Roberts and Saha (Appl Math Lett 12: 37-41, 1999), considering a hyperbolic type nonlinear incidence rate. Assuming the proportion of infected population varies with time, our new model is described by an ordinary differential equation, which is analogous to the equation that describes the double Allee effect. The limit of the solution of this equation (deterministic model) is found when time tends to infinity. Then, the asymptotic behaviour of a stochastic fluctuation due to the environmental variation in the coefficient of disease transmission is studied. Thus a stochastic differential equation (SDE) is obtained and the existence of a unique solution is proved. Moreover, the SDE is analysed through the associated Fokker-Planck equation to obtain the invariant measure when the proportion of the infected population reaches steady state. An explicit expression for invariant measure is found and we study some of its properties. The long time behaviour of deterministic and stochastic models are compared by simulations. According to our knowledge this incidence rate has not been previously used for this type of epidemic models.
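A minimal Euler-Maruyama sketch of an SI-type SDE with a hyperbolic incidence term is shown below; the specific functional form and the parameter values are illustrative stand-ins rather than the exact model analysed in the paper.

```python
import numpy as np

# Euler-Maruyama sketch for an SI-type SDE with a hyperbolic incidence term.
# The form beta*p*(1-p)/(1+a*p) and all parameter values are illustrative only.
rng = np.random.default_rng(1)
beta, a, mu, sigma = 0.8, 2.0, 0.1, 0.3
dt, steps = 0.01, 20000
p = np.empty(steps + 1)
p[0] = 0.05                                   # initial infected proportion

def drift(p):
    return beta * p * (1 - p) / (1 + a * p) - mu * p

def diffusion(p):                             # environmental noise on transmission
    return sigma * p * (1 - p) / (1 + a * p)

for k in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    p[k + 1] = np.clip(p[k] + drift(p[k]) * dt + diffusion(p[k]) * dW, 0.0, 1.0)

print(f"mean infected proportion over the last half: {p[steps // 2:].mean():.3f}")
```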
Generalized reproduction numbers and the prediction of patterns in waterborne disease.
Gatto, Marino; Mari, Lorenzo; Bertuzzo, Enrico; Casagrandi, Renato; Righetto, Lorenzo; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2012-11-27
Understanding, predicting, and controlling outbreaks of waterborne diseases are crucial goals of public health policies, but pose challenging problems because infection patterns are influenced by spatial structure and temporal asynchrony. Although explicit spatial modeling is made possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is still lacking. Here we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0, explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology, and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
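The onset criterion can be illustrated with a toy nonnegative matrix standing in for G0 (in the paper G0 is assembled from local epidemiological parameters and the hydrological and human-mobility networks; the entries below are invented). Note that every diagonal entry here is below one, yet the dominant eigenvalue exceeds one, loosely echoing the point that local reproduction numbers alone do not determine onset.

```python
import numpy as np

# Toy generalized reproduction matrix G0 for four connected communities.
# Entries are invented for illustration only.
G0 = np.array([
    [0.8, 0.3, 0.0, 0.1],
    [0.2, 0.9, 0.3, 0.0],
    [0.0, 0.4, 0.7, 0.2],
    [0.1, 0.0, 0.3, 0.6],
])

eigvals, eigvecs = np.linalg.eig(G0)
i = np.argmax(eigvals.real)
lam0 = eigvals[i].real                         # generalized reproduction number Lambda_0
pattern = np.abs(eigvecs[:, i].real)
pattern /= pattern.sum()                       # dominant eigenvector ~ geographical outbreak pattern

print(f"Lambda_0 = {lam0:.3f} -> {'outbreak possible' if lam0 > 1 else 'no outbreak'}")
print("relative outbreak pattern across communities:", np.round(pattern, 3))
```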
Experiment for Integrating Dutch 3D Spatial Planning and BIM for Checking Building Permits
NASA Astrophysics Data System (ADS)
van Berlo, L.; Dijkmans, T.; Stoter, J.
2013-09-01
This paper presents a research project in The Netherlands in which several SMEs collaborated to create a 3D model of the National spatial planning information. This 2D information system described in the IMRO data standard holds implicit 3D information that can be used to generate an explicit 3D model. The project realized a proof of concept to generate a 3D spatial planning model. The team used the model to integrate it with several 3D Building Information Models (BIMs) described in the open data standard Industry Foundation Classes (IFC). Goal of the project was (1) to generate a 3D BIM model from spatial planning information to be used by the architect during the early design phase, and (2) allow 3D checking of building permits. The team used several technologies like CityGML, BIM clash detection and GeoBIM to explore the potential of this innovation. Within the project a showcase was created with a part of the spatial plan from the city of The Hague. Several BIM models were integrated in the 3D spatial plan of this area. A workflow has been described that demonstrates the benefits of collaboration between the spatial domain and the AEC industry in 3D. The research results in a showcase with conclusions and considerations for both national and international practice.
Probing eukaryotic cell mechanics via mesoscopic simulations
Shang, Menglin; Lim, Chwee Teck
2017-01-01
Cell mechanics has proven to be important in many biological processes. Although a number of experimental techniques allow us to study the mechanical properties of cells, there is still a lack of understanding of the role each sub-cellular component plays during cell deformation. We present a new mesoscopic particle-based eukaryotic cell model which explicitly describes the cell membrane, nucleus, and cytoskeleton. We employ the Dissipative Particle Dynamics (DPD) method, which provides a unified framework for modeling a cell and its interactions in the flow. Data from micropipette aspiration experiments were used to define model parameters. The model was validated using data from microfluidic experiments. The validated model was then applied to study the impact of the sub-cellular components on the cell viscoelastic response in micropipette aspiration and microfluidic experiments. PMID:28922399
Ward identities and combinatorics of rainbow tensor models
NASA Astrophysics Data System (ADS)
Itoyama, H.; Mironov, A.; Morozov, A.
2017-06-01
We discuss the notion of renormalization group (RG) completion of non-Gaussian Lagrangians and its treatment within the framework of Bogoliubov-Zimmermann theory in application to the matrix and tensor models. With the example of the simplest non-trivial RGB tensor theory (Aristotelian rainbow), we introduce a few methods, which allow one to connect calculations in the tensor models to those in the matrix models. As a byproduct, we obtain some new factorization formulas and sum rules for the Gaussian correlators in the Hermitian and complex matrix theories, square and rectangular. These sum rules describe correlators as solutions to finite linear systems, which are much simpler than the bilinear Hirota equations and the infinite Virasoro recursion. Search for such relations can be a way to solving the tensor models, where an explicit integrability is still obscure.
Clusters in nonsmooth oscillator networks
NASA Astrophysics Data System (ADS)
Nicks, Rachel; Chambon, Lucie; Coombes, Stephen
2018-03-01
For coupled oscillator networks with Laplacian coupling, the master stability function (MSF) has proven a particularly powerful tool for assessing the stability of the synchronous state. Using tools from group theory, this approach has recently been extended to treat more general cluster states. However, the MSF and its generalizations require the determination of a set of Floquet multipliers from variational equations obtained by linearization around a periodic orbit. Since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. Here, we show that further insight into network dynamics can be obtained by focusing on piecewise linear (PWL) oscillator models. Not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. The price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of Jacobians. This is naturally accommodated with the use of saltation matrices. By augmenting the variational approach for studying smooth dynamical systems with such matrices we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. By way of illustration, we analyze an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar PWL nodes, including a reduction of the popular Morris-Lecar neuron model. We use these examples to emphasize that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity. Importantly, the procedure that we present here, for understanding cluster synchronization in networks, is valid for a wide variety of systems in biology, physics, and engineering that can be described by PWL oscillators.
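For reference, the saltation matrix referred to above has the standard form below for a transversal crossing of a switching surface h(x) = 0 without a state reset (event-driven resets additionally contribute the Jacobian of the reset map); f^- and f^+ denote the vector fields just before and after the event.

```latex
% Saltation matrix for a transversal crossing of the switching surface h(x) = 0
% at the event point x*, with pre- and post-event vector fields f^- and f^+:
S \;=\; I \;+\; \frac{\bigl(f^{+}(x^{*}) - f^{-}(x^{*})\bigr)\,\nabla h(x^{*})^{\top}}
                     {\nabla h(x^{*})^{\top}\, f^{-}(x^{*})},
\qquad \nabla h(x^{*})^{\top} f^{-}(x^{*}) \neq 0 .
% The Floquet multipliers of a cluster state then follow from the product of the
% smooth variational factors between events, interleaved with the saltation
% matrices at each event.
```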
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.
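As a hedged illustration of how a long-running service can be described to a scheduler in this kind of setup: Marathon is one widely used Mesos framework for long-running services, although the abstract does not state which schedulers are used at RAL. The endpoint URL, image name, and resource figures below are placeholders, not RAL's configuration.

```python
import requests

# Hypothetical Marathon app definition for a containerized service; the URL,
# image, and sizes are placeholders and not RAL's actual deployment.
app = {
    "id": "/tier1/squid-proxy",
    "instances": 3,
    "cpus": 1.0,
    "mem": 2048,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "registry.example.org/squid:latest", "network": "HOST"},
    },
    "healthChecks": [{"protocol": "TCP", "portIndex": 0, "intervalSeconds": 30}],
}

resp = requests.post("http://marathon.example.org:8080/v2/apps", json=app, timeout=10)
resp.raise_for_status()   # the scheduler then places the containers across the Mesos cluster
```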
Cerebellar input configuration toward object model abstraction in manipulation tasks.
Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo
2011-08-01
It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
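Schematically, a Gompertz-type growth SDE with a noise amplitude that decreases with log plant weight can be written as below; this records only the qualitative features stated in the abstract, not the paper's exact specification.

```latex
% Illustrative Gompertz-type growth SDE for plant weight W_t:
dW_t \;=\; W_t \bigl(a - b \ln W_t\bigr)\,dt \;+\; \sigma(\ln W_t)\, W_t \, dB_t ,
% with a, b > 0, B_t a standard Brownian motion, and \sigma(\cdot) a decreasing
% function, so larger plants experience proportionally weaker stochastic
% fluctuations; asymmetric, size- and space-dependent competition enters through
% reductions of the effective growth rate a for smaller or crowded plants.
```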
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
Timóteo, Sérgio; Correia, Marta; Rodríguez-Echeverría, Susana; Freitas, Helena; Heleno, Ruben
2018-01-10
Species interaction networks are traditionally explored as discrete entities with well-defined spatial borders, an oversimplification likely impairing their applicability. Using a multilayer network approach, explicitly accounting for inter-habitat connectivity, we investigate the spatial structure of seed-dispersal networks across the Gorongosa National Park, Mozambique. We show that the overall seed-dispersal network is composed by spatially explicit communities of dispersers spanning across habitats, functionally linking the landscape mosaic. Inter-habitat connectivity determines spatial structure, which cannot be accurately described with standard monolayer approaches either splitting or merging habitats. Multilayer modularity cannot be predicted by null models randomizing either interactions within each habitat or those linking habitats; however, as habitat connectivity increases, random processes become more important for overall structure. The importance of dispersers for the overall network structure is captured by multilayer versatility but not by standard metrics. Highly versatile species disperse many plant species across multiple habitats, being critical to landscape functional cohesion.
NASA Astrophysics Data System (ADS)
Riera, Marc; Mardirossian, Narbe; Bajaj, Pushp; Götz, Andreas W.; Paesani, Francesco
2017-10-01
This study presents the extension of the MB-nrg (Many-Body energy) theoretical/computational framework of transferable potential energy functions (PEFs) for molecular simulations of alkali metal ion-water systems. The MB-nrg PEFs are built upon the many-body expansion of the total energy and include the explicit treatment of one-body, two-body, and three-body interactions, with all higher-order contributions described by classical induction. This study focuses on the MB-nrg two-body terms describing the full-dimensional potential energy surfaces of the M+(H2O) dimers, where M+ = Li+, Na+, K+, Rb+, and Cs+. The MB-nrg PEFs are derived entirely from "first principles" calculations carried out at the explicitly correlated coupled-cluster level including single, double, and perturbative triple excitations [CCSD(T)-F12b] for Li+ and Na+ and at the CCSD(T) level for K+, Rb+, and Cs+. The accuracy of the MB-nrg PEFs is systematically assessed through an extensive analysis of interaction energies, structures, and harmonic frequencies for all five M+(H2O) dimers. In all cases, the MB-nrg PEFs are shown to be superior to both polarizable force fields and ab initio models based on density functional theory. As previously demonstrated for halide-water dimers, the MB-nrg PEFs achieve higher accuracy by correctly describing short-range quantum-mechanical effects associated with electron density overlap as well as long-range electrostatic many-body interactions.
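For reference, the many-body expansion underlying the MB-nrg potentials has the standard form below (schematic notation).

```latex
% Many-body expansion of the total energy of the ion-water cluster M+(H2O)_N;
% MB-nrg fits explicit one-, two-, and three-body terms and describes all
% higher-order contributions by classical induction:
E_{1 \ldots N} \;=\; \sum_{i} \varepsilon^{(1\mathrm{B})}_{i}
          \;+\; \sum_{i<j} \varepsilon^{(2\mathrm{B})}_{ij}
          \;+\; \sum_{i<j<k} \varepsilon^{(3\mathrm{B})}_{ijk}
          \;+\; \cdots
```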
Medical students' perceptions of the patient-centredness of the learning environment.
Wilcox, Mark V; Orlando, Megan S; Rand, Cynthia S; Record, Janet; Christmas, Colleen; Ziegelstein, Roy C; Hanyok, Laura A
2017-02-01
Patient-centred care is an important aspect of quality health care. The learning environment may impact medical students' adoption of patient-centred behaviours. All medical students at a single institution received an anonymous, modified version of the Communication, Curriculum, and Culture instrument that measures patient-centredness in the training environment along three domains: role modelling, students' experience, and support for patient-centred behaviours. We compared domain scores and individual items by class year and gender, and qualitatively analyzed responses to two additional items that asked students to describe experiences that demonstrated varying degrees of patient-centredness. Year 1 and 2 students reported greater patient-centredness than year 3 and 4 students in each domain: role modelling (p = 0.03), students' experience (p < 0.001), and support for patient-centred behaviours (p < 0.001). Female students reported less support for patient-centred behaviours compared with male students (p = 0.03). Qualitative analysis revealed that explicit patient-centred curricula and positive role modelling fostered patient-centredness. Themes relating to low degrees of patient-centredness included negative role modelling and students being discouraged from being patient-centred. Students' perceptions of the patient-centredness of the learning environment decreased as students progressed through medical school, despite increasing exposure to patients. Qualitative analysis found that explicit patient-centred curricula cultivated patient-centred attitudes. Role modelling impacted student perceptions of patient-centredness within the learning environment.
ERIC Educational Resources Information Center
Schneider, Darryl W.; Logan, Gordon D.
2005-01-01
Switch costs in task switching are commonly attributed to an executive control process of task-set reconfiguration, particularly in studies involving the explicit task-cuing procedure. The authors propose an alternative account of explicitly cued performance that is based on 2 mechanisms: priming of cue encoding from residual activation of cues in…
The Things You Do: Internal Models of Others’ Expected Behaviour Guide Action Observation
Schenke, Kimberley C.; Wyer, Natalie A.; Bach, Patric
2016-01-01
Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of the explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported. PMID:27434265
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T
2009-07-09
Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
Matter-coupled de Sitter supergravity
NASA Astrophysics Data System (ADS)
Kallosh, R. E.
2016-05-01
The de Sitter supergravity describes the interaction of supergravity with general chiral and vector multiplets and also one nilpotent chiral multiplet. The extra universal positive term in the potential, generated by the nilpotent multiplet and corresponding to the anti-D3 brane in string theory, is responsible for the de Sitter vacuum stability in these supergravity models. In the flat-space limit, these supergravity models include the Volkov-Akulov model with a nonlinearly realized supersymmetry. We generalize the rules for constructing the pure de Sitter supergravity action to the case of models containing other matter multiplets. We describe a method for deriving the closed-form general supergravity action with a given potential K, superpotential W, and vector matrix f_AB, interacting with a nilpotent chiral multiplet. It has the potential V = e^K (|F|^2 + |DW|^2 - 3|W|^2), where F is the auxiliary field of the nilpotent multiplet and is necessarily nonzero. De Sitter vacua are present under the simple condition that |F|^2 - 3|W|^2 > 0. We present an explicit form of the complete action in the unitary gauge.
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as Java software program and available as open-source from http://www.narrator-tool.org.
Modeling biochemical transformation processes and information processing with Narrator
Mandel, Johannes J; Fuß, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-01-01
Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as Java software program and available as open-source from . PMID:17389034
Test Input Generation for Red-Black Trees using Abstraction
NASA Technical Reports Server (NTRS)
Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek
2005-01-01
We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
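For readers unfamiliar with state matching, the hedged sketch below mimics the idea in plain Python on a toy set-like container rather than on red-black trees under Java PathFinder; the container, the size-based abstraction, and the sequence bound are assumptions for illustration only.

```python
from collections import deque

# Illustrative sketch (not the Java PathFinder implementation): breadth-first
# generation of method-call sequences for a set-like container, pruning any
# sequence whose (possibly abstracted) state has already been seen.

def abstraction(state):
    # Lossy abstraction: forget the concrete keys, keep only the size.
    return len(state)

def generate_tests(values=(1, 2, 3), max_length=4, abstract=False):
    start = frozenset()
    seen = {abstraction(start) if abstract else start}
    queue = deque([(start, [])])                  # (container state, call sequence)
    tests = []
    while queue:
        state, seq = queue.popleft()
        if len(seq) == max_length:
            continue
        for v in values:
            for op, new_state in (("add", state | {v}), ("remove", state - {v})):
                key = abstraction(new_state) if abstract else new_state
                if key in seen:
                    continue                      # state matching: skip redundant sequence
                seen.add(key)
                new_seq = seq + [f"{op}({v})"]
                tests.append(new_seq)
                queue.append((new_state, new_seq))
    return tests

# Exhaustive matching on concrete states vs. lossy matching on abstracted states
# (an under-approximation that yields fewer tests).
print(len(generate_tests(abstract=False)), "exhaustive tests vs",
      len(generate_tests(abstract=True)), "lossy (abstracted) tests")
```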
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who enjoy using the schematic approach. The proposed visual hardware behavioral language has the ability to graphically express design information using visual parallel models (blocks), visual sequential models (processes) and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for the hardware designers. This allows hardware designers to capture design ideas easily and explicitly using this visual hardware behavioral language.
Two-dimensional dispersion of magnetostatic volume spin waves
NASA Astrophysics Data System (ADS)
Buijnsters, Frank J.; van Tilburg, Lennert J. A.; Fasolino, Annalisa; Katsnelson, Mikhail I.
2018-06-01
Owing to the dipolar (magnetostatic) interaction, long-wavelength spin waves in in-plane magnetized films show an unusual dispersion behavior, which can be mathematically described by classical magnetostatic spin-wave models and refinements thereof. However, solving the two-dimensional dispersion requires the evaluation of a set of coupled transcendental equations and one has to rely on numerics. In this work, we present a systematic perturbative analysis of the spin wave model. An expansion in the in-plane wavevector allows us to obtain explicit closed-form expressions for the dispersion relation and mode profiles in various asymptotic regimes. Moreover, we derive a very accurate semi-analytical expression for the dispersion relation of the lowest-frequency mode that is straightforward to evaluate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bray, O.H.
This paper describes a natural-language-based semantic information modeling methodology and explores its use and value in clarifying and comparing political science theories and frameworks. As an example, the paper uses this methodology to clarify and compare some of the basic concepts and relationships in the realist (e.g. Waltz) and the liberal (e.g. Rosenau) paradigms for international relations. The methodology can provide three types of benefits: (1) it can clarify and make explicit exactly what is meant by a concept; (2) it can often identify unanticipated implications and consequences of concepts and relationships; and (3) it can help in identifying and operationalizing testable hypotheses.
NASA Astrophysics Data System (ADS)
Bykov, N. V.
2014-12-01
Numerical modelling of a ballistic setup with a tapered adapter and plastic piston is considered. The processes in the firing chamber are described within the framework of quasi-one-dimensional gas dynamics and a geometrical law of propellant burn by means of Lagrangian mass coordinates. The deformable piston is considered to be an ideal liquid with specific equations of state. The numerical solution is obtained by means of a modified explicit von Neumann scheme. The calculation results show that the ballistic setup with a tapered adapter and plastic piston increases shell muzzle velocities by a factor of 1.5-2.
Application of the Hughes-Liu algorithm to the 2-dimensional heat equation
NASA Technical Reports Server (NTRS)
Malkus, D. S.; Reichmann, P. I.; Haftka, R. T.
1982-01-01
An implicit-explicit algorithm for the solution of transient problems in structural dynamics is described. The method involves dividing the finite elements into implicit and explicit groups while automatically satisfying the conditions. This algorithm is applied to the solution of the linear, transient, two-dimensional heat equation subject to an initial condition derived from the solution of a steady-state problem over an L-shaped region made up of a good conductor and an insulating material. Using the IIT/PRIME computer with virtual memory, a FORTRAN computer code was developed to make accuracy, stability, and cost comparisons among the fully explicit Euler, the Hughes-Liu, and the fully implicit Crank-Nicolson algorithms. The Hughes-Liu claim that the explicit group governs the stability of the entire region while the implicit group retains its unconditional stability is illustrated.
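The underlying idea of partitioning unknowns into explicit and implicit groups can be sketched for the one-dimensional heat equation as below. This is an additive implicit-explicit Euler split, not the Hughes-Liu element formulation itself, and the grid, the half-and-half partition, and the parameters are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of a partitioned implicit-explicit Euler step for the 1D heat
# equation u_t = alpha * u_xx (not the Hughes-Liu element formulation).
# Nodes in the left half form the explicit group and nodes in the right half
# the implicit group:  (I - dt*A_imp) u^{n+1} = (I + dt*A_exp) u^n.

n, alpha = 50, 1.0
dx = 1.0 / (n - 1)
dt = 1.0e-4                                   # inside the explicit limit dx**2 / (2*alpha)
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) * alpha / dx**2

explicit_rows = np.arange(n) < n // 2         # left half treated explicitly
A_exp = np.where(explicit_rows[:, None], lap, 0.0)
A_imp = lap - A_exp

u = np.sin(np.pi * np.linspace(0.0, 1.0, n))  # initial condition, ~0 at both ends
lhs = np.eye(n) - dt * A_imp
for _ in range(200):
    u = np.linalg.solve(lhs, u + dt * (A_exp @ u))

print(float(u.max()))                         # decays roughly like exp(-pi**2 * alpha * t)
```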
A Three-Stage Model of Housing Search,
1980-05-01
Hanushek and Quigley, 1978) that recognize housing search as a transaction cost but rarely examine search behavior; and descriptive studies of search...explicit mobility models that have recently appeared in the literature (Speare et al., 1975; Hanushek and Quigley, 1978; Brummell, 1979). Although...1978; Hanushek and Quigley, 1978; Cronin, 1978). By explicitly assigning dollar values, the economic models attempt to obtain an objective measure of
DoD Product Line Practice Workshop Report
1998-05-01
capability. The essential enterprise management practices include ensuring sound business goals, providing an appropriate funding model, performing...business. This way requires vision and explicit support at the organizational level. There must be an explicit funding model to support the development...the same group seems to work best in smaller organizations. A funding model for core asset development also needs to be developed because the core
Effects-based strategy development through center of gravity and target system analysis
NASA Astrophysics Data System (ADS)
White, Christopher M.; Prendergast, Michael; Pioch, Nicholas; Jones, Eric K.; Graham, Stephen
2003-09-01
This paper describes an approach to effects-based planning in which a strategic-theater-level mission is refined into operational-level and ultimately tactical-level tasks and desired effects, informed by models of the expected enemy response at each level of abstraction. We describe a strategy development system that implements this approach and supports human-in-the-loop development of an effects-based plan. This system consists of plan authoring tools tightly integrated with a suite of center of gravity (COG) and target system analysis tools. A human planner employs the plan authoring tools to develop a hierarchy of tasks and desired effects. Upon invocation, the target system analysis tools use reduced-order models of enemy centers of gravity to select appropriate target set options for the achievement of desired effects, together with associated indicators for each option. The COG analysis tools also provide explicit models of the causal mechanisms linking tasks and desired effects to one another, and suggest appropriate observable indicators to guide ISR planning, execution monitoring, and campaign assessment. We are currently implementing the system described here as part of the AFRL-sponsored Effects Based Operations program.
A model for foam formation, stability, and breakdown in glass-melting furnaces.
van der Schaaf, John; Beerkens, Ruud G C
2006-03-01
A dynamic model for describing the build-up and breakdown of a glass-melt foam is presented. The foam height is determined by the gas flux to the glass-melt surface and the drainage rate of the liquid lamellae between the gas bubbles. The drainage rate is determined by the average gas bubble radius and the physical properties of the glass melt: density, viscosity, surface tension, and interfacial mobility. Neither the assumption of a fully mobile nor the assumption of a fully immobile glass-melt interface describes the observed foam formation on glass melts adequately. The glass-melt interface appears partially mobile due to the presence of surface active species, e.g., sodium sulfate and silanol groups. The partial mobility can be represented by a single parameter psi specific to the glass-melt composition. The value of psi can be estimated from gas bubble lifetime experiments under furnace conditions. With this parameter, laboratory experiments of foam build-up and breakdown in a glass melt are adequately described, qualitatively and quantitatively, by a set of ordinary differential equations. An approximate explicit relationship for the prediction of the steady-state foam height is derived from the fundamental model.
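As a rough illustration of how a steady-state foam height emerges from a balance between gas supply and drainage, the toy ordinary differential equation below can be integrated. It is not the model of the paper; the flux and drainage time are made-up values.

```python
# Toy balance (not the model of the paper): foam height H grows with the gas
# flux j_g reaching the melt surface and decays by lamella drainage with a
# characteristic time tau, so dH/dt = j_g - H / tau and H_ss = j_g * tau.
# The values of j_g and tau below are made up for illustration.

j_g, tau, dt = 2.0e-4, 600.0, 1.0         # m/s, s, s
H = 0.0
for _ in range(10000):                    # roughly 2.8 hours of build-up
    H += dt * (j_g - H / tau)
print(H, "m, vs steady-state estimate", j_g * tau, "m")
```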
A Minimal Three-Dimensional Tropical Cyclone Model.
NASA Astrophysics Data System (ADS)
Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang
2001-07-01
A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated on an f or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate-strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes. The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieder, William R.; Allison, Steven D.; Davidson, Eric A.
Microbes influence soil organic matter (SOM) decomposition and the long-term stabilization of carbon (C) in soils. We contend that by revising the representation of microbial processes and their interactions with the physicochemical soil environment, Earth system models (ESMs) may make more realistic global C cycle projections. Explicit representation of microbial processes presents considerable challenges due to the scale at which these processes occur. Thus, applying microbial theory in ESMs requires a framework to link micro-scale process-level understanding and measurements to macro-scale models used to make decadal- to century-long projections. Here, we review the diversity, advantages, and pitfalls of simulating soil biogeochemical cycles using microbial-explicit modeling approaches. We present a roadmap for how to begin building, applying, and evaluating reliable microbial-explicit model formulations that can be applied in ESMs. Drawing from experience with traditional decomposition models we suggest: (1) guidelines for common model parameters and output that can facilitate future model intercomparisons; (2) development of benchmarking and model-data integration frameworks that can be used to effectively guide, inform, and evaluate model parameterizations with data from well-curated repositories; and (3) the application of scaling methods to integrate microbial-explicit soil biogeochemistry modules within ESMs. With contributions across scientific disciplines, we feel this roadmap can advance our fundamental understanding of soil biogeochemical dynamics and more realistically project likely soil C response to environmental change at global scales.
Self-Love or Other-Love? Explicit Other-Preference but Implicit Self-Preference
Gebauer, Jochen E.; Göritz, Anja S.; Hofmann, Wilhelm; Sedikides, Constantine
2012-01-01
Do humans prefer the self even over their favorite other person? This question has pervaded philosophy and social-behavioral sciences. Psychology’s distinction between explicit and implicit preferences calls for a two-tiered solution. Our evolutionarily-based Dissociative Self-Preference Model offers two hypotheses. Other-preferences prevail at an explicit level, because they convey caring for others, which strengthens interpersonal bonds–a major evolutionary advantage. Self-preferences, however, prevail at an implicit level, because they facilitate self-serving automatic behavior, which favors the self in life-or-die situations–also a major evolutionary advantage. We examined the data of 1,519 participants, who completed an explicit measure and one of five implicit measures of preferences for self versus favorite other. The results were consistent with the Dissociative Self-Preference Model. Explicitly, participants preferred their favorite other over the self. Implicitly, however, they preferred the self over their favorite other (be it their child, romantic partner, or best friend). Results are discussed in relation to evolutionary theorizing on self-deception. PMID:22848605
We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quant...
We have developed a modeling framework to support grid-based simulation of ecosystems at multiple spatial scales, the Ecological Component Library for Parallel Spatial Simulation (ECLPSS). ECLPSS helps ecologists to build robust spatially explicit simulations of ...
NASA Astrophysics Data System (ADS)
Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.
2012-12-01
Metacommunity and individual-based theoretical models are studied in the context of the spreading of infections of water-borne diseases along the ecological corridors defined by river basins and networks of human mobility. The overarching claim is that mathematical models can indeed provide predictive insight into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and in anticipating the impact of alternative interventions. To support the claim, we examine the ex-post reliability of published predictions of the 2010-2011 Haiti cholera outbreak from four independent modeling studies that appeared almost simultaneously during the unfolding epidemic. For each modeled epidemic trajectory, it is assessed how well predictions reproduced the observed spatial and temporal features of the outbreak to date. The impact of different approaches to the modeling of the spatial spread of V. cholerae, the mechanics of cholera transmission, and the dynamics of susceptible and infected individuals within different local human communities is considered. A generalized model for Haitian epidemic cholera and the related uncertainty is thus constructed and applied to the year-long dataset of reported cases now available. Specific emphasis is dedicated to models of human mobility, a fundamental infection mechanism. Lessons learned and open issues are discussed and placed in perspective, supporting the conclusion that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control. Although explicit spatial modeling is made routinely possible by widespread data mapping of hydrology, transportation infrastructure, population distribution, and sanitation, the precise condition under which a waterborne disease epidemic can start in a spatially explicit setting is still lacking. Here, we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0 explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
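The onset criterion based on the dominant eigenvalue of a reproduction matrix can be illustrated numerically as below; the small network and the matrix entries are assumptions for illustration, not the calibrated Haiti or KwaZulu-Natal models.

```python
import numpy as np

# Illustrative onset check for a networked waterborne-disease model: G0[i, j]
# is the expected number of secondary infections in community i caused by one
# infected individual in community j, combining a local term with a made-up
# hydrological/mobility coupling along a chain of settlements.

local_R0 = np.array([0.8, 0.9, 0.7, 1.1])            # three of four communities below 1
coupling = 0.25 * np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [0, 0, 1, 0]], float)     # chain of river / mobility links
G0 = np.diag(local_R0) + coupling

eigvals, eigvecs = np.linalg.eig(G0)
k = int(np.argmax(eigvals.real))
print("generalized reproduction number:", eigvals.real[k])                 # exceeds 1 here
print("dominant eigenvector (expected outbreak pattern):", np.abs(eigvecs[:, k].real))
```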
Marissen, Marlies A E; Brouwer, Marlies E; Hiemstra, Annemarie M F; Deen, Mathijs L; Franken, Ingmar H A
2016-08-30
The mask model of narcissism states that the narcissistic traits of patients with NPD are the result of a compensatory reaction to underlying ego fragility. This model assumes that high explicit self-esteem masks low implicit self-esteem. However, research on narcissism has predominantly focused on non-clinical participants and data derived from patients diagnosed with Narcissistic Personality Disorder (NPD) remain scarce. Therefore, the goal of the present study was to test the mask model hypothesis of narcissism among patients with NPD. Male patients with NPD were compared to patients with other PDs and healthy participants on implicit and explicit self-esteem. NPD patients did not differ in levels of explicit and implicit self-esteem compared to both the psychiatric and the healthy control group. Overall, the current study found no evidence in support of the mask model of narcissism among a clinical group. This implies that it might not be relevant for clinicians to focus treatment of NPD on an underlying negative self-esteem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
A Computer Model of Insect Traps in a Landscape
NASA Astrophysics Data System (ADS)
Manoukis, Nicholas C.; Hall, Brian; Geib, Scott M.
2014-11-01
Attractant-based trap networks are important elements of invasive insect detection, pest control, and basic research programs. We present a landscape-level, spatially explicit model of trap networks, focused on detection, that incorporates variable attractiveness of traps and a movement model for insect dispersion. We describe the model and validate its behavior using field trap data on networks targeting two species, Ceratitis capitata and Anoplophora glabripennis. Our model will assist efforts to optimize trap networks by 1) introducing an accessible and realistic mathematical characterization of the operation of a single trap that lends itself easily to parametrization via field experiments and 2) allowing direct quantification and comparison of sensitivity between trap networks. Results from the two case studies indicate that the relationship between number of traps and their spatial distribution and capture probability under the model is qualitatively dependent on the attractiveness of the traps, a result with important practical consequences.
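A toy calculation along these lines, assuming a made-up exponential attractiveness kernel rather than the paper's dispersal model, shows how network-level detection probability can be compared between trap layouts.

```python
import numpy as np

# Toy comparison (not the paper's movement model): each trap captures an insect
# from an outbreak at the origin with a probability that decays with distance,
# and the network detects the outbreak if any trap captures it:
#   P(detect) = 1 - prod_i (1 - p_i).
# The exponential kernel and the parameters p0 and scale are made up.

def detection_probability(trap_xy, p0=0.3, scale=250.0):
    d = np.linalg.norm(trap_xy, axis=1)          # distance (m) from outbreak at the origin
    p = p0 * np.exp(-d / scale)                  # per-trap capture probability
    return 1.0 - np.prod(1.0 - p)

rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in (-400, 0, 400) for y in (-400, 0, 400)], float)
clustered = rng.uniform(-150.0, 150.0, size=(9, 2))
print("9 traps on a grid:", detection_probability(grid))
print("9 clustered traps:", detection_probability(clustered))
```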
Numerical model of water flow in a fractured basalt vadose zone: Box Canyon Site, Idaho
NASA Astrophysics Data System (ADS)
Doughty, Christine
2000-12-01
A numerical model of a fractured basalt vadose zone has been developed on the basis of the conceptual model described by Faybishenko et al. [this issue]. The model has been used to simulate a ponded infiltration test in order to investigate infiltration through partially saturated fractured basalt. A key question addressed is how the fracture pattern geometry and fracture connectivity within a single basalt flow of the Snake River Plain basalt affect water infiltration. The two-dimensional numerical model extends from the ground surface to a perched water body 20 m below and uses an unconventional quasi-deterministic approach with explicit but highly simplified representation of major fractures and other important hydrogeologic features. The model adequately reproduces the majority of the field observations and provides insights into the infiltration process that cannot be obtained by data collection alone, demonstrating its value as a component of field studies.
ERIC Educational Resources Information Center
Stoel, Gerhard L.; van Drie, Jannet P.; van Boxtel, Carla A. M.
2017-01-01
This article reports an experimental study on the effects of explicit teaching on 11th grade students' ability to reason causally in history. Underpinned by the model of domain learning, explicit teaching is conceptualized as multidimensional, focusing on strategies and second-order concepts to generate and verbalize causal explanations and…
Keatley, David; Clarke, David D; Hagger, Martin S
2012-01-01
The literature on health-related behaviours and motivation is replete with research involving explicit processes and their relations with intentions and behaviour. Recently, interest has been focused on the impact of implicit processes and measures on health-related behaviours. Dual-systems models have been proposed to provide a framework for understanding the effects of explicit or deliberative and implicit or impulsive processes on health behaviours. Informed by a dual-systems approach and self-determination theory, the aim of this study was to test the effects of implicit and explicit motivation on three health-related behaviours in a sample of undergraduate students (N = 162). Implicit motives were hypothesised to predict behaviour independent of intentions while explicit motives would be mediated by intentions. Regression analyses indicated that implicit motivation predicted physical activity behaviour only. Across all behaviours, intention mediated the effects of explicit motivational variables from self-determination theory. This study provides limited support for dual-systems models and the role of implicit motivation in the prediction of health-related behaviour. Suggestions for future research into the role of implicit processes in motivation are outlined.
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for sample physical information variations. One common approach is to include physical information variation in the calibration model both explicitly and implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we directly add the particle size as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size into the calibration model is expected to improve the accuracy of type-of-coffee determination. The results show that, using the explicit method, the quality of the developed calibration model for type-of-coffee determination is slightly superior, with a coefficient of determination (R2) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for type-of-coffee determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R2 pred values. The prediction also resulted in low bias and RMSEP values.
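A hedged sketch of the explicit strategy, using scikit-learn's PLSRegression on synthetic spectra rather than the study's NIR measurements, is shown below; the simulated class signal, particle-size baseline, and model settings are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Sketch of the "explicit" strategy: the Y block carries both the coffee class
# (coded 0/1) and the particle size, so the PLS2 model is asked to explain the
# particle-size variation instead of confounding it with the class. The spectra
# are synthetic stand-ins, not the study's data.

rng = np.random.default_rng(42)
n, p = 220, 256
coffee_type = rng.integers(0, 2, n)                 # 0 = non-civet, 1 = civet
particle_um = rng.choice([212.0, 500.0], n)
wavelengths = np.linspace(0.0, 1.0, p)
X = (0.5 * coffee_type[:, None] * np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
     + 0.002 * particle_um[:, None]                 # baseline shift from particle size
     + 0.05 * rng.standard_normal((n, p)))
Y = np.column_stack([coffee_type, particle_um])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
pls2 = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred_class = (pls2.predict(X_te)[:, 0] > 0.5).astype(int)
print("type-of-coffee accuracy:", (pred_class == Y_te[:, 0]).mean())
```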
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.
1993-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.
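A comparable workflow can be sketched today with SymPy in place of MACSYMA; the one-term, incompressible Ogden form and the uniaxial loading below are simplifications of the class of models treated in the paper, and the removal of singularities at repeated principal stretches is not reproduced.

```python
import sympy as sp
from sympy import fcode

# SymPy sketch: derive the uniaxial principal Cauchy stress of a one-term
# incompressible Ogden model from its strain-energy function and emit Fortran
# source for it. This is an illustrative simplification, not the paper's
# general implementation.

lam, mu, alpha = sp.symbols("lam mu alpha", positive=True)
# Incompressible uniaxial stretch: lambda_1 = lam, lambda_2 = lambda_3 = lam**(-1/2)
W = (mu / alpha) * (lam**alpha + 2 * lam**(-alpha / 2) - 3)   # strain-energy density

# Cauchy stress in uniaxial tension (transverse stress zero): sigma = lam * dW/dlam
sigma = sp.simplify(lam * sp.diff(W, lam))
print(sigma)                                   # equivalent to mu*(lam**alpha - lam**(-alpha/2))
print(fcode(sigma, assign_to="sigma", source_format="free", standard=95))
```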
Values Engagement in Evaluation: Ideas, Illustrations, and Implications
ERIC Educational Resources Information Center
Hall, Jori N.; Ahn, Jeehae; Greene, Jennifer C.
2012-01-01
Values-engagement in evaluation involves both describing stakeholder values and prescribing certain values. Describing stakeholder values is common practice in responsive evaluation traditions. Prescribing or advocating particular values is only "explicitly" part of democratic, culturally responsive, critical, and other openly…
The Thomas–Fermi quark model: Non-relativistic aspects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Quan, E-mail: quan_liu@baylor.edu; Wilcox, Walter, E-mail: walter_wilcox@baylor.edu
The first numerical investigation of non-relativistic aspects of the Thomas–Fermi (TF) statistical multi-quark model is given. We begin with a review of the traditional TF model without an explicit spin interaction and find that the spin splittings are too small in this approach. An explicit spin interaction is then introduced which entails the definition of a generalized spin “flavor”. We investigate baryonic states in this approach which can be described with two inequivalent wave functions; such states can however apply to multiple degenerate flavors. We find that the model requires a spatial separation of quark flavors, even if completely degenerate. Although the TF model is designed to investigate the possibility of many-quark states, we find surprisingly that it may be used to fit the low energy spectrum of almost all ground state octet and decuplet baryons. The charge radii of such states are determined and compared with lattice calculations and other models. The low energy fit obtained allows us to extrapolate to the six-quark doubly strange H-dibaryon state, flavor symmetric strange states of higher quark content and possible six quark nucleon–nucleon resonances. The emphasis here is on the systematics revealed in this approach. We view our model as a versatile and convenient tool for quickly assessing the characteristics of new, possibly bound, particle states of higher quark number content. -- Highlights: • First application of the statistical Thomas–Fermi quark model to baryonic systems. • Novel aspects: spin as generalized flavor; spatial separation of quark flavor phases. • The model is statistical, but the low energy baryonic spectrum is successfully fit. • Numerical applications include the H-dibaryon, strange states and nucleon resonances. • The statistical point of view does not encourage the idea of bound many-quark baryons.
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
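The claim that a softmax over log priors and log likelihoods reproduces Bayesian posteriors can be checked numerically; the two-hypothesis, three-feature example below is an illustrative assumption, not the MIA model itself.

```python
import numpy as np

# Numerical check: a softmax over (log prior + summed log likelihoods)
# reproduces the Bayesian posterior. Two hypotheses (e.g. two candidate words)
# and three binary features; all probabilities are made up.

prior = np.array([0.7, 0.3])                     # P(h)
lik = np.array([[0.9, 0.2, 0.6],                 # P(feature_j = 1 | h) for each hypothesis h
                [0.4, 0.8, 0.5]])
x = np.array([1, 0, 1])                          # observed feature vector

# Direct Bayes rule
joint = prior * np.prod(np.where(x, lik, 1 - lik), axis=1)
posterior_bayes = joint / joint.sum()

# "Connectionist" form: bias = log prior, net input = summed log likelihoods, softmax output
net = np.log(prior) + np.sum(np.where(x, np.log(lik), np.log(1 - lik)), axis=1)
posterior_softmax = np.exp(net) / np.exp(net).sum()

print(posterior_bayes, posterior_softmax)        # identical up to floating point
```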
DNA → RNA: What Do Students Think the Arrow Means?
Fisk, J. Nick; Newman, Dina L.
2014-01-01
The central dogma of molecular biology, a model that has remained intact for decades, describes the transfer of genetic information from DNA to protein through an RNA intermediate. While recent work has illustrated many exceptions to the central dogma, it is still a common model used to describe and study the relationship between genes and protein products. We investigated understanding of central dogma concepts and found that students are not primed to think about information when presented with the canonical figure of the central dogma. We also uncovered conceptual errors in student interpretation of the meaning of the transcription arrow in the central dogma representation; 36% of students (n = 128; all undergraduate levels) described transcription as a chemical conversion of DNA into RNA or suggested that RNA existed before the process of transcription began. Interviews confirm that students with weak conceptual understanding of information flow find inappropriate meaning in the canonical representation of the central dogma. Therefore, we suggest that use of this representation during instruction can be counterproductive unless educators are explicit about the underlying meaning. PMID:26086664
Why are you here? Needs analysis of an interprofessional health-education graduate degree program
Cable, Christian; Knab, Mary; Tham, Kum Ying; Navedo, Deborah D; Armstrong, Elizabeth
2014-01-01
Little is known about the nature of faculty development that is needed to meet calls for a focus on quality and safety with particular attention to the power of interprofessional collaborative practice. Through grounded-theory methodology, the authors describe the motivation and needs of 20 educator/clinicians in multiple disciplines who chose to enroll in an explicitly interprofessional master’s program in health profession education. The results, derived from axial coding described by Strauss and Corbin, revealed that faculty pursue such postprofessional master’s degrees out of a desire to be better prepared for their roles as educators. A hybrid-delivery model on campus and online provided access to graduate degrees while protecting the ability of participants to remain in current positions. The added benefit of a community of practice related to evidence-based and innovative models of education was valued by participants. Authentic, project-based learning and assessment supported their advancement in home institutions and systems. The experience was described by participants as a disruptive innovation that helped them attain their goal of leadership in health profession education. PMID:24748830
Playing relativistic billiards beyond graphene
NASA Astrophysics Data System (ADS)
Sadurní, E.; Seligman, T. H.; Mortessagne, F.
2010-05-01
The possibility of using hexagonal structures in general, and graphene in particular, to emulate the Dirac equation is the topic under consideration here. We show that Dirac oscillators with or without rest mass can be emulated by distorting a tight-binding model on a hexagonal structure. In the quest to make a toy model for such relativistic equations, we first show that a hexagonal lattice of attractive potential wells would be a good candidate. Firstly, we consider the corresponding one-dimensional (1D) model giving rise to a 1D Dirac oscillator and then construct explicitly the deformations needed in the 2D case. Finally, we discuss how such a model can be implemented as an electromagnetic billiard using arrays of dielectric resonators between two conducting plates that ensure evanescent modes outside the resonators for transversal electric modes, and we describe a feasible experimental setup.
NASA Technical Reports Server (NTRS)
Boville, B. A.; Kiehl, J. T.; Briegleb, B. P.
1988-01-01
The possible effect of the Antarctic ozone hole on the evolution of the polar vortex during late winter and spring is examined using a general circulation model (GCM). The GCM is a version of the NCAR Community Climate Model whose domain extends from the surface to the mesosphere and is similar to that described in Boville and Randel (1986). Ozone is not a predicted variable in the model. A zonally averaged ozone distribution is specified as a function of latitude, pressure and month for the radiation parameterization. Rather than explicitly address reasons for the formation of the ozone hole, the researchers postulate its existence and ask what effect it has on the subsequent evolution of the vortex. The evolution of the model when an ozone hole is imposed is then discussed.
Coates, Peter S.; Casazza, Michael L.; Brussee, Brianne E.; Ricca, Mark A.; Gustafson, K. Benjamin; Overton, Cory T.; Sanchez-Chopitea, Erika; Kroger, Travis; Mauch, Kimberly; Niell, Lara; Howe, Kristy; Gardner, Scott; Espinosa, Shawn; Delehanty, David J.
2014-01-01
Greater sage-grouse (Centrocercus urophasianus, hereafter referred to as “sage-grouse”) populations are declining throughout the sagebrush (Artemisia spp.) ecosystem, including millions of acres of potential habitat across the West. Habitat maps derived from empirical data are needed given impending listing decisions that will affect both sage-grouse population dynamics and human land-use restrictions. This report presents the process for developing spatially explicit maps describing relative habitat suitability for sage-grouse in Nevada and northeastern California. Maps depicting habitat suitability index (HSI) values were generated based on model-averaged resource selection functions informed by more than 31,000 independent telemetry locations from more than 1,500 radio-marked sage-grouse across 12 project areas in Nevada and northeastern California collected during a 15-year period (1998–2013). Modeled habitat covariates included land cover composition, water resources, habitat configuration, elevation, and topography, each at multiple spatial scales that were relevant to empirically observed sage-grouse movement patterns. We then present an example of how the HSI can be delineated into categories. Specifically, we demonstrate that the deviation from the mean can be used to classify habitat suitability into three categories of habitat quality (high, moderate, and low) and one non-habitat category. The classification resulted in an agreement of 93–97 percent for habitat versus non-habitat across a suite of independent validation datasets. Lastly, we provide an example of how space use models can be integrated with habitat models to help inform conservation planning. In this example, we combined probabilistic breeding density with a non-linear probability of occurrence relative to distance to nearest lek (traditional breeding ground) using count data to calculate a composite space use index (SUI). The SUI was then classified into two categories of use (high and low-to-no) and intersected with the HSI categories to create potential management prioritization scenarios based on information about sage-grouse occupancy coupled with habitat suitability. This provided an example of a conservation planning application that uses the intersection of the spatially-explicit HSI and empirically-based SUI to identify potential spatially explicit strategies for sage-grouse management. Importantly, the reported categories for the HSI and SUI can be reclassified relatively easily to employ alternative conservation thresholds that may be identified through decision-making processes with stakeholders, managers, and biologists. Moreover, the HSI/SUI interface map can be updated readily as new data become available.
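The deviation-from-the-mean classification can be sketched as follows; the cut points (mean plus one standard deviation, mean, mean minus half a standard deviation) and the synthetic HSI surface are assumptions for illustration, since the report defines its own thresholds.

```python
import numpy as np

# Illustrative classification of habitat suitability index (HSI) values into
# high / moderate / low habitat and non-habitat using deviations from the mean.
# Thresholds and the stand-in raster are assumptions, not the report's values.

rng = np.random.default_rng(7)
hsi = rng.beta(2, 5, size=(100, 100))            # stand-in HSI raster on [0, 1]

mu, sd = hsi.mean(), hsi.std()
codes = np.select([hsi >= mu + sd, hsi >= mu, hsi >= mu - 0.5 * sd],
                  [3, 2, 1], default=0)          # 3 = high ... 0 = non-habitat
labels = {3: "high", 2: "moderate", 1: "low", 0: "non-habitat"}
for code, name in labels.items():
    print(name, float((codes == code).mean()))   # fraction of cells in each category
```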
Estimating European soil organic carbon mitigation potential in a global integrated land use model
NASA Astrophysics Data System (ADS)
Frank, Stefan; Böttcher, Hannes; Schneider, Uwe; Schmid, Erwin; Havlík, Petr
2013-04-01
Several studies have shown the dynamic interaction between soil organic carbon (SOC) sequestration rates, soil management decisions and SOC levels. Management practices such as reduced and no-tillage, improved residue management and crop rotations as well as the conversion of marginal cropland to native vegetation or conversion of cultivated land to permanent grassland offer the potential to increase SOC content. Even though dynamic interactions are widely acknowledged in the literature, they have not been implemented in most existing land use decision models. A major obstacle is the high data and computing requirements for an explicit representation of alternative land use sequences since a model has to be able to track all different management decision paths. To our knowledge, no study has so far accounted explicitly for SOC dynamics in a global integrated land use model. To overcome the conceptual difficulties described above, we apply an approach capable of accounting for SOC dynamics in GLOBIOM (Global Biosphere Management Model), a global recursive dynamic partial equilibrium bottom-up model integrating the agricultural, bioenergy and forestry sectors. GLOBIOM represents all major land based sectors and therefore is able to account for direct and indirect effects of land use change as well as leakage effects (e.g. through trade) implicitly. Together with the detailed representation of technologies (e.g. tillage and fertilizer management systems), these characteristics make the model a highly valuable tool for assessing European SOC emissions and mitigation potential. Demand and international trade are represented in this version of the model at the level of 27 EU member states and 23 aggregated world regions outside Europe. Changes in the demand on the one side, and profitability of the different land based activities on the other side, are the major determinants of land use change in GLOBIOM. In this paper we estimate SOC emissions from cropland for the EU until 2050, explicitly considering SOC dynamics due to land use and land management in a global integrated land use model. Moreover, we calculate the EU SOC mitigation potential taking into account leakage effects outside Europe as well as related feedbacks from other sectors. In a sensitivity analysis, we disaggregate the SOC mitigation potential, i.e., we quantify the impact of different management systems and crop rotations to identify the most promising mitigation strategies.
Random unitary evolution model of quantum Darwinism with pure decoherence
NASA Astrophysics Data System (ADS)
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow to store information about an open system S of interest with maximal efficiency.
Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan
2016-01-01
Fixtures play an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task because there is generally no direct and explicit expression relating the fixture locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499
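The surrogate idea can be sketched with SciPy's RBFInterpolator in place of the paper's RBF network and finite element data; the four-locator encoding and the stand-in response function below are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of an RBF surrogate for fixture layout design (not the paper's network
# or its FEM data): map a 4-locator layout, encoded as 4 positions along the
# sheet edge, to a scalar "maximum deformation". The toy response function
# below stands in for the finite element simulations.

rng = np.random.default_rng(3)

def fem_stand_in(layouts):
    # Made-up smooth response: deformation grows when locators cluster together.
    spread = layouts.max(axis=1) - layouts.min(axis=1)
    return 1.0 / (0.2 + spread) + 0.05 * np.sin(5.0 * layouts.mean(axis=1))

train_x = rng.uniform(0.0, 1.0, size=(120, 4))      # uniform sampling of locating layouts
train_y = fem_stand_in(train_x)
surrogate = RBFInterpolator(train_x, train_y, kernel="thin_plate_spline", smoothing=1e-6)

test_x = rng.uniform(0.0, 1.0, size=(200, 4))
err = np.abs(surrogate(test_x) - fem_stand_in(test_x))
print("mean abs surrogate error:", float(err.mean()))
```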
How many species of flowering plants are there?
Joppa, Lucas N.; Roberts, David L.; Pimm, Stuart L.
2011-01-01
We estimate the probable number of flowering plants. First, we apply a model that explicitly incorporates taxonomic effort over time to estimate the number of as-yet-unknown species. Second, we ask taxonomic experts their opinions on how many species are likely to be missing, on a family-by-family basis. The results are broadly comparable. We show that the current number of species should grow by between 10 and 20 per cent. There are, however, interesting discrepancies between expert and model estimates for some families, suggesting that our model does not always completely capture patterns of taxonomic activity. The as-yet-unknown species are probably similar to those taxonomists have described recently—overwhelmingly rare and local, and disproportionately in biodiversity hotspots, where there are high levels of habitat destruction. PMID:20610425
On numerical model of time-dependent processes in three-dimensional porous heat-releasing objects
NASA Astrophysics Data System (ADS)
Lutsenko, Nickolay A.
2016-10-01
Gas flows in a gravity field through porous objects with heat-releasing sources are investigated in the regime where the flow rate of the gas passing through the porous object is self-regulating. Such objects can appear after various natural or man-made disasters (such as the exploded unit of the Chernobyl NPP). A mathematical model and an original numerical method, based on a combination of explicit and implicit finite difference schemes, are developed for investigating time-dependent processes in 3D porous energy-releasing objects. The advantage of the numerical model is its ability to describe unsteady processes under both natural convection and forced filtration. The gas cooling of 3D porous objects with different distributions of heat sources is studied in computational experiments.
THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchall, A.; Vostrotin, V.; Puncher, M.
The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consists mainly of activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models, and unlike its predecessors, takes explicit account of uncertainties in both the measurement data and model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties) and the methodology used (including all the relevant equations) and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2017-08-01
Starting from the quantum-like paradigm of applying quantum information and probability outside physics, we proceed to the social laser model describing Stimulated Amplification of Social Actions (SASA). The basic components of the social laser are the quantum information field carrying information excitations and the human gain medium. The aim of this note is to analyze the constraints on these components that make SASA possible. The social laser model can be used to explain the recent wave of color revolutions as well as such “unpredictable events” as Brexit and the election of Donald Trump as president of the United States of America. The presented quantum-like model is not only descriptive. We explicitly list the conditions for the creation of a social laser.