ERIC Educational Resources Information Center
Braasch, Jason L. G.; Bråten, Ivar
2017-01-01
Despite the importance of source attention and evaluation for learning from texts, little is known about the particular conditions that encourage sourcing during reading. In this article, basic assumptions of the discrepancy-induced source comprehension (D-ISC) model are presented, which describes the moment-by-moment cognitive processes that…
ERIC Educational Resources Information Center
Slisko, Josip; Cruz, Adrian Corona
2013-01-01
There is general agreement that critical thinking is an important element of 21st century skills. Although critical thinking is a very complex and controversial conception, many would accept that recognition and evaluation of assumptions is a basic critical-thinking process. When students use a simple mathematical model to reason quantitatively…
High Tech Educators Network Evaluation.
ERIC Educational Resources Information Center
O'Shea, Dan
A process evaluation was conducted to assess the High Tech Educators Network's (HTEN's) activities. The evaluation approach had four basic components: documentation review, a program logic model, a written survey, and participant interviews. The model mapped the basic goals and objectives, assumptions, activities, outcome expectations, and…
Causality and headache triggers
Turner, Dana P.; Smitherman, Todd A.; Martin, Vincent T.; Penzien, Donald B.; Houle, Timothy T.
2013-01-01
Objective The objective of this study was to explore the conditions necessary to assign causal status to headache triggers. Background The term “headache trigger” is commonly used to label any stimulus that is assumed to cause headaches. However, the assumptions required for determining if a given stimulus in fact has a causal-type relationship in eliciting headaches have not been explicated. Methods A synthesis and application of Rubin’s Causal Model is applied to the context of headache causes. From this application the conditions necessary to infer that one event (trigger) causes another (headache) are outlined using basic assumptions and examples from relevant literature. Results Although many conditions must be satisfied for a causal attribution, three basic assumptions are identified for determining causality in headache triggers: 1) constancy of the sufferer; 2) constancy of the trigger effect; and 3) constancy of the trigger presentation. A valid evaluation of a potential trigger’s effect can only be undertaken once these three basic assumptions are satisfied during formal or informal studies of headache triggers. Conclusions Evaluating these assumptions is extremely difficult or infeasible in clinical practice, and satisfying them during natural experimentation is unlikely. Researchers, practitioners, and headache sufferers are encouraged to avoid natural experimentation to determine the causal effects of headache triggers. Instead, formal experimental designs or retrospective diary studies using advanced statistical modeling techniques provide the best approaches to satisfy the required assumptions and inform causal statements about headache triggers. PMID:23534872
An Extension of the Partial Credit Model with an Application to the Measurement of Change.
ERIC Educational Resources Information Center
Fischer, Gerhard H.; Ponocny, Ivo
1994-01-01
An extension to the partial credit model, the linear partial credit model, is considered under the assumption of a certain linear decomposition of the item x category parameters into basic parameters. A conditional maximum likelihood algorithm for estimating basic parameters is presented and illustrated with simulation and an empirical study. (SLD)
NASA Astrophysics Data System (ADS)
Cannizzo, John K.
2017-01-01
We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Network-level reproduction number and extinction threshold for vector-borne diseases.
Xue, Ling; Scoglio, Caterina
2015-06-01
The basic reproduction number of deterministic models is an essential quantity to predict whether an epidemic will spread or not. Thresholds for disease extinction contribute crucial knowledge of disease control, elimination, and mitigation of infectious diseases. Relationships between basic reproduction numbers of two deterministic network-based ordinary differential equation vector-host models, and extinction thresholds of corresponding stochastic continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks are in agreement with analytical results without any assumptions, reinforcing that the relationships may always exist and proposing a mathematical problem for proving existence of the relationships in general. Moreover, numerical simulations show that the basic reproduction number does not monotonically increase or decrease with the extinction threshold. Consistent trends of extinction probability observed through numerical simulations provide novel insights into mitigation strategies to increase the disease extinction probability. Research findings may improve understandings of thresholds for disease persistence in order to control vector-borne diseases.
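As a minimal illustration of the central quantity above, the sketch below computes the basic reproduction number of a single-patch Ross-Macdonald-type vector-host model from the next-generation matrix. The model structure, parameter names, and values are illustrative assumptions and are not the network models analyzed in the article.

```python
# A minimal sketch (not the article's network model): R0 of a single-patch
# Ross-Macdonald-type vector-host model via the next-generation matrix.
# All parameter names and values below are illustrative assumptions.
import numpy as np

a  = 0.3   # vector biting rate (bites per vector per day)
b  = 0.5   # vector-to-host transmission probability per bite
c  = 0.5   # host-to-vector transmission probability per bite
m  = 5.0   # vectors per host
r  = 0.05  # host recovery rate (1/day)
mu = 0.1   # vector mortality rate (1/day)

# Linearized infected subsystem x = (i_h, i_v) near the disease-free equilibrium:
# new-infection matrix F and transition matrix V
F = np.array([[0.0,   a * b * m],
              [a * c, 0.0      ]])
V = np.diag([r, mu])

K = F @ np.linalg.inv(V)                   # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))        # spectral radius
print(R0, np.sqrt(a**2 * b * c * m / (r * mu)))  # matches the closed-form expression
```

Depending on convention, R0 is sometimes reported as the square of this spectral radius (per full transmission cycle rather than per generation); the threshold at 1 is the same either way.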
Is Tissue the Issue? A Critique of SOMPA's Models and Tests.
ERIC Educational Resources Information Center
Goodman, Joan F.
1979-01-01
A critical view of the underlying theoretical rationale of the System of Multicultural Pluralistic Assessment (SOMPA) model for student assessment is presented. The critique is extensive and questions the basic assumptions of the model. (JKS)
On the Basis of the Basic Variety.
ERIC Educational Resources Information Center
Schwartz, Bonnie D.
1997-01-01
Considers the interplay between source and target language in relation to two points made by Klein and Perdue: (1) the argument that the analysis of the target language should not be used as the model for analyzing interlanguage data; and (2) the theoretical claim that under the technical assumptions of minimalism, the Basic Variety is a "perfect"…
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (the hierarchical linear model) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. Multilevel modeling and structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
ERIC Educational Resources Information Center
Ifenthaler, Dirk; Seel, Norbert M.
2013-01-01
In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…
78 FR 26269 - Connect America Fund; High-Cost Universal Service Support
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... the model platform, which is the basic framework for the model consisting of key assumptions about the... combination of competitive bidding and a new forward-looking model of the cost of constructing modern multi-purpose networks.'' Using the cost model to ``estimate the support necessary to serve areas where costs...
A Neo-Kohlbergian Approach to Morality Research.
ERIC Educational Resources Information Center
Rest, James R.; Narvaez, Darcia; Thoma, Stephen J.; Bebeau, Muriel J.
2000-01-01
Proposes a model of moral judgment that builds on Lawrence Kohlberg's core assumptions. Addresses the concerns that have surfaced related to Kohlberg's work in moral judgment. Presents an overview of this model using Kohlberg's basic starting points, ideas from cognitive science, and developments in moral philosophy. (CMK)
Reconciling Time, Space and Function: A New Dorsal-Ventral Stream Model of Sentence Comprehension
ERIC Educational Resources Information Center
Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
2013-01-01
We present a new dorsal-ventral stream framework for language comprehension which unifies basic neurobiological assumptions (Rauschecker & Scott, 2009) with a cross-linguistic neurocognitive sentence comprehension model (eADM; Bornkessel & Schlesewsky, 2006). The dissociation between (time-dependent) syntactic structure-building and…
Using LISREL to Evaluate Measurement Models and Scale Reliability.
ERIC Educational Resources Information Center
Fleishman, John; Benson, Jeri
1987-01-01
LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…
NASA Technical Reports Server (NTRS)
Timofeyev, Y. M.
1979-01-01
In order to test the error introduced by assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of the assumptions from the standard basic model is calculated.
NASA Astrophysics Data System (ADS)
Rusli, Aloysius
2016-08-01
Until the 1980s, it was well known and practiced in Indonesian Basic Physics courses to present physics by its effective technicalities: the ideally elastic spring, the pulley and moving blocks, the thermodynamics of ideal engine models, theoretical electrostatics and electrodynamics with model capacitors and inductors, wave behavior and its various superpositions, and hopefully closed with a modern physics description. A different approach was then also experimented with, using the Hobson and Moore texts, stressing the alternative aim of fostering awareness, not just mastery, of science and the scientific method. This is hypothesized to be more in line with the changed attitude of the so-called Millennials cohort, who are less attentive, if not less interested, and more used to multi-tasking, which suits their shorter span of attention. The upside is increased awareness of science and the scientific method. The downside is that they get less experience of the scientific method, which bases itself intensely on critical observation, analytic thinking to set up conclusions or hypotheses, and checking consistency of the hypotheses with measured data. Another aspect is recognition that the human person encompasses both the reasoning capacity and the mental-spiritual-cultural capacity. This is considered essential, as the world grows ever smaller due to increased communication capacity, causing strong interactions and nonlinear effects, and showing that value systems become more challenging and challenged due to physics/science and its cosmology, which is successfully based on the scientific method. So students should be made aware of the common basis of these two capacities: the assumptions, the reasoning capacity and the consistency assumption. This shows that the limits of science are its set of basic quantifiable assumptions, and the limits of the mental-spiritual-cultural aspects of life are their set of basic metaphysical (non-quantifiable) assumptions. Bridging these two human aspects of life can lead to a "why" of science and a "meaning" of life. A progress report on these efforts is presented, consisting essentially of the results indicated by an extended format of the usual weekly reporting used previously in Basic Physics lectures.
Variable thickness transient ground-water flow model. Volume 1. Formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reisenauer, A.E.
1979-12-01
Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented.
ERIC Educational Resources Information Center
Fischer, Gerhard H.
1987-01-01
A natural parameterization and formalization of the problem of measuring change in dichotomous data is developed. Mathematically-exact definitions of specific objectivity are presented, and the basic structures of the linear logistic test model and the linear logistic model with relaxed assumptions are clarified. (SLD)
A Markov chain model for reliability growth and decay
NASA Technical Reports Server (NTRS)
Siegrist, K.
1982-01-01
A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure', and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
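The two-state, three-outcome special case lends itself to a small numerical sketch. The snippet below uses illustrative probabilities (assumptions, not values from the report) to build the marginal state transition matrix and trace a reliability-growth curve starting from an unrepaired design.

```python
# A small numerical sketch of the two-state / three-outcome special case above.
# The probability values are illustrative assumptions, not taken from the report.
import numpy as np

states   = ["repaired", "unrepaired"]                      # indices 0, 1
outcomes = ["inherent failure", "assignable-cause failure", "success"]

# P(outcome | state before trial)
P_out = np.array([[0.05, 0.00, 0.95],                      # repaired design
                  [0.05, 0.30, 0.65]])                     # unrepaired design

# P(state after | state before, outcome): here an assignable-cause failure triggers
# a redesign that succeeds with probability 0.8; otherwise the state is unchanged.
P_next = np.zeros((2, 3, 2))
P_next[:, 0, :] = np.eye(2)                                # inherent failure: no change
P_next[:, 2, :] = np.eye(2)                                # success: no change
P_next[0, 1, 0] = 1.0                                      # already repaired stays repaired
P_next[1, 1, :] = [0.8, 0.2]                               # redesign after assignable cause

# Marginal state transition matrix T[i, j] = sum_k P_out[i, k] * P_next[i, k, j]
T = np.einsum('ik,ikj->ij', P_out, P_next)

# Reliability-growth curve: P(success on trial n), starting from an unrepaired design
p = np.array([0.0, 1.0])                                   # initial state distribution
for n in range(1, 11):
    print(n, round(p @ P_out[:, 2], 4))                    # success probability on trial n
    p = p @ T                                              # state distribution before next trial
```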
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Experimental investigation of two-phase heat transfer in a porous matrix.
NASA Technical Reports Server (NTRS)
Von Reth, R.; Frost, W.
1972-01-01
One-dimensional two-phase flow transpiration cooling through porous metal is studied experimentally. The experimental data are compared with a previous one-dimensional analysis. Good agreement with the calculated temperature distribution is obtained as long as the basic assumptions of the analytical model are satisfied. Deviations from the basic assumptions are caused by nonhomogeneous and oscillating flow conditions. A preliminary derivation of nondimensional parameters which characterize the stable and unstable flow conditions is given. Superheated liquid droplets observed sputtering from the heated surface indicated incomplete evaporation at heat fluxes well in excess of the latent energy transport. A parameter is developed to account for the nonequilibrium thermodynamic effects. Measured and calculated pressure drops show contradicting trends which are attributed to capillary forces.
Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.
Model for Developing an In-Service Teacher Workshop To Help Multilingual and Multicultural Students.
ERIC Educational Resources Information Center
Kachaturoff, Grace; Romatowski, Jane A.
This is a model for designing an inservice teacher workshop to assist teachers working with multicultural students. The basic assumption underlying the model is that universities and schools need to work cooperatively to provide experiences for improving the quality of teaching by increasing awareness of educational issues and situations and by…
The Eleventh Quadrennial Review of Military Compensation. Supporting Research Papers
2012-06-01
value. 4. BAH + BAS is roughly equal to expenditures for housing and food for servicemembers. In the first phase of the formal model, we further...assume that taxes, housing, and food are the only basic living expenses. Then, in the next phase, we include estimates of noncash benefits not included...assumption 4 with assumption 2 implies that civilian housing and food expenses are also equal to military BAH and BAS. However, civilian housing and food
39 Questionable Assumptions in Modern Physics
NASA Astrophysics Data System (ADS)
Volk, Greg
2009-03-01
The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.
ERIC Educational Resources Information Center
Elleven, Russell K.
2007-01-01
The article examines a relatively new tool to increase the effectiveness of organizations and people. The recent development and background of Appreciative Inquiry (AI) is reviewed. Basic assumptions of the model are discussed. Implications for departments and divisions of student affairs are analyzed. Finally, suggested readings and workshop…
ERIC Educational Resources Information Center
Engstrom, Cathy McHugh
2008-01-01
The pedagogical assumptions and teaching practices of learning community models reflect exemplary conditions for learning, so using these models with unprepared students seems desirable and worthy of investigation. This chapter describes the key role of faculty in creating active, integrative learning experiences for students in basic skills…
Consumption of Mass Communication--Construction of a Model on Information Consumption Behaviour.
ERIC Educational Resources Information Center
Sepstrup, Preben
A general conceptual model on the consumption of information is introduced. Information as the output of the mass media is treated as a product, and a model on the consumption of this product is developed by merging elements from consumer behavior theory and mass communication theory. Chapter I gives basic assumptions about the individual and the…
Plant uptake of elements in soil and pore water: field observations versus model assumptions.
Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo
2013-09-15
Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have been previously shown to not generally be valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to which extent observed element-specific uptake is consistent with TF model assumptions and to which extent TF's can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
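As a minimal sketch of the transfer-factor (TF) bookkeeping described above, the snippet below computes site-specific and pooled TFs and a simple linearity check. The concentration values are made up for illustration; they are not data from the study.

```python
# A minimal sketch of the empirical transfer-factor (TF) calculation discussed above,
# TF = element concentration in plant / concentration in soil (or pore water).
# The numbers are invented for illustration; they are not data from the study.
import numpy as np

c_soil  = np.array([12.0, 25.0, 40.0, 60.0])   # mg/kg in bulk soil at four sites
c_plant = np.array([0.8,  1.5,  2.1,  2.6])    # mg/kg, weighted whole-plant average

tf_per_site = c_plant / c_soil                 # site-specific transfer factors
tf_pooled   = c_plant.mean() / c_soil.mean()   # pooled estimate

# The TF model assumes a linear, zero-intercept relation c_plant ~ TF * c_soil;
# a simple first check is the correlation across sites.
r = np.corrcoef(c_soil, c_plant)[0, 1]
print(tf_per_site, tf_pooled, round(r, 3))
```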
The Case for a Hierarchical Cosmology
ERIC Educational Resources Information Center
Vaucouleurs, G. de
1970-01-01
The development of modern theoretical cosmology is presented and some questionable assumptions of orthodox cosmology are pointed out. Suggests that recent observations indicate that hierarchical clustering is a basic factor in cosmology. The implications of hierarchical models of the universe are considered. Bibliography. (LC)
The Estimation Theory Framework of Data Assimilation
NASA Technical Reports Server (NTRS)
Cohn, S.; Atlas, Robert (Technical Monitor)
2002-01-01
Lecture 1. The Estimation Theory Framework of Data Assimilation: 1. The basic framework: dynamical and observation models; 2. Assumptions and approximations; 3. The filtering, smoothing, and prediction problems; 4. Discrete Kalman filter and smoother algorithms; and 5. Example: A retrospective data assimilation system
NASA Technical Reports Server (NTRS)
1976-01-01
Assumptions made and techniques used in modeling the power network to the 480 volt level are discussed. Basic computational techniques used in the short circuit program are described along with a flow diagram of the program and operational procedures. Procedures for incorporating network changes are included in this user's manual.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
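A minimal sketch of a likelihood-ratio decision axis in a Gaussian signal detection model is given below; the distribution parameters are illustrative assumptions, not estimates from the recognition studies analyzed in the article.

```python
# A minimal sketch of a likelihood-ratio decision rule in a Gaussian signal detection
# model of recognition memory; parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

mu_new, sd_new = 0.0, 1.0        # memory-strength distribution for new items
mu_old, sd_old = 1.0, 1.25       # old items: shifted mean, larger variance

def likelihood_ratio(x):
    """LR(x) = f_old(x) / f_new(x); the models assume decisions are monotone in this axis."""
    return norm.pdf(x, mu_old, sd_old) / norm.pdf(x, mu_new, sd_new)

x = np.linspace(-3.0, 4.0, 8)
say_old = likelihood_ratio(x) > 1.0          # unbiased criterion at LR = 1
print(np.round(likelihood_ratio(x), 3))
print(say_old)
```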
Dynamics of an HIV-1 infection model with cell mediated immunity
NASA Astrophysics Data System (ADS)
Yu, Pei; Huang, Jianing; Jiang, Jiao
2014-10-01
In this paper, we study the dynamics of an improved mathematical model on HIV-1 virus with cell mediated immunity. This new 5-dimensional model is based on the combination of a basic 3-dimensional HIV-1 model and a 4-dimensional immunity response model, which more realistically describes dynamics between the uninfected cells, infected cells, virus, the CTL response cells and CTL effector cells. Our 5-dimensional model may be reduced to the 4-dimensional model by applying a quasi-steady state assumption on the variable of virus. However, it is shown in this paper that virus is necessary to be involved in the modeling, and that a quasi-steady state assumption should be applied carefully, which may miss some important dynamical behavior of the system. Detailed bifurcation analysis is given to show that the system has three equilibrium solutions, namely the infection-free equilibrium, the infectious equilibrium without CTL, and the infectious equilibrium with CTL, and a series of bifurcations including two transcritical bifurcations and one or two possible Hopf bifurcations occur from these three equilibria as the basic reproduction number is varied. The mathematical methods applied in this paper include characteristic equations, Routh-Hurwitz condition, fluctuation lemma, Lyapunov function and computation of normal forms. Numerical simulation is also presented to demonstrate the applicability of the theoretical predictions.
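The following sketch, with illustrative parameter values rather than those of the article, integrates the basic 3-dimensional HIV-1 model and its quasi-steady-state reduction so the two trajectories can be compared, in the spirit of the caution raised above about applying the quasi-steady-state assumption.

```python
# A minimal sketch (illustrative parameters, not the article's) of the basic
# 3-dimensional HIV-1 model -- uninfected cells T, infected cells I, virus V --
# and its quasi-steady-state (QSS) reduction in which V is replaced by p*I/c.
from scipy.integrate import solve_ivp

lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.5, 100.0, 5.0

def full(t, y):
    T, I, V = y
    return [lam - d*T - beta*T*V,
            beta*T*V - delta*I,
            p*I - c*V]

def qss(t, y):
    T, I = y
    V = p * I / c                            # quasi-steady-state virus load
    return [lam - d*T - beta*T*V,
            beta*T*V - delta*I]

R0 = lam * beta * p / (d * delta * c)        # basic reproduction number, here = 8
y0 = [1e6, 1.0, 20.0]
sol_full = solve_ivp(full, (0.0, 200.0), y0, max_step=0.5)
sol_qss  = solve_ivp(qss,  (0.0, 200.0), y0[:2], max_step=0.5)
print(R0, sol_full.y[1, -1], sol_qss.y[1, -1])   # compare infected-cell endpoints
```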
ERIC Educational Resources Information Center
Haegele, Justin A.; Hodge, Samuel R.
2015-01-01
Emerging professionals, particularly senior-level undergraduate and graduate students in kinesiology who have an interest in physical education for individuals with and without disabilities, should understand the basic assumptions of the quantitative research paradigm. Knowledge of basic assumptions is critical for conducting, analyzing, and…
ERIC Educational Resources Information Center
Sells, Scott P.
A model for treating difficult adolescents and their families is presented. Part 1 offers six basic assumptions about the causes of severe behavioral problems and presents the treatment model with guidelines necessary to address each of these six causes. Case examples highlight and clarify major points within each of the 15 procedural steps of the…
Guikema, Seth
2012-07-01
Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.
Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.
ERIC Educational Resources Information Center
Tergan, Sigmar-Olaf
1997-01-01
Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…
Didactics and History of Mathematics: Knowledge and Self-Knowledge
ERIC Educational Resources Information Center
Fried, Michael N.
2007-01-01
The basic assumption of this paper is that mathematics and history of mathematics are both forms of knowledge and, therefore, represent different ways of knowing. This was also the basic assumption of Fried (2001) who maintained that these ways of knowing imply different conceptual and methodological commitments, which, in turn, lead to a conflict…
1981-09-01
corresponds to the same square footage that consumed the electrical energy. 3. The basic assumptions of multiple linear regression, as enumerated in...7. Data related to the sample of bases is assumed to be representative of bases in the population. Limitations. Basic limitations on this research were... Ratemaking--Overview. Rand Report R-5894, Santa Monica CA, May 1977. Chatterjee, Samprit, and Bertram Price. Regression Analysis by Example. New York: John
Dewey and Schon: An Analysis of Reflective Thinking.
ERIC Educational Resources Information Center
Bauer, Norman J.
The challenge to the dominance of rationality in educational philosophy presented by John Dewey and Donald Schon is examined in this paper. The paper identifies basic assumptions of their perspective and explains concepts of reflective thinking, which include biography, context of uncertainty, and "not-yet." A model of reflective thought…
A "View from Nowhen" on Time Perception Experiments
ERIC Educational Resources Information Center
Riemer, Martin; Trojan, Jorg; Kleinbohl, Dieter; Holzl, Rupert
2012-01-01
Systematic errors in time reproduction tasks have been interpreted as a misperception of time and therefore seem to contradict basic assumptions of pacemaker-accumulator models. Here we propose an alternative explanation of this phenomenon based on methodological constraints regarding the direction of time, which cannot be manipulated in…
Exceptional Children Conference Papers: Behavioral and Emotional Problems.
ERIC Educational Resources Information Center
Council for Exceptional Children, Arlington, VA.
Four of the seven conference papers treating behavioral and emotional problems concern the Conceptual Project, an attempt to provide definition and evaluation of conceptual models of the various theories of emotional disturbance and their basic assumptions, and to provide training packages based on these materials. The project is described in…
Human Praxis: A New Basic Assumption for Art Educators of the Future.
ERIC Educational Resources Information Center
Hodder, Geoffrey S.
1980-01-01
After analyzing Vincent Lanier's five characteristic roles of art education, the article briefly explains the pedagogy of Paulo Freire, based on human praxis, and applies it to the existing "oppressive" art education system. The article reduces Lanier's roles to resemble a single Freirean model. (SB)
Supply-demand balance in outward-directed networks and Kleiber's law
Painter, Page R
2005-01-01
Background Recent theories have attempted to derive the value of the exponent α in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). Methods The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Results Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. Conclusion The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate. PMID:16283939
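For reference, the allometric relation at issue is the power law below; this is the standard statement of Kleiber's law, given here as background rather than as a result of the paper.

```latex
% Standard statement of the allometric scaling relation at issue (Kleiber's law);
% background only, not a derivation from the paper.
\[
  B \;=\; B_0\, M^{\,b}, \qquad b \approx \tfrac{3}{4},
  \qquad\text{so that}\qquad \frac{B}{M} \;\propto\; M^{-1/4},
\]
% i.e. the mass-specific (per-gram) metabolic rate decreases with body mass M.
```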
Knowledge Discovery from Relations
ERIC Educational Resources Information Center
Guo, Zhen
2010-01-01
A basic and classical assumption in the machine learning research area is the "randomness assumption" (also known as the i.i.d. assumption), which states that data are assumed to be independent and identically generated by some known or unknown distribution. This assumption, which is the foundation of most existing approaches in the literature, simplifies…
Teaching Critical Literacy across the Curriculum in Multimedia America.
ERIC Educational Resources Information Center
Semali, Ladislaus M.
The teaching of media texts as a form of textual construction is embedded in the assumption that audiences bring individual preexisting dispositions even though the media may contribute to their shaping of basic attitudes, beliefs, values, and behavior. As summed up by D. Lusted, at the core of such textual construction are basic assumptions that…
ERIC Educational Resources Information Center
Zigarmi, Drea; Roberts, Taylor Peyton
2017-01-01
Purpose: This study aims to test the following three assertions underlying the Situational Leadership® II (SLII) Model: all four leadership styles are received by followers; all four leadership styles are needed by followers; and if there is a fit between the leadership style a follower receives and needs, that follower will demonstrate favorable…
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2013-09-01
An inventory problem involves many factors influencing inventory decisions. The traditional economic production quantity (EPQ) model plays a rather important role in inventory analysis. Although traditional EPQ models are still widely used in industry, practitioners frequently question the validity of their assumptions, so their use encounters challenges and difficulties. This article therefore presents a new inventory model that considers two levels of trade credit, a finite replenishment rate and limited storage capacity together, relaxing the basic assumptions of the traditional EPQ model and improving the conditions for its use. Keeping the cost-minimisation strategy in mind, four easy-to-use theorems are developed to characterise the optimal solution. Finally, sensitivity analyses are executed to investigate the effects of the various parameters on ordering policies and the annual total relevant costs of the inventory system.
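For orientation, the classical EPQ baseline that the article generalizes can be written as below; the notation is the standard one (an assumption here, since the paper's extended model adds trade credit and storage limits on top of it).

```latex
% Classical EPQ baseline that the article relaxes (standard notation assumed):
% demand rate D, production rate P > D, setup cost K, holding cost h per unit per time.
\[
  Q^{*} \;=\; \sqrt{\frac{2KD}{h\,(1 - D/P)}},
  \qquad
  \mathrm{TRC}(Q) \;=\; \frac{KD}{Q} \;+\; \frac{hQ}{2}\,\Bigl(1 - \frac{D}{P}\Bigr).
\]
```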
Spiral Growth in Plants: Models and Simulations
ERIC Educational Resources Information Center
Allen, Bradford D.
2004-01-01
The analysis and simulation of spiral growth in plants integrates algebra and trigonometry in a botanical setting. When the ideas presented here are used in a mathematics classroom/computer lab, students can better understand how basic assumptions about plant growth lead to the golden ratio and how the use of circular functions leads to accurate…
Dynamic Assessment and Its Implications for RTI Models
ERIC Educational Resources Information Center
Wagner, Richard K.; Compton, Donald L.
2011-01-01
Dynamic assessment refers to assessment that combines elements of instruction for the purpose of learning something about an individual that cannot be learned as easily or at all from conventional assessment. The origins of dynamic assessment can be traced to Thorndike (1924), Rey (1934), and Vygotsky (1962), who shared three basic assumptions.…
United States Air Force Training Line Simulator. Final Report.
ERIC Educational Resources Information Center
Nauta, Franz; Pierce, Michael B.
This report describes the technical aspects and potential applications of a computer-based model simulating the flow of airmen through basic training and entry-level technical training. The objective of the simulation is to assess the impacts of alternative recruit classification and training policies under a wide variety of assumptions regarding…
ERIC Educational Resources Information Center
Schultz, Katherine
Although the National Workplace Literacy Program is relatively new, a new orthodoxy of program development based on particular understandings of literacy and learning has emerged. Descriptions of two model workplace education programs are the beginning points for an examination of the assumptions contained in most reports of workplace education…
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first assumption is related to the likelihood values of input events, and the second assumption concerns interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and, even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework for a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
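A minimal sketch of the crisp baseline in question (point likelihoods combined under independence) is given below; the tiny fault tree and its probability values are hypothetical and only illustrate the assumptions that the article relaxes with fuzzy set and evidence theory.

```python
# A minimal sketch of the crisp baseline the article relaxes: point probabilities for
# basic events combined through AND/OR gates under the independence assumption.
# The tiny fault tree and the probability values are hypothetical.
def and_gate(probs):
    """Top event occurs only if all inputs occur (independence assumed)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(probs):
    """Top event occurs if at least one input occurs (independence assumed)."""
    none = 1.0
    for p in probs:
        none *= (1.0 - p)
    return 1.0 - none

basic_events = {"pump_fails": 1e-3, "valve_sticks": 5e-4, "sensor_fails": 2e-3}

# Example: top event = (pump_fails OR valve_sticks) AND sensor_fails
subsystem = or_gate([basic_events["pump_fails"], basic_events["valve_sticks"]])
top_event = and_gate([subsystem, basic_events["sensor_fails"]])
print(subsystem, top_event)
```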
Production process stability - core assumption of INDUSTRY 4.0 concept
NASA Astrophysics Data System (ADS)
Chromjakova, F.; Bobak, R.; Hrusecka, D.
2017-06-01
Today’s industrial enterprises implementing the INDUSTRY 4.0 concept are confronted with a basic problem: stabilised manufacturing and supporting processes. Through such stabilisation they can achieve effective digital management of processes and continuous throughput. Structural stability of the horizontal (business) and vertical (digitised) manufacturing processes is required, supported by the digital technologies of the INDUSTRY 4.0 concept. The results presented in this paper are based on research and a survey carried out in several industrial companies. A basic model for structural process stabilisation in the manufacturing environment is described.
Modeling of the illumination driven coma of 67P/Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Bieler, André
2015-04-01
In this paper we present results modeling 67P/Churyumov-Gerasimenko's (C-G) neutral coma properties observed by the Rosetta ROSINA experiment with 3 different model approaches. The basic assumption for all models is the idea that the out-gassing properties of C-G are mainly illumination driven. With this assumption all models are capable of reproducing most features in the neutral coma signature as detected by the ROSINA-COPS instrument over several months. The models include the realistic shape model of the nucleus to calculate the illumination conditions over time which are used to define the boundary conditions for the hydrodynamic (BATS-R-US code) and the Direct Simulation Monte Carlo (AMPS code) simulations. The third model finally computes the projection of the total illumination on the comet surface towards the spacecraft. Our results indicate that at large heliocentric distances (3.5 to 2.8 AU) most gas coma structures observed by the in-situ instruments can be explained by uniformly distributed activity regions spread over the whole nucleus surface.
Challenging Freedom: Neoliberalism and the Erosion of Democratic Education
ERIC Educational Resources Information Center
Karaba, Robert
2016-01-01
Goodlad, et al. (2002) rightly point out that a culture can either resist or support change. Schein's (2010) model of culture indicates observable behaviors of a culture can be explained by exposing underlying shared values and basic assumptions that give meaning to the performance. Yet culture is many-faceted and complex. So Schein advised a…
Understanding Business Models in Health Care.
Sharan, Alok D; Schroeder, Gregory D; West, Michael E; Vaccaro, Alexander R
2016-05-01
The increasing focus on the costs of care is forcing health care organizations to critically look at their basic set of processes and activities, to determine what type of value they can deliver. A business model describes the resources, processes, and cost assumptions that an organization makes that will lead to the delivery of a unique value proposition to a customer. As health care organizations are beginning to transform their structure in preparation for a value-based delivery system, understanding business model theory can help in the redesign process.
Hartemink, Nienke; Cianci, Daniela; Reiter, Paul
2015-03-01
Mathematical modeling and notably the basic reproduction number R0 have become popular tools for the description of vector-borne disease dynamics. We compare two widely used methods to calculate the probability of a vector to survive the extrinsic incubation period. The two methods are based on different assumptions for the duration of the extrinsic incubation period; one method assumes a fixed period and the other method assumes a fixed daily rate of becoming infectious. We conclude that the outcomes differ substantially between the methods when the average life span of the vector is short compared to the extrinsic incubation period.
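A minimal sketch of the two formulations being compared, under assumed notation (vector mortality rate mu, extrinsic incubation period EIP), is given below; the parameter values are illustrative and chosen so that the vector's life span is short relative to the EIP, the regime in which the article finds the methods diverge.

```python
# A minimal sketch of the two formulations compared above, with assumed notation:
# vector mortality rate mu (1/day) and extrinsic incubation period EIP (days).
import math

mu, EIP = 0.15, 10.0                   # illustrative values: short-lived vector, long EIP

# Method 1: fixed incubation period -- the vector must survive exactly EIP days
p_fixed_period = math.exp(-mu * EIP)

# Method 2: fixed daily rate gamma = 1/EIP of becoming infectious (exponential EIP)
gamma = 1.0 / EIP
p_fixed_rate = gamma / (gamma + mu)

print(round(p_fixed_period, 3), round(p_fixed_rate, 3))   # 0.223 vs 0.4: the methods diverge
```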
Shielding of substations against direct lightning strokes by shield wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhuri, P.
1994-01-01
A new analysis for shielding outdoor substations against direct lightning strokes by shield wires is proposed. The basic assumption of this proposed method is that any lightning stroke which penetrates the shields will cause damage. The second assumption is that a certain level of risk of failure must be accepted, such as one or two failures per 100 years. The proposed method, using an electrogeometric model, was applied to design shield wires for two outdoor substations: (1) a 161-kV/69-kV station, and (2) a 500-kV/161-kV station. The results of the proposed method were also compared with the shielding data of two other substations.
Haegele, Justin A; Hodge, Samuel Russell
2015-10-01
There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.
Experimental measurement of binding energy, selectivity, and allostery using fluctuation theorems.
Camunas-Soler, Joan; Alemany, Anna; Ritort, Felix
2017-01-27
Thermodynamic bulk measurements of binding reactions rely on the validity of the law of mass action and the assumption of a dilute solution. Yet, important biological systems such as allosteric ligand-receptor binding, macromolecular crowding, or misfolded molecules may not follow these assumptions and may require a particular reaction model. Here we introduce a fluctuation theorem for ligand binding and an experimental approach using single-molecule force spectroscopy to determine binding energies, selectivity, and allostery of nucleic acids and peptides in a model-independent fashion. A similar approach could be used for proteins. This work extends the use of fluctuation theorems beyond unimolecular folding reactions, bridging the thermodynamics of small systems and the basic laws of chemical equilibrium. Copyright © 2017, American Association for the Advancement of Science.
On the accuracy of personality judgment: a realistic approach.
Funder, D C
1995-10-01
The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.
An eco-epidemiological system with infected prey and predator subject to the weak Allee effect.
Sasmal, Sourav Kumar; Chattopadhyay, Joydev
2013-12-01
In this article, we propose a general prey–predator model with disease in prey and predator subject to the weak Allee effects. We make the following assumptions: (i) infected prey competes for resources but does not contribute to reproduction; and (ii) in comparison to the consumption of the susceptible prey, consumption of infected prey would contribute less or negatively to the growth of predator. Based on these assumptions, we provide basic dynamic properties for the full model and corresponding submodels with and without the Allee effects. By comparing the disease free submodels (susceptible prey–predator model) with and without the Allee effects, we conclude that the Allee effects can create or destroy the interior attractors. This enables us to obtain the complete dynamics of the full model and conclude that the model has only one attractor (only susceptible prey survives or susceptible-infected coexist), or two attractors (bi-stability with only susceptible prey and susceptible prey–predator coexist or susceptible prey-infected prey coexists and susceptible prey–predator coexist). This model does not support the coexistence of susceptible-infected-predator, which is caused by the assumption that infected population contributes less or are harmful to the growth of predator in comparison to the consumption of susceptible prey.
Stochastic analysis of surface roughness models in quantum wires
NASA Astrophysics Data System (ADS)
Nedjalkov, Mihail; Ellinghaus, Paul; Weinbub, Josef; Sadi, Toufik; Asenov, Asen; Dimov, Ivan; Selberherr, Siegfried
2018-07-01
We present a signed particle computational approach for the Wigner transport model and use it to analyze the electron state dynamics in quantum wires focusing on the effect of surface roughness. Usually surface roughness is considered as a scattering model, accounted for by the Fermi Golden Rule, which relies on approximations like statistical averaging and in the case of quantum wires incorporates quantum corrections based on the mode space approach. We provide a novel computational approach to enable physical analysis of these assumptions in terms of phase space and particles. Utilized is the signed particles model of Wigner evolution, which, besides providing a full quantum description of the electron dynamics, enables intuitive insights into the processes of tunneling, which govern the physical evolution. It is shown that the basic assumptions of the quantum-corrected scattering model correspond to the quantum behavior of the electron system. Of particular importance is the distribution of the density: Due to the quantum confinement, electrons are kept away from the walls, which is in contrast to the classical scattering model. Further quantum effects are retardation of the electron dynamics and quantum reflection. Far from equilibrium the assumption of homogeneous conditions along the wire breaks even in the case of ideal wire walls.
Anderson, Christine A; Whall, Ann L
2013-10-01
Opinion leaders are informal leaders who have the ability to influence others' decisions about adopting new products, practices or ideas. In the healthcare setting, the importance of translating new research evidence into practice has led to interest in understanding how opinion leaders could be used to speed this process. Despite continued interest, gaps in understanding opinion leadership remain. Agent-based models are computer models that have proven to be useful for representing dynamic and contextual phenomena such as opinion leadership. The purpose of this paper is to describe the work conducted in preparation for the development of an agent-based model of nursing opinion leadership. The aim of this phase of the model development project was to clarify basic assumptions about opinions, the individual attributes of opinion leaders and characteristics of the context in which they are effective. The process used to clarify these assumptions was the construction of a preliminary nursing opinion leader model, derived from philosophical theories about belief formation. © 2013 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Xinyu; Liu, Zhengyu; Chen, Zhongxiao
Water isotopes in precipitation have played a key role in the reconstruction of past climate on millennial timescales and longer. But, for midlatitude regions like East Asia with complex terrain, the reliability behind the basic assumptions of the temperature effect and amount effect is based on modern observational data and still remains unclear for past climate. In the present work, we reexamine the two basic effects on seasonal, interannual, and millennial timescales in a set of time slice experiments for the period 22–0 ka using an isotope-enabled atmospheric general circulation model (AGCM). Our study confirms the robustness of the temperature and amount effects on the seasonal cycle over China in the present climatic conditions, with the temperature effect dominating in northern China and the amount effect dominating in the far south of China but no distinct effect in the transition region of central China. However, our analysis shows that neither temperature nor amount effect is significantly dominant over China on millennial and interannual timescales, which is a challenge to those classic assumptions in past climate reconstruction. This work helps shed light on the interpretation of the proxy record of δ18O from a modeling point of view.
The unique world of the Everett version of quantum theory
NASA Astrophysics Data System (ADS)
Squires, Euan J.
1988-03-01
We ask whether the basic Everett assumption, that there are no changes of the wavefunction other than those given by the Schrödinger equation, is compatible with experience. We conclude that it is, provided we allow the world of observation to be partially a creation of consciousness. The model suggests the possible existence of quantum paranormal effects.
ERIC Educational Resources Information Center
Ramseyer, Gary C.; Tcheng, Tse-Kia
The present study was directed at determining the extent to which the Type I Error rate is affected by violations in the basic assumptions of the q statistic. Monte Carlo methods were employed, and a variety of departures from the assumptions were examined. (Author)
Boundary layer transition: A review of theory, experiment and related phenomena
NASA Technical Reports Server (NTRS)
Kistler, E. L.
1971-01-01
The overall problem of boundary layer flow transition is reviewed. Evidence indicates a need for new, basic physical hypotheses in classical fluid mechanics math models based on the Navier-Stokes equations. The Navier-Stokes equations are challenged as inadequate for the investigation of fluid transition, since they are based on several assumptions which should be expected to alter significantly the stability characteristics of the resulting math model. Strong prima facie evidence is presented to this effect.
Spreading dynamics on complex networks: a general stochastic approach.
Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J
2014-12-01
Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
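For context, the kind of agent-based baseline that such motif-based frameworks aim to approximate can be sketched in a few lines (rules and parameters are assumed here and are not the authors' formalism): discrete-time stochastic SIR dynamics on an arbitrary contact network.

    # Minimal stochastic SIR simulation on a contact network (illustrative only).
    import random

    def sir_on_network(adj, beta=0.1, gamma=0.05, seed_node=0, steps=200):
        state = {v: "S" for v in adj}
        state[seed_node] = "I"
        history = []
        for _ in range(steps):
            new_state = dict(state)
            for v, s in state.items():
                if s == "I":
                    if random.random() < gamma:          # recovery
                        new_state[v] = "R"
                    for u in adj[v]:                      # transmission attempts
                        if state[u] == "S" and random.random() < beta:
                            new_state[u] = "I"
            state = new_state
            history.append(sum(1 for s in state.values() if s == "I"))
        return history

    # Toy ring network of 50 nodes
    ring = {i: [(i - 1) % 50, (i + 1) % 50] for i in range(50)}
    print("peak infected:", max(sir_on_network(ring)))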
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kline, Keith L; Oladosu, Gbadebo A; Dale, Virginia H
2011-01-01
Vigorous debate on the effects of biofuels derives largely from the changes in land use estimated using economic models designed mainly for the analysis of agricultural trade and markets. The models referenced for land-use change (LUC) analysis in the U.S. Environmental Protection Agency Final Rule on the Renewable Fuel Standard include GTAP, FAPRI-CARD, and FASOM. To address bioenergy impacts, these models were expanded and modified to facilitate simulations of hypothesized LUC. However, even when models use similar basic assumptions and data, the range of LUC results can vary by ten-fold or more. While the market dynamics simulated in these models include processes that are important in estimating effects of biofuel policies, the models have not been validated for estimating land-use changes and employ crucial assumptions and simplifications that contradict empirical evidence.
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
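The "improperness" of the conventional binormal ROC curve is easy to reproduce numerically. The quick illustration below uses the standard binormal form TPF = Phi(a + b*Phi^-1(FPF)) with assumed parameters; it is not the paper's bi-chi-squared derivation.

    import numpy as np
    from scipy.stats import norm

    a, b = 1.0, 0.5                              # binormal intercept and slope
    fpf = np.array([0.1, 0.5, 0.9, 0.99, 0.999, 0.999999])
    tpf = norm.cdf(a + b * norm.ppf(fpf))
    print(np.round(tpf, 6))
    # For b < 1 the curve crosses the chance line near FPF = 1
    # (here at FPF = Phi(a/(1-b)) ~ 0.977), producing the "hook"
    # the abstract describes.
    print(tpf < fpf)                             # True entries mark the improper region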
Schultze-Lutter, F
2016-12-01
The early detection of psychoses has become increasingly relevant in research and clinical practice. Next to the ultra-high risk (UHR) approach that targets an immediate risk of developing frank psychosis, the basic symptom approach that targets the earliest possible detection of the developing disorder is being increasingly used worldwide. The present review gives an introduction to the development and basic assumptions of the basic symptom concept, summarizes the results of studies on the specificity of basic symptoms for psychoses in different age groups as well as studies of their psychosis-predictive value, and gives an outlook on future developments. Moreover, a brief introduction to recent first imaging studies is given that support one of the main assumptions of the basic symptom concept, i.e., that basic symptoms are the most immediate phenomenological expression of the cerebral aberrations underlying the development of psychosis. From this, it is concluded that basic symptoms might be able to provide important information for future neurobiological research on the etiopathology of psychoses. © Georg Thieme Verlag KG Stuttgart · New York.
Why is metal bioaccumulation so variable? Biodynamics as a unifying concept
Luoma, Samuel N.; Rainbow, Philip S.
2005-01-01
Ecological risks from metal contaminants are difficult to document because responses differ among species, threats differ among metals, and environmental influences are complex. Unifying concepts are needed to better tie together such complexities. Here we suggest that a biologically based conceptualization, the biodynamic model, provides the necessary unification for a key aspect in risk: metal bioaccumulation (internal exposure). The model is mechanistically based, but empirically considers geochemical influences, biological differences, and differences among metals. Forecasts from the model agree closely with observations from nature, validating its basic assumptions. The biodynamic metal bioaccumulation model combines targeted, high-quality geochemical analyses from a site of interest with parametrization of key physiological constants for a species from that site. The physiological parameters include metal influx rates from water, influx rates from food, rate constants of loss, and growth rates (when high). We compiled results from 15 publications that forecast species-specific bioaccumulation, and compare the forecasts to bioaccumulation data from the field. These data consider concentrations that cover 7 orders of magnitude. They include 7 metals and 14 species of animals from 3 phyla and 11 marine, estuarine, and freshwater environments. The coefficient of determination (R2) between forecasts and independently observed bioaccumulation from the field was 0.98. Most forecasts agreed with observations within 2-fold. The agreement suggests that the basic assumptions of the biodynamic model are tenable. A unified explanation of metal bioaccumulation sets the stage for a realistic understanding of toxicity and ecological effects of metals in nature.
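For readers unfamiliar with the biodynamic formulation, its core balance reduces to a one-line steady state combining aqueous uptake, dietary uptake, efflux and growth dilution. The parameter values below are invented for illustration and are not taken from the 15 compiled studies.

    # Steady-state biodynamic bioaccumulation: Css = (ku*Cw + AE*IR*Cf) / (ke + g)
    def biodynamic_steady_state(ku, c_water, ae, ir, c_food, ke, g=0.0):
        """Tissue concentration at steady state (units follow the inputs)."""
        return (ku * c_water + ae * ir * c_food) / (ke + g)

    # Hypothetical filter feeder: uptake rate constant 0.5 L/g/d, assimilation
    # efficiency 0.6, ingestion rate 0.2 g/g/d, efflux 0.02 /d, growth 0.005 /d
    print(biodynamic_steady_state(ku=0.5, c_water=0.01, ae=0.6, ir=0.2,
                                  c_food=5.0, ke=0.02, g=0.005))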
Heterogeneity, Mixing, and the Spatial Scales of Mosquito-Borne Pathogen Transmission
Perkins, T. Alex; Scott, Thomas W.; Le Menach, Arnaud; Smith, David L.
2013-01-01
The Ross-Macdonald model has dominated theory for mosquito-borne pathogen transmission dynamics and control for over a century. The model, like many other basic population models, makes the mathematically convenient assumption that populations are well mixed; i.e., that each mosquito is equally likely to bite any vertebrate host. This assumption raises questions about the validity and utility of current theory because it is in conflict with preponderant empirical evidence that transmission is heterogeneous. Here, we propose a new dynamic framework that is realistic enough to describe biological causes of heterogeneous transmission of mosquito-borne pathogens of humans, yet tractable enough to provide a basis for developing and improving general theory. The framework is based on the ecological context of mosquito blood meals and the fine-scale movements of individual mosquitoes and human hosts that give rise to heterogeneous transmission. Using this framework, we describe pathogen dispersion in terms of individual-level analogues of two classical quantities: vectorial capacity and the basic reproductive number, R0. Importantly, this framework explicitly accounts for three key components of overall heterogeneity in transmission: heterogeneous exposure, poor mixing, and finite host numbers. Using these tools, we propose two ways of characterizing the spatial scales of transmission—pathogen dispersion kernels and the evenness of mixing across scales of aggregation—and demonstrate the consequences of a model's choice of spatial scale for epidemic dynamics and for estimation of R0, both by a priori model formulas and by inference of the force of infection from time-series data. PMID:24348223
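As a hedged illustration of why heterogeneous exposure matters (a construction of ours, not the authors' framework), one classical way to express the effect is that the basic reproductive number is inflated relative to the well-mixed value by a factor of roughly 1 + CV^2 when biting is proportional to individual exposure weights:

    # Inflation of R0 under heterogeneous exposure, R0_het ~ R0_mix * (1 + CV^2),
    # where CV is the coefficient of variation of individual exposure weights.
    import numpy as np

    def r0_inflation(exposure_weights):
        w = np.asarray(exposure_weights, dtype=float)
        cv2 = w.var() / w.mean() ** 2        # squared coefficient of variation
        return 1.0 + cv2

    rng = np.random.default_rng(1)
    weights = rng.lognormal(mean=0.0, sigma=1.0, size=1000)   # skewed exposure
    print("R0 inflation factor:", round(r0_inflation(weights), 2))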
NASA Astrophysics Data System (ADS)
Fontaine, G.; Dufour, P.; Chayer, P.; Dupuis, J.; Brassard, P.
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. Inferences on the process based on diffusion timescale arguments make the implicit assumption that the concentration gradient of a given metal at the base of the convection zone is negligible. This assumption is, in fact, not rigorously valid, but it allows the decoupling of the surface abundance from the evolving distribution of a given metal in deeper layers. A better approach is a full time-dependent calculation of the evolution of the abundance profile of an accreting-diffusing element. We used the same approach as that developed by Dupuis et al. to model accretion episodes involving many more elements than those considered by these authors. Our calculations incorporate the improvements to diffusion physics mentioned in Paper I. The basic assumption in the Dupuis et al. approach is that the accreted metals are trace elements, i.e., that they have no effects on the background (DA or non-DA) stellar structure. This allows us to consider an arbitrary number of accreting elements.
Application Note: Power Grid Modeling With Xyce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sholander, Peter E.
This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce(TM) Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.
The Trouble with Levels: A Reexamination of Craik and Lockhart's Framework for Memory Research
ERIC Educational Resources Information Center
Baddeley, Alan D.
1978-01-01
Begins by discussing a number of problems in applying a levels-of-processing approach to memory as proposed in the late 1960s and then revised in 1972 by Craik and Lockhart, suggests that some of the basic assumptions are false, and argues for information-processing models devised to study working memory and reading, which aim to explore the…
Adaptive control: Myths and realities
NASA Technical Reports Server (NTRS)
Athans, M.; Valavani, L.
1984-01-01
It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law and, need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as absence of nonminimum phase plant zeros on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.
Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.
Martin, Guillaume
2014-05-01
Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.
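A minimal simulation of the kind of landscape the paper arrives at (parameter choices below are assumed, not fitted to the empirical data set) shows how an isotropic Fisher geometrical model turns random multivariate phenotypic perturbations into a distribution of mutation effects on fitness:

    # Sampling selection coefficients under an isotropic Fisher geometrical model
    # with a Gaussian fitness peak and Gaussian mutational effects.
    import numpy as np

    def fgm_fitness_effects(n_traits=10, sigma_mut=0.05, dist_to_optimum=0.3,
                            n_mut=10000, seed=2):
        rng = np.random.default_rng(seed)
        z = np.zeros(n_traits)
        z[0] = dist_to_optimum                      # current phenotype
        dz = rng.normal(0.0, sigma_mut, (n_mut, n_traits))
        fitness = lambda p: np.exp(-0.5 * np.sum(p ** 2, axis=-1))
        return fitness(z + dz) - fitness(z)         # selection coefficients

    s = fgm_fitness_effects()
    print("fraction of beneficial mutations:", (s > 0).mean())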
Nechtelberger, Andrea; Renner, Walter; Nechtelberger, Martin; Supeková, Soňa Chovanová; Hadjimarkou, Maria; Offurum, Chino; Ramalingam, Panchalan; Senft, Birgit; Redfern, Kylie
2017-01-01
The United Nations Academic Impact (UNAI) Initiative has set forth 10 Basic Principles for higher education. In the present study, a 10 item self-report questionnaire measuring personal endorsement of these principles has been tested by self-report questionnaires with university and post-graduate students from Austria, China, Cyprus, India, Nigeria, and Slovakia (total N = 976, N = 627 female, mean age 24.7 years, s = 5.7). Starting from the assumptions of Moral Foundations Theory (MFT), we expected that personal attitudes toward the UNAI Basic Principles would be predicted by endorsement of various moral foundations as suggested by MFT and by the individual's degree of globalization. Whereas for the Austrian, Cypriot, and Nigerian sub- samples this assumption was largely confirmed, for the Chinese, Indian, and Slovak sub- samples only small amounts of the variance could be explained by regression models. All six sub-samples differed substantially with regard to their overall questionnaire responses: by five discriminant functions 83.6% of participants were classified correctly. We conclude that implementation of UNAI principles should adhere closely to the cultural requirements of the respective society and, where necessary should be accompanied by thorough informational campaigns about UN educational goals. PMID:29180977
Can Basic Research on Children and Families Be Useful for the Policy Process?
ERIC Educational Resources Information Center
Moore, Kristin A.
Based on the assumption that basic science is the crucial building block for technological and biomedical progress, this paper examines the relevance for public policy of basic demographic and behavioral sciences research on children and families. The characteristics of basic research as they apply to policy making are explored. First, basic…
Refraction effects of atmosphere on geodetic measurements to celestial bodies
NASA Technical Reports Server (NTRS)
Joshi, C. S.
1973-01-01
The problem is considered of obtaining accurate values of refraction corrections for geodetic measurements of celestial bodies. The basic principles of optics governing the phenomenon of refraction are defined, and differential equations are derived for the refraction corrections. The corrections fall into two main categories: (1) refraction effects due to change in the direction of propagation, and (2) refraction effects mainly due to change in the velocity of propagation. The various assumptions made by earlier investigators are reviewed along with the basic principles of improved models designed by investigators of the twentieth century. The accuracy problem for various quantities is discussed, and the conclusions and recommendations are summarized.
Sawan, Mouna; Jeon, Yun-Hee; Chen, Timothy F
2018-04-01
Psychotropic medicines have limited efficacy in the management of behavioural and psychological disturbances, yet they are commonly used in nursing homes. Organisational culture is an important consideration influencing use of psychotropic medicines. Schein's theory elucidates that organisational culture is underpinned by basic assumptions, which are the taken for granted beliefs driving organisational members' behaviour and practices. By exploring the basic assumptions of culture we are able to find explanations for why psychotropic medicines are prescribed contrary to standards. A qualitative study guided by Schein's theory was conducted using semi-structured interviews with 40 staff representing a broad range of roles from eight nursing homes. Findings from the study suggest two basic assumptions influenced the use of psychotropic medicines: locus of control and necessity for efficiency or comprehensiveness. Locus of control pertained to whether staff believed they could control decisions when facing negative work experiences. Necessity for efficiency or comprehensiveness concerned how much time and effort was spent on a given task. Participants arrived at decisions to use psychotropic medicines that were inconsistent with ideal standards when they believed they were helpless to do the right thing by the resident and it was necessary to restrict time on a given task. Basic assumptions tended to provide the rationale for staff to use psychotropic medicines when it was not compatible with standards. Organisational culture is an important factor that should be addressed to optimise psychotropic medicine use. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dendrite and Axon Specific Geometrical Transformation in Neurite Development
Mironov, Vasily I.; Semyanov, Alexey V.; Kazantsev, Victor B.
2016-01-01
We propose a model of neurite growth to explain the differences in dendrite and axon specific neurite development. The model implements basic molecular kinetics, e.g., building protein synthesis and transport to the growth cone, and includes explicit dependence of the building kinetics on the geometry of the neurite. The basic assumption was that the radius of the neurite decreases with length. We found that the neurite dynamics crucially depended on the relationship between the rate of active transport and the rate of morphological changes. If these rates were in the balance, then the neurite displayed axon specific development with a constant elongation speed. For dendrite specific growth, the maximal length was rapidly saturated by degradation of building protein structures or limited by proximal part expansion reaching the characteristic cell size. PMID:26858635
Space-time dynamics of Stem Cell Niches: a unified approach for Plants.
Pérez, Maria Del Carmen; López, Alejandro; Padilla, Pablo
2013-06-01
Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of these models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to model the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions on the cell communication process documented in experiments, we classify the basic developmental features of the stem-cell niche, including the basic structural architecture, and suggest that they could be understood as the result of generic mechanisms given by short and long range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. Also the fact that this organization is a robust consequence of the way information is being processed by the cells and to some extent independent of the detailed features of the signaling mechanism.
Jackson, Charlotte; Mangtani, Punam; Hawker, Jeremy; Olowokure, Babatunde; Vynnycky, Emilia
2014-01-01
School closure is a potential intervention during an influenza pandemic and has been investigated in many modelling studies. To systematically review the effects of school closure on influenza outbreaks as predicted by simulation studies. We searched Medline and Embase for relevant modelling studies published by the end of October 2012, and handsearched key journals. We summarised the predicted effects of school closure on the peak and cumulative attack rates and the duration of the epidemic. We investigated how these predictions depended on the basic reproduction number, the timing and duration of closure and the assumed effects of school closures on contact patterns. School closures were usually predicted to be most effective if they caused large reductions in contact, if transmissibility was low (e.g. a basic reproduction number <2), and if attack rates were higher in children than in adults. The cumulative attack rate was expected to change less than the peak, but quantitative predictions varied (e.g. reductions in the peak were frequently 20-60% but some studies predicted >90% reductions or even increases under certain assumptions). This partly reflected differences in model assumptions, such as those regarding population contact patterns. Simulation studies suggest that school closure can be a useful control measure during an influenza pandemic, particularly for reducing peak demand on health services. However, it is difficult to accurately quantify the likely benefits. Further studies of the effects of reactive school closures on contact patterns are needed to improve the accuracy of model predictions.
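A toy compartmental calculation (illustrative parameters only, not one of the reviewed models) can be used to compare peak prevalence and cumulative attack rate with and without a closure window that temporarily reduces the contact rate:

    # Deterministic SIR with a temporary reduction in transmission during closure.
    def sir_with_closure(r0=1.8, gamma=1/3, days=300, closure=(40, 80), reduction=0.3):
        beta0 = r0 * gamma
        s, i, r = 0.999, 0.001, 0.0
        peak = 0.0
        for day in range(days):
            beta = beta0 * (1 - reduction) if closure[0] <= day < closure[1] else beta0
            new_inf = beta * s * i
            s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
            peak = max(peak, i)
        return peak, r          # peak prevalence, cumulative attack rate

    print("no closure :", sir_with_closure(reduction=0.0))
    print("closure 30%:", sir_with_closure(reduction=0.3))

Such a sketch reproduces the qualitative pattern described in the review: the peak typically changes more than the cumulative attack rate, and the size of both effects depends on the assumed transmissibility and contact reduction.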
Techniques for the computation in demographic projections of health manpower.
Horbach, L
1979-01-01
Some basic principles and algorithms are presented which can be used for projective calculations of medical staff on the basis of demographic data. The effects of modifications of the input data, such as those produced by health policy measures concerning training capacity, can be demonstrated by repeated calculations under alternative assumptions. Such models give a variety of results and may highlight the probable future balance between health manpower supply and requirements.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next the Latin hypercube sampling (LHS) technique--a modified version of the Monte Carlo method--is briefly described. The essential uncertainty analysis of the flood-stage upstream from a bridge starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables including a specification of their statistics (means and variances). Next the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
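A bare-bones version of the LHS step (much simplified relative to the UNCSAM package used in the paper) stratifies each basic variable into equiprobable intervals, draws one sample per interval, and shuffles the columns:

    # Minimal Latin hypercube sampler on the unit hypercube; map the columns
    # through the inverse CDFs of the basic variables to obtain actual samples.
    import numpy as np

    def latin_hypercube(n_samples, n_vars, rng=None):
        rng = rng or np.random.default_rng(3)
        u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(n_vars):
            rng.shuffle(u[:, j])
        return u

    samples = latin_hypercube(100, 4)
    print(samples.min(axis=0), samples.max(axis=0))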
Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
Teaching Practices: Reexamining Assumptions.
ERIC Educational Resources Information Center
Spodek, Bernard, Ed.
This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to draw inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
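As a hedged illustration of the practice the contribution discusses (standard tests on synthetic data, not the RT class itself), one can screen least-squares residuals before trusting t- or F-based inference:

    # Fit a linear model with heavy-tailed errors and test residual normality.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = rng.normal(size=(200, 2))
    y = 1.0 + x @ np.array([2.0, -1.0]) + rng.standard_t(df=3, size=200)

    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    print("Shapiro-Wilk p =", stats.shapiro(resid).pvalue)
    print("Jarque-Bera  p =", stats.jarque_bera(resid).pvalue)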
Rocca, Elena; Andersen, Fredrik
2017-08-14
Scientific risk evaluations are constructed by specific evidence, value judgements and biological background assumptions. The latter are the framework-setting suppositions we apply in order to understand some new phenomenon. That background assumptions co-determine choice of methodology, data interpretation, and choice of relevant evidence is an uncontroversial claim in modern basic science. Furthermore, it is commonly accepted that, unless explicated, disagreements in background assumptions can lead to misunderstanding as well as miscommunication. Here, we extend the discussion on background assumptions from basic science to the debate over genetically modified (GM) plants risk assessment. In this realm, while the different political, social and economic values are often mentioned, the identity and role of background assumptions at play are rarely examined. We use an example from the debate over risk assessment of stacked genetically modified plants (GM stacks), obtained by applying conventional breeding techniques to GM plants. There are two main regulatory practices of GM stacks: (i) regulate as conventional hybrids and (ii) regulate as new GM plants. We analyzed eight papers representative of these positions and found that, in all cases, additional premises are needed to reach the stated conclusions. We suggest that these premises play the role of biological background assumptions and argue that the most effective way toward a unified framework for risk analysis and regulation of GM stacks is by explicating and examining the biological background assumptions of each position. Once explicated, it is possible to either evaluate which background assumptions best reflect contemporary biological knowledge, or to apply Douglas' 'inductive risk' argument.
Sampling Assumptions in Inductive Generalization
ERIC Educational Resources Information Center
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
Performance evaluation of Olympic weightlifters.
Garhammer, J
1979-01-01
The comparison of weights lifted by athletes in different bodyweight categories is a continuing problem for the sport of olympic weightlifting. An objective mechanical evaluation procedure was developed using basic ideas from a model proposed by Ranta in 1975. This procedure was based on more realistic assumptions than the original model and considered both vertical and horizontal bar movements. Utilization of data obtained from film of national caliber lifters indicated that the proposed method was workable, and that the evaluative indices ranked lifters in reasonable order relative to other comparative techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandler, S.I.
1986-01-01
The objective of the work is to use the generalized van der Waals theory, as derived earlier ("The Generalized van der Waals Partition Function I. Basic Theory" by S.I. Sandler, Fluid Phase Equilibria 19, 233 (1985)) to: (1) understand the molecular level assumptions inherent in current thermodynamic models; (2) use theory and computer simulation studies to test these assumptions; and (3) develop new, improved thermodynamic models based on better molecular level assumptions. From such a fundamental study, thermodynamic models will be developed that will be applicable to mixtures of molecules of widely different size and functionality, as occurs in the processing of heavy oils, coal liquids and other synthetic fuels. An important aspect of our work is to reduce our fundamental theoretical developments to engineering practice through extensive testing and evaluation with experimental data on real mixtures. During the first year of this project important progress was made in the areas specified in the original proposal, as well as several subsidiary areas identified as the work progressed. Some of this work has been written up and submitted for publication. Manuscripts acknowledging DOE support, together with a very brief description, are listed herein.
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France - and capturing both within and among site variation in putative controls - we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
Predictability of currency market exchange
NASA Astrophysics Data System (ADS)
Ohira, Toru; Sazuka, Naoya; Marumo, Kouhei; Shimizu, Tokiko; Takayasu, Misako; Takayasu, Hideki
2002-05-01
We analyze tick data of yen-dollar exchange with a focus on its up and down movement. We show that there exists a rather particular conditional probability structure with such high frequency data. This result provides us with evidence to question one of the basic assumptions of the traditional market theory, where such bias in high frequency price movements is regarded as not present. We also construct systematically a random walk model reflecting this probability structure.
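The kind of conditional structure described can be estimated directly from a sequence of tick directions. The sketch below is an independent construction, not the authors' analysis of the yen-dollar data:

    # Estimate P(up | preceding up/down pattern) from a sequence of +1/-1 moves.
    from collections import Counter

    def conditional_up_prob(moves, lag=2):
        counts, ups = Counter(), Counter()
        for t in range(lag, len(moves)):
            key = tuple(moves[t - lag:t])
            counts[key] += 1
            if moves[t] == 1:
                ups[key] += 1
        return {k: ups[k] / counts[k] for k in counts}

    # Toy sequence; real tick data would contain millions of moves
    print(conditional_up_prob([1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1]))

Under the traditional assumption of no predictability, every conditional probability would be close to 0.5; systematic departures are the bias the abstract refers to.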
Belief Structures about People Held by Selected Graduate Students.
ERIC Educational Resources Information Center
Dole, Arthur A.; And Others
Wrightsman has established that assumptions about human nature distinguish religious, occupational, political, gender, and other groups, and that they predict behavior in structured situations. Hjelle and Ziegler proposed a set of nine basic bipolar assumptions about the nature of people: freedom-determinism; rationality-irrationality;…
Turbulent Convection: Is 2D a good proxy of 3D?
NASA Technical Reports Server (NTRS)
Canuto, V. M.
2000-01-01
Several authors have recently carried out 2D simulations of turbulent convection for both solar and massive stars. Fitting the 2D results with the MLT, they obtain alpha_MLT > 1, specifically 1.4 < alpha_MLT < 1.8. The authors further suggest that this methodology could be used to calibrate the MLT used in stellar evolutionary codes. We suggest the opposite viewpoint: the 2D results show that MLT is internally inconsistent because the resulting alpha_MLT > 1 violates the MLT basic assumption that alpha_MLT < 1. When the 2D results are fitted with the CM model, alpha_CM < 1, in accord with the basic tenet of the model. On the other hand, since both MLT and CM are local models, they should be replaced by the next generation of non-local, time dependent turbulence models which we discuss in some detail.
Model-based analysis of keratin intermediate filament assembly
NASA Astrophysics Data System (ADS)
Martin, Ines; Leitner, Anke; Walther, Paul; Herrmann, Harald; Marti, Othmar
2015-09-01
The cytoskeleton of epithelial cells consists of three types of filament systems: microtubules, actin filaments and intermediate filaments (IFs). Here, we took a closer look at type I and type II IF proteins, i.e. keratins. They are hallmark constituents of epithelial cells and are responsible for the generation of stiffness, the cellular response to mechanical stimuli and the integrity of entire cell layers. Thereby, keratin networks constitute an important instrument for cells to adapt to their environment. In particular, we applied models to characterize the assembly of keratin K8 and K18 into elongated filaments as a means for network formation. For this purpose, we measured the length of in vitro assembled keratin K8/K18 filaments by transmission electron microscopy at different time points. We evaluated the experimental data of the longitudinal annealing reaction using two models from polymer chemistry: the Schulz-Zimm model and the condensation polymerization model. In both scenarios one has to make assumptions about the reaction process. We compare how well the models fit the measured data and thus determine which assumptions fit best. Based on mathematical modelling of experimental filament assembly data we define basic mechanistic properties of the elongation reaction process.
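As a simplified illustration of the condensation-polymerization alternative (assumptions here are generic, not the fitted models of the paper), end-to-end annealing at extent of reaction p yields a geometric ("most probable") length distribution whose single parameter can be read off the mean filament length:

    # Flory-type condensation: P(n) = (1 - p) * p**(n - 1), mean length 1/(1 - p).
    def fit_condensation(mean_subunits_per_filament):
        """Return extent of reaction p and the predicted length distribution."""
        p = 1.0 - 1.0 / mean_subunits_per_filament
        pmf = lambda n: (1.0 - p) * p ** (n - 1)
        return p, pmf

    p, pmf = fit_condensation(mean_subunits_per_filament=8.0)
    print("p =", round(p, 3), " P(n=8) =", round(pmf(8), 4))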
Thin Skin, Deep Damage: Addressing the Wounded Writer in the Basic Writing Course
ERIC Educational Resources Information Center
Boone, Stephanie D.
2010-01-01
How do institutions and their writing faculties see basic writers? What assumptions about these writers drive writing curricula, pedagogies and assessments? How do writing programs enable or frustrate these writers? How might course design facilitate the outcomes we envision? This article argues that, in order to teach basic writers to enter…
Writing Partners: Service Learning as a Route to Authority for Basic Writers
ERIC Educational Resources Information Center
Gabor, Catherine
2009-01-01
This article looks at best practices in basic writing instruction in terms of non-traditional audiences and writerly authority. Much conventional wisdom discourages participation in service-learning projects for basic writers because of the assumption that their writing is not yet ready to "go public." Countering this line of thinking, the author…
Study on low intensity aeration oxygenation model and optimization for shallow water
NASA Astrophysics Data System (ADS)
Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi
2018-02-01
Aeration/oxygenation is an effective measure to improve self-purification capacity in shallow water treatment, but high energy consumption, high noise and expensive management have restrained the development and application of this process. Based on two-film theory, the theoretical model of the three-dimensional partial differential equation of aeration in shallow water is established. In order to simplify the equation, the basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice and are tested against the simulated gas holdup obtained from gas-liquid two-phase flow in an aeration tank under low-intensity conditions. Based on these assumptions and the theory of shallow permeability, the three-dimensional partial differential equations are simplified and a calculation model of low-intensity aeration oxygenation is obtained. The model is verified by comparison with an aeration experiment. The conclusions are as follows: (1) the calculation model of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation is theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, multipoint distributed aeration has a clear effect on the diffusion of oxygen concentration in the horizontal direction; (4) in shallow water treatment, miniaturized, arrayed, low-intensity and mobile aeration equipment can overcome the problems of high energy consumption, large size and noise, and provides a useful reference.
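The vertical mass-transfer building block of such models is the familiar two-film relation dC/dt = kLa*(Cs - C). A minimal sketch with illustrative constants (not the paper's three-dimensional model) is:

    # Dissolved-oxygen response to aeration under the two-film transfer relation.
    def oxygen_transfer(c0=2.0, c_sat=9.0, kla_per_hr=0.8, hours=10.0, dt=0.01):
        c, t = c0, 0.0
        series = [(t, c)]
        while t < hours:
            c += dt * kla_per_hr * (c_sat - c)   # mg/L change per time step
            t += dt
            series.append((t, c))
        return series

    print(oxygen_transfer()[-1])   # dissolved oxygen approaching saturation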
Rumor spreading model with the different attitudes towards rumors
NASA Astrophysics Data System (ADS)
Hu, Yuhan; Pan, Qiuhui; Hou, Wenbing; He, Mingfeng
2018-07-01
Rumor spreading has a profound influence on people's well-being and social stability. There are many factors influencing rumor spreading. In this paper, we propose the assumption that among the general public there are three attitudes towards rumors: to like rumor spreading, to dislike rumor spreading, and to be hesitant (or neutral) about rumor spreading. Based on this assumption, a Susceptible-Hesitating-Affected-Resistant (SHAR) model is established, which considers individuals' different attitudes towards rumor spreading. We also analyzed the local and global stability of the rumor-free equilibrium and the rumor-existence equilibrium, and calculated the basic reproduction number of our model. With numerical simulations, we illustrated the effect of parameter changes on rumor spreading and analyzed the parameter sensitivity of the model. The results of the theoretical analysis and numerical simulations support the conclusions of this study. People having different attitudes towards rumors may play different roles in the process of rumor spreading. It was surprising to find, in our research, that people who hesitate to spread rumors have a positive effect on the spread of rumors.
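A deterministic sketch of an SHAR-type compartmental system is given below; the transition structure and rates are hypothetical stand-ins, since the paper's exact equations are not reproduced here:

    # Toy SHAR dynamics: susceptibles become affected spreaders or hesitators on
    # contact with spreaders; hesitators may start spreading; spreaders retire.
    import numpy as np

    def shar_model(beta_a=0.4, beta_h=0.2, theta=0.1, delta=0.15, days=200, dt=0.1):
        s, h, a, r = 0.99, 0.0, 0.01, 0.0
        for _ in range(int(days / dt)):
            new_a = beta_a * s * a * dt          # susceptibles who like spreading
            new_h = beta_h * s * a * dt          # susceptibles who hesitate
            h_to_a = theta * h * dt              # hesitators who start spreading
            a_to_r = delta * a * dt              # spreaders who lose interest
            s -= new_a + new_h
            h += new_h - h_to_a
            a += new_a + h_to_a - a_to_r
            r += a_to_r
        return s, h, a, r

    print(np.round(shar_model(), 3))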
A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.
Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H
2001-03-01
The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180 degrees geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from the basic assumptions which the two codes use in their calculations. Both codes assume a "free" electron model for Compton interactions. This assumption will underestimate the results and undermines direct comparison between predicted and experimental spectra.
Introduction to the Application of Web-Based Surveys.
ERIC Educational Resources Information Center
Timmerman, Annemarie
This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…
School, Cultural Diversity, Multiculturalism, and Contact
ERIC Educational Resources Information Center
Pagani, Camilla; Robustelli, Francesco; Martinelli, Cristina
2011-01-01
The basic assumption of this paper is that school's potential to improve cross-cultural relations, as well as interpersonal relations in general, is enormous. This assumption is supported by a number of theoretical considerations and by the analysis of data we obtained from a study we conducted on the attitudes toward diversity and…
Small area estimation for estimating the number of infant mortality in West Java, Indonesia
NASA Astrophysics Data System (ADS)
Anggreyani, Arie; Indahwati, Kurnia, Anang
2016-02-01
Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey to provide information regarding birth rate, mortality rate, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was conducted is 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant mortality cases in districts of West Java. SAE is a special model of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation used the basic area level model. Mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.
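As a hedged illustration of the overdispersion handling described (synthetic counts, not DHSI 2012 records), a quasi-Poisson fit can be obtained in statsmodels by rescaling a Poisson GLM with the Pearson chi-square dispersion:

    # Area-level count model with quasi-likelihood (quasi-Poisson) dispersion.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n_area = 27                                   # e.g. districts
    x = rng.normal(size=n_area)                   # an area-level covariate
    mu = np.exp(1.5 + 0.4 * x)
    deaths = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

    X = sm.add_constant(x)
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit(scale="X2")
    print(fit.params, "estimated dispersion:", fit.scale)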
Nuclear Reactions in Micro/Nano-Scale Metal Particles
NASA Astrophysics Data System (ADS)
Kim, Y. E.
2013-03-01
Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory is based on a single basic assumption capable of explaining the observed LENR phenomena; deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative predictive physical theory. Experimental tests of the basic assumption and theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. Generalized theory of BECNF is used to carry out theoretical analyses of recently reported experimental results for hydrogen-nickel system.
How well do basic models describe the turbidity currents coming down Monterey and Congo Canyon?
NASA Astrophysics Data System (ADS)
Cartigny, M.; Simmons, S.; Heerema, C.; Xu, J. P.; Azpiroz, M.; Clare, M. A.; Cooper, C.; Gales, J. A.; Maier, K. L.; Parsons, D. R.; Paull, C. K.; Sumner, E. J.; Talling, P.
2017-12-01
Turbidity currents rival rivers in their global capacity to transport sediment and organic carbon. Furthermore, turbidity currents break submarine cables that now transport >95% of our global data traffic. Accurate turbidity current models are thus needed to quantify their transport capacity and to predict the forces exerted on seafloor structures. Despite this need, existing numerical models are typically only calibrated with scaled-down laboratory measurements due to the paucity of direct measurements of field-scale turbidity currents. This lack of calibration thus leaves much uncertainty in the validity of existing models. Here we use the most detailed observations of turbidity currents yet acquired to validate one of the most fundamental models proposed for turbidity currents, the modified Chézy model. Direct measurements on which the validation is based come from two sites that feature distinctly different flow modes and grain sizes. The first are from the multi-institution Coordinated Canyon Experiment (CCE) in Monterey Canyon, California. An array of six moorings along the canyon axis captured at least 15 flow events that lasted up to hours. The second is the deep-sea Congo Canyon, where 10 finer-grained flows were measured by a single mooring, each lasting several days. Moorings captured depth-resolved velocity and suspended sediment concentration at high resolution (<30 seconds) for each of the 25 events. We use both datasets to test the most basic model available for turbidity currents: the modified Chézy model. This basic model has been very useful for river studies over the past 200 years, as it provides a rapid estimate of how flow velocity varies with changes in river level and energy slope. Chézy-type models assume that the gravitational force of the flow equals the friction of the river-bed. Modified Chézy models have been proposed for turbidity currents. However, the absence of detailed measurements of friction and sediment concentration within full-scale turbidity currents has forced modellers to make rough assumptions for these parameters. Here we use mooring data to deduce observation-based relations that can replace the previous assumptions. This improvement will significantly enhance the model predictions and allow us to better constrain the behaviour of turbidity currents.
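For orientation, a textbook form of a modified Chézy balance is easy to state; the values below are assumed for illustration and are not the mooring-derived coefficients from this work:

    # Modified Chezy estimate: submerged gravity of the suspension balanced
    # against bed and interfacial friction, U = sqrt(R * C * g * h * S / cf).
    import math

    def chezy_velocity(concentration, thickness_m, slope, c_friction=0.004,
                       rho_sed=2650.0, rho_water=1000.0, g=9.81):
        R = (rho_sed - rho_water) / rho_water     # submerged specific density
        return math.sqrt(R * concentration * g * thickness_m * slope / c_friction)

    # e.g. a 0.2% volume concentration flow, 20 m thick, on a 0.5% slope
    print(round(chezy_velocity(0.002, 20.0, 0.005), 2), "m/s")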
Modelling the morphology of migrating bacterial colonies
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Tokihiro, T.; Badoual, M.; Grammaticos, B.
2010-08-01
We present a model which aims at describing the morphology of colonies of Proteus mirabilis and Bacillus subtilis. Our model is based on a cellular automaton which is obtained by the adequate discretisation of a diffusion-like equation, describing the migration of the bacteria, to which we have added rules simulating the consolidation process. Our basic assumption, following the findings of the group of Chuo University, is that the migration and consolidation processes are controlled by the local density of the bacteria. We show that it is possible within our model to reproduce the morphological diagrams of both bacteria species. Moreover, we model some detailed experiments done by the Chuo University group, obtaining a fine agreement.
Spectral properties of blast-wave models of gamma-ray burst sources
NASA Technical Reports Server (NTRS)
Meszaros, P.; Rees, M. J.; Papathanassiou, H.
1994-01-01
We calculate the spectrum of blast-wave models of gamma-ray burst sources, for various assumptions about the magnetic field density and the relativistic particle acceleration efficiency. For a range of physically plausible models we find that the radiation efficiency is high and leads to nonthermal spectra with breaks at various energies comparable to those observed in the gamma-ray range. Radiation is also predicted at other wavebands, in particular at X-ray, optical/UV, and GeV/TeV energies. We discuss the spectra as a function of duration for three basic types of models, and for cosmological, halo, and galactic disk distances. We also evaluate the gamma-ray fluences and the spectral characteristics for a range of external densities. Impulsive burst models at cosmological distances can satisfy the conventional X-ray paucity constraint S_x/S_gamma less than a few percent over a wide range of durations, but galactic models can do so only for bursts shorter than a few seconds, unless additional assumptions are made. The emissivity is generally larger for bursts in a denser external environment, with the efficiency increasing up to the point where all the energy input is radiated away.
Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip
2011-01-01
We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561
The Equations of Oceanic Motions
NASA Astrophysics Data System (ADS)
Müller, Peter
2006-10-01
Modeling and prediction of oceanographic phenomena and climate is based on the integration of dynamic equations. The Equations of Oceanic Motions derives and systematically classifies the most common dynamic equations used in physical oceanography, from large scale thermohaline circulations to those governing small scale motions and turbulence. After establishing the basic dynamical equations that describe all oceanic motions, Müller then derives approximate equations, emphasizing the assumptions made and physical processes eliminated. He distinguishes between geometric, thermodynamic and dynamic approximations and between the acoustic, gravity, vortical and temperature-salinity modes of motion. Basic concepts and formulae of equilibrium thermodynamics, vector and tensor calculus, curvilinear coordinate systems, and the kinematics of fluid motion and wave propagation are covered in appendices. Providing the basic theoretical background for graduate students and researchers of physical oceanography and climate science, this book will serve as both a comprehensive text and an essential reference.
Porto, Paolo; Walling, Des E
2012-10-01
Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed.
On Cognitive Constraints and Learning Progressions: The Case of "Structure of Matter"
ERIC Educational Resources Information Center
Talanquer, Vicente
2009-01-01
Based on the analysis of available research on students' alternative conceptions about the particulate nature of matter, we identified basic implicit assumptions that seem to constrain students' ideas and reasoning on this topic at various learning stages. Although many of these assumptions are interrelated, some of them seem to change or…
Rationality as the Basic Assumption in Explaining Japanese (or Any Other) Business Culture.
ERIC Educational Resources Information Center
Koike, Shohei
Economic analysis, with its explicit assumption that people are rational, is applied to the Japanese and American business cultures to illustrate how the approach is useful for understanding cultural differences. Specifically, differences in cooperative behavior among Japanese and American workers are examined. Economic analysis goes beyond simple…
Standardization of Selected Semantic Differential Scales with Secondary School Children.
ERIC Educational Resources Information Center
Evans, G. T.
A basic assumption of this study is that the meaning continuum registered by an adjective pair remains relatively constant over a large universe of concepts and over subjects within a relatively homogeneous population. An attempt was made to validate this assumption by showing the invariance of the factor structure across different types of…
What's Love Got to Do with It? Rethinking Common Sense Assumptions
ERIC Educational Resources Information Center
Trachman, Matthew; Bluestone, Cheryl
2005-01-01
One of the most basic tasks in introductory social science classes is to get students to reexamine their common sense assumptions concerning human behavior. This article introduces a shared assignment developed for a learning community that paired an introductory sociology and psychology class. The assignment challenges students to rethink the…
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
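A minimal sketch of the weighted ("augmented") EM idea can be written with a Gaussian naive Bayes model; this is an illustrative stand-in, not the phenotyping pipeline or classifier used in the paper, and the weighting factor lam, the synthetic data, and the two-class setup are all assumptions made here for demonstration.

```python
import numpy as np

def semisupervised_em(Xl, yl, Xu, lam=0.1, n_iter=20, n_classes=2):
    """EM for Gaussian naive Bayes; unlabeled data downweighted by lam (augmented EM)."""
    def fit(X, R):                       # M-step: R holds per-class sample weights
        w = R.sum(axis=0)                # effective class counts
        prior = w / w.sum()
        mu = (R.T @ X) / w[:, None]
        var = (R.T @ X**2) / w[:, None] - mu**2 + 1e-6
        return prior, mu, var

    def resp(X, prior, mu, var):         # E-step: posterior class probabilities
        logp = -0.5 * (((X[:, None, :] - mu)**2) / var + np.log(2 * np.pi * var)).sum(-1)
        logp += np.log(prior)
        logp -= logp.max(axis=1, keepdims=True)
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)

    Rl = np.eye(n_classes)[yl]           # hard labels for the labeled data
    prior, mu, var = fit(Xl, Rl)         # initialize from labeled data only
    for _ in range(n_iter):
        Ru = resp(Xu, prior, mu, var)    # E-step on the unlabeled data
        R = np.vstack([Rl, lam * Ru])    # unlabeled rows carry weight lam
        prior, mu, var = fit(np.vstack([Xl, Xu]), R)
    return prior, mu, var

# Illustrative synthetic data: few labeled, many unlabeled examples.
rng = np.random.default_rng(0)
Xl = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
yl = np.array([0] * 20 + [1] * 20)
Xu = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
prior, mu, var = semisupervised_em(Xl, yl, Xu, lam=0.1)
print("estimated class means:\n", mu)
```

Setting lam = 1 recovers basic EM, while lam < 1 downweights the unlabeled data when the modeling assumptions are suspect, mirroring the augmented EM behavior described above.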
Satellite servicing mission preliminary cost estimation model
NASA Technical Reports Server (NTRS)
1987-01-01
The cost model presented is a preliminary methodology for determining a rough order-of-magnitude cost for implementing a satellite servicing mission. Mission implementation, in this context, encompasses all activities associated with mission design and planning, including both flight and ground crew training and systems integration (payload processing) of servicing hardware with the Shuttle. A basic assumption made in developing this cost model is that a generic set of servicing hardware was developed and flight tested, is inventoried, and is maintained by NASA. This implies that all hardware physical and functional interfaces are well known and therefore recurring CITE testing is not required. The development of the cost model algorithms and examples of their use are discussed.
Ketcham, Jonathan D; Kuminoff, Nicolai V; Powers, Christopher A
2016-12-01
Consumers' enrollment decisions in Medicare Part D can be explained by Abaluck and Gruber’s (2011) model of utility maximization with psychological biases or by a neoclassical version of their model that precludes such biases. We evaluate these competing hypotheses by applying nonparametric tests of utility maximization and model validation tests to administrative data. We find that 79 percent of enrollment decisions from 2006 to 2010 satisfied basic axioms of consumer theory under the assumption of full information. The validation tests provide evidence against widespread psychological biases. In particular, we find that precluding psychological biases improves the structural model's out-of-sample predictions for consumer behavior.
Sorption of small molecules in polymeric media
NASA Astrophysics Data System (ADS)
Camboni, Federico; Sokolov, Igor M.
2016-12-01
We discuss the sorption of penetrant molecules from the gas phase by a polymeric medium within a model which is very close in spirit to the dual sorption mode model: the penetrant molecules are partly dissolved within the polymeric matrix and partly fill the preexisting voids. The only difference from the initial dual sorption mode situation is the assumption that the two populations of molecules are in equilibrium with each other. Applying basic thermodynamic principles we obtain the dependence of the penetrant concentration on the pressure in the gas phase and find that this is expressed via the Lambert W-function, a different functional form than the one proposed by the dual sorption mode model. The Lambert-like isotherms appear universally at low and moderate pressures and originate from the assumption that the internal energy in a polymer-penetrant-void ternary mixture is (in the lowest order) a bilinear form in the concentrations of the three components. Fitting the existing data shows that in the domain of parameters where the dual sorption mode model is typically applied, the Lambert function, which describes the same behavior as the one proposed by the gas-polymer matrix model, fits the data equally well.
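Because SciPy exposes the Lambert W function directly, a Lambert-type isotherm is easy to evaluate or fit. The parameterization below is purely illustrative (the paper's exact expression relating concentration to pressure is not reproduced here), but it shows the qualitative shape: linear, Henry-like behavior at low pressure and slower growth at higher pressure.

```python
import numpy as np
from scipy.special import lambertw

def lambert_isotherm(p, a, b):
    """Illustrative Lambert-type isotherm: c(p) = W(a * p) / b on the principal branch."""
    return np.real(lambertw(a * p)) / b   # real-valued for a * p >= 0

pressures = np.linspace(0.0, 10.0, 6)      # arbitrary units
print(lambert_isotherm(pressures, a=2.0, b=0.5))
```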
Quid pro quo: a mechanism for fair collaboration in networked systems.
Santos, Agustín; Fernández Anta, Antonio; López Fernández, Luis
2013-01-01
Collaboration may be understood as the execution of coordinated tasks (in the most general sense) by groups of users who cooperate to achieve a common goal. Collaboration is a fundamental assumption and requirement for the correct operation of many communication systems. The main challenge when creating collaborative systems in a decentralized manner is dealing with the fact that users may behave in selfish ways, trying to obtain the benefits of the tasks without participating in their execution. In this context, Game Theory has been instrumental in modeling collaborative systems and the task allocation problem, and in designing mechanisms for optimal allocation of tasks. In this paper, we revise the classical assumptions of these models and propose a new approach to this problem. First, we establish a system model based on heterogeneous nodes (users, players), and propose a basic distributed mechanism so that, when a new task appears, it is assigned to the most suitable node. The classical technique for compensating a node that executes a task is the use of payments (which in most networks are hard or impossible to implement). Instead, we propose a distributed mechanism for the optimal allocation of tasks without payments. We prove this mechanism to be robust even in the presence of independent selfish or rationally limited players. Additionally, our model is based on very weak assumptions, which makes the proposed mechanisms amenable to implementation in networked systems (e.g., the Internet).
Dark energy cosmology with tachyon field in teleparallel gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motavalli, H., E-mail: Motavalli@Tabrizu.ac.ir; Akbarieh, A. Rezaei; Nasiry, M.
2016-07-15
We construct a tachyon teleparallel dark energy model for a homogeneous and isotropic flat universe in which a tachyon, as a non-canonical scalar field, is non-minimally coupled to gravity in the framework of teleparallel gravity. The explicit forms of the potential and coupling functions are obtained under the assumption that the Lagrangian admits the Noether symmetry approach. The dynamical behavior of the basic cosmological observables is compared to recent observational data, which implies that the tachyon field may serve as a candidate for dark energy.
An interactive quality of work life model applied to organizational transition.
Knox, S; Irving, J A
1997-01-01
Most healthcare organizations in the United States are in the process of some type of organizational change or transition. Professional nurses and other healthcare providers practicing in U.S. healthcare delivery organizations are very aware of the dramatic effects of restructuring processes. A phenomenal amount of change and concern is occurring with organizational redesign, generating many questions and uncertainties. These transitions challenge the basic assumptions and principles guiding the practice of clinical and management roles in healthcare.
2006-07-01
and methamphetamine Our basic assumption is that protective treatments alter both post-translational and translational events so as to reduce the...impact of voluntary running on trophic factor levels and the neurotoxic effects of 6-OHDA. Reportable Outcomes: • Like exercise, GDNF protects DA...also protects against the increased vulnerability to toxins caused by other stressors; and (4) the generality of our results with 6-OHDA to other
Automatic item generation implemented for measuring artistic judgment aptitude.
Bezruczko, Nikolaus
2014-01-01
Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.
Intellectualizing Adult Basic Literacy Education: A Case Study
ERIC Educational Resources Information Center
Bradbury, Kelly S.
2012-01-01
At a time when accusations of American ignorance and anti-intellectualism are ubiquitous, this article challenges problematic assumptions about intellectualism that overlook the work of adult basic literacy programs and proposes an expanded view of intellectualism. It is important to recognize and to challenge narrow views of intellectualism…
Adult Literacy Programs: Guidelines for Effectiveness.
ERIC Educational Resources Information Center
Lord, Jerome E.
This report is a summary of information from both research and experience about the assumptions and practices that guide successful basic skills programs. The 31 guidelines are basic to building a solid foundation on which effective instructional programs for adults can be developed. The first six guidelines address some important characteristics…
Social Studies Curriculum Guidelines.
ERIC Educational Resources Information Center
Manson, Gary; And Others
These guidelines, which set standards for social studies programs K-12, can be used to update existing programs or may serve as a baseline for further innovation. The first section, "A Basic Rationale for Social Studies Education," identifies the theoretical assumptions basic to the guidelines as knowledge, thinking, valuing, social participation,…
Modeling Spatial Dependencies and Semantic Concepts in Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju
Data mining is the process of discovering new patterns and relationships in large datasets. However, several studies have shown that general data mining techniques often fail to extract meaningful patterns and relationships from spatial data owing to the violation of fundamental geospatial principles. In this tutorial, we introduce basic principles behind explicit modeling of spatial and semantic concepts in data mining. In particular, we focus on modeling these concepts in the widely used classification, clustering, and prediction algorithms. Classification is the process of learning a structure or model (from user-given inputs) and applying the known model to new data. Clustering is the process of discovering groups and structures in the data that are "similar," without applying any known structures in the data. Prediction is the process of finding a function that models (explains) the data with least error. One common assumption among all these methods is that the data are independent and identically distributed. Such assumptions do not hold well in spatial data, where spatial dependency and spatial heterogeneity are the norm. In addition, spatial semantics are often ignored by data mining algorithms. In this tutorial we cover recent advances in explicit modeling of spatial dependencies and semantic concepts in data mining.
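One way to see why the i.i.d. assumption fails for spatial data is to measure spatial autocorrelation directly, for example with Moran's I. The sketch below (numpy only; the grid, rook-contiguity weights, and synthetic field are illustrative choices, not part of the tutorial) contrasts a spatially smoothed field with pure noise.

```python
import numpy as np

def morans_I(values, weights):
    """Moran's I = (n / sum(w)) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    x = values - values.mean()
    n = len(values)
    return (n / weights.sum()) * (x @ weights @ x) / (x @ x)

# Rook-contiguity weights for a 10x10 grid (illustrative).
side = 10
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0

rng = np.random.default_rng(1)
smooth = rng.normal(size=(side, side))
smooth = (smooth + np.roll(smooth, 1, 0) + np.roll(smooth, 1, 1)) / 3  # spatially correlated field
print("Moran's I (correlated field):", morans_I(smooth.ravel(), W))
print("Moran's I (i.i.d. noise):    ", morans_I(rng.normal(size=n), W))
```

A value of I well above its null expectation of about -1/(n-1) signals the spatial dependency that classification, clustering, and prediction algorithms assuming i.i.d. data will ignore.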
Valentine, Julie L
2014-01-01
An evaluation of the Integrated Practice Model for Forensic Nursing Science is presented utilizing the methods outlined by Meleis. A brief review of nursing theory basics and evaluation methods by Meleis is provided to enhance understanding of the ensuing theoretical evaluation and critique. The Integrated Practice Model for Forensic Nursing Science, created by forensic nursing pioneer Virginia Lynch, captures the theories, assumptions, concepts, and propositions inherent in forensic nursing practice and science. The historical background of the theory is explored as Lynch's model launched the role development of forensic nursing practice as both a nursing and forensic science specialty. It is derived from a combination of nursing, sociological, and philosophical theories to reflect the grounding of forensic nursing in the nursing, legal, psychological, and scientific communities. As Lynch's model is the first inception of forensic nursing theory, it is representative of a conceptual framework although the title implies a practice theory. The clarity and consistency displayed in the theory's structural components of assumptions, concepts, and propositions are analyzed. The model is described and evaluated. A summary of the strengths and limitations of the model is compiled, followed by application to practice, education, and research with suggestions for ongoing theory development.
PTSD as Meaning Violation: Testing a Cognitive Worldview Perspective.
Park, Crystal L; Mills, Mary Alice; Edmondson, Donald
2012-01-01
The cognitive perspective on post-traumatic stress disorder (PTSD) has been successful in explaining many PTSD-related phenomena and in developing effective treatments, yet some of its basic assumptions remain surprisingly under-examined. The present study tested two of these assumptions: (1) situational appraisals of the event as violating global meaning (i.e., beliefs and goals) is related to PTSD symptomatology, and (2) the effect of situational appraisals of violation on PTSD symptomatology is mediated by global meaning (i.e., views of self and world). We tested these assumptions in a cross-sectional study of 130 college students who had experienced a DSM-IV level trauma. Structural equation modeling showed that appraisals of the extent to which the trauma violated one's beliefs and goals related fairly strongly to PTSD. In addition, the effects of appraisals of belief and goal violations on PTSD symptoms were fully mediated through negative global beliefs about both the self and the world. These findings support the cognitive worldview perspective, highlighting the importance of the meaning individuals assign to traumatic events, particularly the role of meaning violation.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1981-01-01
Lubricants, usually Newtonian fluids, are assumed to experience laminar flow. The basic equations used to describe the flow are the Navier-Stokes equations of motion. The study of hydrodynamic lubrication is, from a mathematical standpoint, the application of a reduced form of these Navier-Stokes equations in association with the continuity equation. The Reynolds equation can also be derived from first principles, provided of course that the same basic assumptions are adopted in each case. Both methods are used in deriving the Reynolds equation, and the assumptions inherent in reducing the Navier-Stokes equations are specified. Because the Reynolds equation contains viscosity and density terms and these properties depend on temperature and pressure, it is often necessary to couple the Reynolds equation with the energy equation. The lubricant properties and the energy equation are presented. Film thickness, a parameter of the Reynolds equation, is a function of the elastic behavior of the bearing surface. The governing elasticity equation is therefore presented.
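For reference, the reduced form referred to above is conventionally written as the Reynolds equation below; this is the standard textbook statement (film thickness h, pressure p, viscosity η, density ρ, surface velocities u_a and u_b), given here as a generic result rather than quoted from the chapter.

```latex
\frac{\partial}{\partial x}\!\left(\frac{\rho h^{3}}{12\eta}\,\frac{\partial p}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(\frac{\rho h^{3}}{12\eta}\,\frac{\partial p}{\partial y}\right)
= \frac{\partial}{\partial x}\!\left(\rho h\,\frac{u_a + u_b}{2}\right)
+ \frac{\partial (\rho h)}{\partial t}
```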
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and conclude on their collective findings. The literature was searched for economic evaluations of influenza vaccination injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and sensitivity of outcomes to changes in parameter values were explored using a decision model. Areas to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with a proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given stated misgivings, the methods of estimating influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm they are accurate, reliable and appropriate to economic evaluation purposes.
Statistical Issues for Uncontrolled Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark
2008-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
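The "simple Kepler orbit" prediction for ground footprints can be made concrete. For a circular orbit of inclination i with the argument of latitude randomized, the sub-satellite latitude follows the standard density p(lat) = cos(lat) / (pi * sqrt(sin^2 i - sin^2 lat)), which peaks toward the extreme latitudes. The sketch below (numpy only; the inclination is an illustrative value and this is the textbook result, not the paper's full model with perturbations) compares that density with a simulated histogram.

```python
import numpy as np

def latitude_density(lat_deg, inc_deg):
    """Standard circular-orbit result: p(lat) = cos(lat) / (pi * sqrt(sin(inc)^2 - sin(lat)^2))."""
    lat, inc = np.radians(lat_deg), np.radians(inc_deg)
    return np.cos(lat) / (np.pi * np.sqrt(np.sin(inc)**2 - np.sin(lat)**2))

inc = 51.6                                           # illustrative inclination in degrees
rng = np.random.default_rng(2)
u = rng.uniform(0.0, 2.0 * np.pi, 100000)            # argument of latitude, uniform for a circular orbit
lats = np.degrees(np.arcsin(np.sin(np.radians(inc)) * np.sin(u)))

hist, edges = np.histogram(lats, bins=np.arange(-50, 51, 10), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers, hist):
    analytic = latitude_density(c, inc) * np.pi / 180.0   # convert density to per-degree
    print(f"lat {c:+5.1f} deg: simulated {h:.4f}  analytic {analytic:.4f}")
```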
NASA Astrophysics Data System (ADS)
Cazzani, Antonio; Malagù, Marcello; Turco, Emilio
2016-03-01
We illustrate a numerical tool for analyzing plane arches such as those frequently used in historical masonry heritage. It is based on a refined elastic mechanical model derived from the isogeometric approach. In particular, geometry and displacements are modeled by means of non-uniform rational B-splines. After a brief introduction, outlining the basic assumptions of this approach and the corresponding modeling choices, several numerical applications to arches, which are typical of masonry structures, show the performance of this novel technique. These are discussed in detail to emphasize the advantage and potential developments of isogeometric analysis in the field of structural analysis of historical masonry buildings with complex geometries.
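Since the abstract notes that both geometry and displacements are represented with non-uniform rational B-splines, a brief reminder of how such basis functions are evaluated may be useful. The sketch implements the standard Cox-de Boor recursion in plain Python; the knot vector, degree, and weights are illustrative, not taken from the paper.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_basis(p, t, knots, weights):
    """Rational (NURBS) basis: weighted B-splines normalized to sum to one."""
    raw = [w * bspline_basis(i, p, t, knots) for i, w in enumerate(weights)]
    s = sum(raw)
    return [r / s for r in raw]

knots = [0, 0, 0, 0.5, 1, 1, 1]        # open knot vector, degree 2, four control points (illustrative)
print(nurbs_basis(2, 0.25, knots, weights=[1.0, 1.0, 2.0, 1.0]))
```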
A constitutive model for AS4/PEEK thermoplastic composites under cyclic loading
NASA Technical Reports Server (NTRS)
Rui, Yuting; Sun, C. T.
1990-01-01
Based on the basic and essential features of the elastic-plastic response of the AS4/PEEK thermoplastic composite subjected to off-axis cyclic loadings, a simple rate-independent constitutive model is proposed to describe the orthotropic material behavior for cyclic loadings. A one-parameter memory surface is introduced to distinguish the virgin deformation and the subsequent deformation process and to characterize the loading range effect. Cyclic softening is characterized by the change of generalized plastic modulus. By the vanishing yield surface assumption, a yield criterion is not needed and it is not necessary to consider loading and unloading separately. The model is compared with experimental results and good agreement is obtained.
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
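The flavor of this model is easy to reproduce: limit orders, market orders, and cancellations arrive as independent Poisson streams on a price grid, and quantities such as the bid-ask spread emerge from the bookkeeping alone. The sketch below is a much-simplified zero-intelligence simulation on a finite grid; the rates, grid size, and seeding of the book are illustrative choices, not the calibrated parameters or exact boundary conventions of the paper.

```python
import random
from collections import defaultdict

random.seed(0)
GRID = range(1, 201)                      # price grid in ticks (illustrative)
alpha, mu, delta = 0.5, 5.0, 0.02         # limit-order rate per price, market-order rate, cancel rate per order

bids, asks = defaultdict(int), defaultdict(int)   # price -> number of resting orders
bids[90] += 5
asks[110] += 5                                    # seed the book

def best_bid(): return max((p for p, q in bids.items() if q > 0), default=None)
def best_ask(): return min((p for p, q in asks.items() if q > 0), default=None)

spreads = []
for _ in range(20000):
    n_orders = sum(bids.values()) + sum(asks.values())
    # event types: limit buy, limit sell, market buy, market sell, cancellation
    rates = [alpha * len(GRID), alpha * len(GRID), mu, mu, delta * n_orders]
    event = random.choices(range(5), weights=rates)[0]
    bb, ba = best_bid(), best_ask()
    if event == 0:                                # limit buy strictly below the best ask
        prices = [p for p in GRID if ba is None or p < ba]
        if prices:
            bids[random.choice(prices)] += 1
    elif event == 1:                              # limit sell strictly above the best bid
        prices = [p for p in GRID if bb is None or p > bb]
        if prices:
            asks[random.choice(prices)] += 1
    elif event == 2 and ba is not None:           # market buy removes a share at the best ask
        asks[ba] -= 1
    elif event == 3 and bb is not None:           # market sell removes a share at the best bid
        bids[bb] -= 1
    elif event == 4 and n_orders > 0:             # cancel a uniformly chosen resting order
        side = bids if random.random() < sum(bids.values()) / n_orders else asks
        p = random.choices(list(side), weights=list(side.values()))[0]
        side[p] -= 1
    nb, na = best_bid(), best_ask()
    if nb is not None and na is not None:
        spreads.append(na - nb)

print("mean bid-ask spread (ticks):", sum(spreads) / len(spreads))
```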
[A reflection about organizational culture according to psychoanalysis' view].
Cardoso, Maria Lúcia Alves Pereira
2008-01-01
This article offers a reflection on the universal presuppositions of human culture proposed by Freud as a basis for analyzing the presuppositions of organizational culture according to Schein. In an article published in 1984, the latter claims that in order to decipher organizational culture one cannot rely on (visible) artifacts or on (perceptible) values, but should take a deeper plunge and identify the basic assumptions underlying organizational culture. Such presuppositions extend into the study of the individual inner self, within the sphere of psychoanalysis. We have therefore examined Freud's basic assumptions about human culture in order to ascertain their conformity with the paradigms of organizational culture as proposed by Schein.
Thermodynamic Properties of Low-Density ¹³²Xe Gas in the Temperature Range 165-275 K
NASA Astrophysics Data System (ADS)
Akour, Abdulrahman
2018-01-01
The method of static fluctuation approximation was used to calculate selected thermodynamic properties (internal energy, entropy, energy capacity, and pressure) for xenon in a particularly low-temperature range (165-270 K) under different conditions. This integrated microscopic study started from an initial basic assumption as the main input. The basic assumption in this method was to replace the local field operator with its mean value, then numerically solve a closed set of nonlinear equations using an iterative method, considering the Hartree-Fock B2-type dispersion potential as the most appropriate potential for xenon. The results are in very good agreement with those of an ideal gas.
ERIC Educational Resources Information Center
Ngai, Courtney; Sevian, Hannah; Talanquer, Vicente
2014-01-01
Given the diversity of materials in our surroundings, one should expect scientifically literate citizens to have a basic understanding of the core ideas and practices used to analyze chemical substances. In this article, we use the term 'chemical identity' to encapsulate the assumptions, knowledge, and practices upon which chemical…
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the usage of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models, and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of measurement data. Since the H-infinity filter seeks minimax (minimum worst-case error) estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.
A simplified rotor system mathematical model for piloted flight dynamics simulation
NASA Technical Reports Server (NTRS)
Chen, R. T. N.
1979-01-01
The model was developed for real-time pilot-in-the-loop investigation of helicopter flying qualities. The mathematical model included the tip-path plane dynamics and several primary rotor design parameters, such as flapping hinge restraint, flapping hinge offset, blade Lock number, and pitch-flap coupling. The model was used in several exploratory studies of the flying qualities of helicopters with a variety of rotor systems. The basic assumptions used and the major steps involved in the development of the set of equations listed are described. The equations consisted of the tip-path plane dynamic equation, the equations for the main rotor forces and moments, and the equation for control phasing required to achieve decoupling in pitch and roll due to cyclic inputs.
A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents
NASA Technical Reports Server (NTRS)
Rosen, J. M.; Hofmann, D. J.
1977-01-01
A one dimensional steady state stratospheric aerosol model is developed that considers the subsequent perturbations caused by including the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were made: in one, enough simplifying assumptions were introduced so that a more or less exact solution to the descriptive equations could be obtained; in the other approach very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.
Genital Measures: Comments on Their Role in Understanding Human Sexuality
ERIC Educational Resources Information Center
Geer, James H.
1976-01-01
This paper discusses the use of genital measures in the study of both applied and basic work in human sexuality. Some of the advantages of psychophysiological measures are considered along with cautions concerning unwarranted assumptions. Some of the advances that are possible in both applied and basic work are examined. (Author)
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
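The data-generating process behind the basic Poisson N-mixture model is straightforward to simulate, which is also a convenient way to see why unmodeled heterogeneity is hard to diagnose from raw counts. The sketch below (numpy only; all parameter values are illustrative and no model fitting is attempted) generates count data under met versus violated detection assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_visits, lam = 200, 4, 5.0                # sample units, survey occasions, mean abundance

def simulate(p_detect):
    """Counts y[i, j] ~ Binomial(N_i, p_ij) with site abundances N_i ~ Poisson(lam)."""
    N = rng.poisson(lam, size=n_sites)
    p = np.broadcast_to(p_detect, (n_sites, n_visits))
    return N, rng.binomial(N[:, None], p)

# Assumptions met: detection probability constant across sites and visits.
N1, y1 = simulate(0.5)
# Assumptions violated: unmodeled site-level heterogeneity in detection (mean still 0.5).
p_het = rng.beta(2, 2, size=(n_sites, 1))
N2, y2 = simulate(p_het)

print("true mean abundance:", lam)
print("mean observed count, homogeneous p  :", y1.mean())
print("mean observed count, heterogeneous p:", y2.mean())
```

The two data sets produce similar naive summaries even though one violates the constant-detection assumption, which is exactly the situation in which the authors report substantial bias in estimated mean abundance.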
Sustainability assessment through analogical models: The approach of aerobic living-organism
NASA Astrophysics Data System (ADS)
Dassisti, Michele
2014-10-01
Most scientific discoveries borrow ideas and inspiration from nature. This observation provides the rationale for the sustainability assessment approach presented here, based on the aerobic living organism (ALO) analogy previously developed by the author, which rests on the basic assumption that it is reasonable and effective, for several decision-making purposes, to draw an analogy between a system organized by humans (say, a manufacturing system or an enterprise) and an aerobic living organism. The previously developed ALO conceptual model is critically reviewed here through the example of a small Italian enterprise manufacturing metal components for civil furniture, in order to assess its feasibility for sustainability appraisal.
Actin-based propulsion of a microswimmer.
Leshansky, A M
2006-07-01
A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.
1982-03-01
to preference types, and uses capacity estimation; therefore, it is basically a good system for recreation and resource inventory and classification...quantity, and distribution of recreational resources. Its basic unit of inventory is landform, or the homogeneity of physical features used to...by Clark and Stankey, "the basic assumption underlying the ROS is that quality recreational experiences are best assured by providing a diverse set of
A New Look into the Effect of Large Drops on Radiative Transfer Process
NASA Technical Reports Server (NTRS)
Marshak, Alexander
2003-01-01
Recent studies indicate that a cloudy atmosphere absorbs more solar radiation than any current 1D or 3D radiation model can predict. The excess absorption is not large, perhaps 10-15 W/sq m or less, but any such systematic bias is of concern since radiative transfer models are assumed to be sufficiently accurate for remote sensing applications and climate modeling. The most natural explanation would be that models do not capture real 3D cloud structure and, as a consequence, their photon path lengths are too short. However, extensive calculations, using increasingly realistic 3D cloud structures, failed to produce photon paths long enough to explain the excess absorption. Other possible explanations have also been unsuccessful so, at this point, conventional models seem to offer no solution to this puzzle. The weakest link in conventional models is the way a size distribution of cloud particles is mathematically handled. Basically, real particles are replaced with a single average particle. This "ensemble assumption" assumes that all particle sizes are well represented in any given elementary volume. But the concentration of larger particles can be so low that this assumption is significantly violated. We show how a different mathematical route, using the concept of a cumulative distribution, avoids the ensemble assumption. The cumulative distribution has jumps, or steps, corresponding to the rarer sizes. These jumps result in an additional term, a kind of Green's function, in the solution of the radiative transfer equation. Solving the cloud radiative transfer equation with the measured particle distributions, described in a cumulative rather than an ensemble fashion, may lead to increased cloud absorption of the magnitude observed.
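The scale of the problem with the ensemble assumption can be illustrated with a one-line Poisson argument: if large drops occur at a low number concentration n, the probability that an elementary volume V contains none of them is exp(-nV), so many volumes see a drop-size distribution whose large-drop tail is entirely absent. The concentration and volumes below are illustrative placeholders, not measured values.

```python
import numpy as np

n_large = 1.0e3                                # number concentration of large drops per cubic metre (illustrative)
volumes = np.array([1e-6, 1e-4, 1e-2, 1.0])    # elementary volumes in cubic metres

p_empty = np.exp(-n_large * volumes)           # Poisson probability of zero large drops in a volume
for V, p in zip(volumes, p_empty):
    print(f"V = {V:g} m^3: fraction of volumes with no large drops = {p:.3f}")
```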
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit for NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head-to-head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of the basic assumptions underlying indirect and mixed treatment comparison is crucial to resolving these methodological problems.
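The adjusted indirect comparison used in most of the surveyed reviews (the classic frequentist, Bucher-style approach) reduces, for trials sharing a common comparator C, to a difference of effect estimates with their variances added. A minimal sketch with illustrative log odds ratios and standard errors follows.

```python
import math

def adjusted_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Bucher-style indirect estimate of A vs B from A vs C and B vs C trial results."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Illustrative inputs: log odds ratios and standard errors from two sets of trials.
print(adjusted_indirect_comparison(d_ac=-0.40, se_ac=0.15, d_bc=-0.10, se_bc=0.20))
```

The similarity assumption discussed above is precisely what justifies treating the two inputs as comparable; the arithmetic itself does nothing to check it.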
The basic aerodynamics of floatation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, M.J.; Wood, D.H.
1983-09-01
The original derivation of the basic theory governing the aerodynamics of both hovercraft and modern floatation ovens, requires the validity of some extremely crude assumptions. However, the basic theory is surprisingly accurate. It is shown that this accuracy occurs because the final expression of the basic theory can be derived by approximating the full Navier-Stokes equations in a manner that clearly shows the limitations of the theory. These limitations are used in discussing the relatively small discrepancies between the theory and experiment, which may not be significant for practical purposes.
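For orientation, the simplest textbook statement of hovercraft lift is the plenum-chamber model; whether this matches the exact expression analyzed by the authors is our assumption, so the relations below are offered only as the standard reference point, with W the supported weight, A the cushion area, ρ the air density, D_c the cushion perimeter, h the gap height, v_e the escape velocity, Q the volume flow, and P the ideal lift power.

```latex
p_c = \frac{W}{A}, \qquad
v_e = \sqrt{\frac{2 p_c}{\rho}}, \qquad
Q = D_c\, h\, v_e, \qquad
P = p_c\, Q
```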
Analysis of a general SIS model with infective vectors on the complex networks
NASA Astrophysics Data System (ADS)
Juang, Jonq; Liang, Yu-Hao
2015-11-01
A general SIS model with infective vectors on complex networks is studied in this paper. In particular, the model considers the linear combination of three possible routes of disease propagation between infected and susceptible individuals as well as two possible transmission types which describe how the susceptible vectors attack the infected individuals. A new technique based on the basic reproduction matrix is introduced to obtain the following results. First, necessary and sufficient conditions are obtained for the global stability of the model through a unified approach. As a result, we are able to produce the exact basic reproduction number and the precise epidemic thresholds with respect to three spreading strengths, the curing strength or the immunization strength all at once. Second, the monotonicity of the basic reproduction number and the above mentioned epidemic thresholds with respect to all other parameters can be rigorously characterized. Finally, we are able to compare the effectiveness of various immunization strategies under the assumption that the number of persons getting vaccinated is the same for all strategies. In particular, we prove that in the scale-free networks, both targeted and acquaintance immunizations are more effective than uniform and active immunizations and that active immunization is the least effective strategy among those four. We are also able to determine how the vaccine should be used at minimum to control the outbreak of the disease.
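Although the model above is richer (three propagation routes, infective vectors, and several thresholds), the basic flavor of a network epidemic threshold can be shown with the standard mean-field SIS criterion: infection persists only if the spreading-to-curing ratio exceeds the inverse of the largest adjacency eigenvalue. The sketch below applies that textbook criterion to a small random graph; it is not the basic reproduction matrix machinery of the paper, and the graph and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = (rng.random((n, n)) < 0.03).astype(float)   # Erdos-Renyi adjacency matrix (illustrative)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops

lam_max = np.linalg.eigvalsh(A).max()           # largest adjacency eigenvalue
beta, gamma = 0.05, 0.2                         # per-contact infection rate, curing rate

threshold = 1.0 / lam_max
print(f"largest eigenvalue: {lam_max:.2f}, threshold for beta/gamma: {threshold:.3f}")
print("epidemic persists (mean-field)" if beta / gamma > threshold else "disease dies out (mean-field)")
```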
A generating function approach to HIV transmission with dynamic contact rates
Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.
2014-04-24
The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease eliminations from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases, both R0 and the probability of an epidemic also decrease.
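The generating-function logic can be illustrated compactly: if g(s) is the probability generating function of the number of transmissions from a newly infected person, the extinction probability q of the branching process is the smallest fixed point of q = g(q), the probability of an epidemic is 1 - q, and R0 = g'(1). The offspring distribution below (a negative binomial) is an illustrative choice, not the dynamic-contact-rate model derived in the paper.

```python
import numpy as np
from scipy.stats import nbinom

def epidemic_probability(pmf, max_iter=1000, tol=1e-12):
    """Extinction probability solves q = g(q); the probability of an epidemic is 1 - q."""
    k = np.arange(len(pmf))
    g = lambda s: np.sum(pmf * s**k)          # probability generating function
    q = 0.0
    for _ in range(max_iter):
        q_new = g(q)
        if abs(q_new - q) < tol:
            break
        q = q_new
    R0 = np.sum(k * pmf)                      # g'(1) = mean of the offspring distribution
    return R0, 1.0 - q

# Illustrative offspring distribution: negative binomial with mean 2 and dispersion 0.5.
mean, disp = 2.0, 0.5
p = disp / (disp + mean)
pmf = nbinom.pmf(np.arange(200), disp, p)
print(epidemic_probability(pmf))
```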
Poulin, Robert; Lagrue, Clément
2017-01-01
The spatial distribution of individuals of any species is a basic concern of ecology. The spatial distribution of parasites matters to control and conservation of parasites that affect human and nonhuman populations. This paper develops a quantitative theory to predict the spatial distribution of parasites based on the distribution of parasites in hosts and the spatial distribution of hosts. Four models are tested against observations of metazoan hosts and their parasites in littoral zones of four lakes in Otago, New Zealand. These models differ in two dichotomous assumptions, constituting a 2 × 2 theoretical design. One assumption specifies whether the variance function of the number of parasites per host individual is described by Taylor's law (TL) or the negative binomial distribution (NBD). The other assumption specifies whether the numbers of parasite individuals within each host in a square meter of habitat are independent or perfectly correlated among host individuals. We find empirically that the variance–mean relationship of the numbers of parasites per square meter is very well described by TL but is not well described by NBD. Two models that posit perfect correlation of the parasite loads of hosts in a square meter of habitat approximate observations much better than two models that posit independence of parasite loads of hosts in a square meter, regardless of whether the variance–mean relationship of parasites per host individual obeys TL or NBD. We infer that high local interhost correlations in parasite load strongly influence the spatial distribution of parasites. Local hotspots could influence control and conservation of parasites. PMID:27994156
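Taylor's law, one of the two variance functions compared above, states that the variance of counts scales as a power of the mean, variance = a * mean^b, so it is usually fitted as a straight line on log-log axes. The sketch below fits the exponent b to synthetic host-group data (illustrative negative-binomial counts, not the Otago lake measurements).

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative parasite counts for 20 host groups with increasing mean burden.
groups = [rng.negative_binomial(n=1.0, p=1.0 / (1.0 + m), size=50) for m in np.linspace(0.5, 20, 20)]
means = np.array([g.mean() for g in groups])
variances = np.array([g.var(ddof=1) for g in groups])

# Taylor's law: log(variance) = log(a) + b * log(mean)
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted TL exponent b = {b:.2f}, coefficient a = {np.exp(log_a):.2f}")
```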
Disease Extinction Versus Persistence in Discrete-Time Epidemic Models.
van den Driessche, P; Yakubu, Abdul-Aziz
2018-04-12
We focus on discrete-time infectious disease models in populations that are governed by constant, geometric, Beverton-Holt or Ricker demographic equations, and give a method for computing the basic reproduction number, R0. When R0 < 1 and the demographic population dynamics are asymptotically constant or under geometric growth (non-oscillatory), we prove global asymptotic stability of the disease-free equilibrium of the disease models. Under the same demographic assumption, when R0 > 1, we prove uniform persistence of the disease. We apply our theoretical results to specific discrete-time epidemic models that are formulated for SEIR infections, cholera in humans and anthrax in animals. Our simulations show that a unique endemic equilibrium of each of the three specific disease models is asymptotically stable whenever R0 > 1.
Test of the Peierls-Nabarro model for dislocations in silicon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Q.; Joos, B.; Duesbery, M.S.
1995-11-01
We show, using an atomistic model with a Stillinger-Weber potential (SWP), that in the absence of reconstruction, the basic assumption of the Peierls-Nabarro (PN) model that the dislocation core is spread within the glide plane is verified for silicon. The Peierls stress (PS) values obtained from the two models are in quantitative agreement (≈0.3μ) when restoring forces obtained from first-principles generalized stacking-fault energy surfaces are used in the PN model [B. Joos, Q. Ren, and M. S. Duesbery, Phys. Rev. B 50, 5890 (1994)]. The PS was found to be isotropic in the glide plane. Within the SWP model no evidence of dissociation in the shuffle dislocations is found, but glide sets do separate into two partials.
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.
2014-01-01
The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3% of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.
Behavioral assays with mouse models of Alzheimer’s disease: practical considerations and guidelines
Puzzo, Daniela; Lee, Linda; Palmeri, Agostino; Calabrese, Giorgio; Arancio, Ottavio
2014-01-01
In Alzheimer’s disease (AD) basic research and drug discovery, mouse models are essential resources for uncovering biological mechanisms, validating molecular targets and screening potential compounds. Both transgenic and non-genetically modified mouse models enable access to different types of AD-like pathology in vivo. Although there is a wealth of genetic and biochemical studies on proposed AD pathogenic pathways, as a disease that centrally features cognitive failure, the ultimate readout for any interventions should be measures of learning and memory. This is particularly important given the lack of knowledge on disease etiology – assessment by cognitive assays offers the advantage of targeting relevant memory systems without requiring assumptions about pathogenesis. A multitude of behavioral assays are available for assessing cognitive functioning in mouse models, including ones specific for hippocampal-dependent learning and memory. Here we review the basics of available transgenic and non-transgenic AD mouse models and detail three well-established behavioral tasks commonly used for testing hippocampal-dependent cognition in mice – contextual fear conditioning, radial arm water maze and Morris water maze. In particular, we discuss the practical considerations, requirements and caveats of these behavioral testing paradigms. PMID:24462904
A random distribution reacting mixing layer model
NASA Technical Reports Server (NTRS)
Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.
1994-01-01
A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.
Code of Federal Regulations, 2010 CFR
2010-01-01
... EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Employee Deductions and Government Contributions § 841... standards (using dynamic assumptions) and expressed as a level percentage of aggregate basic pay. Normal...
The Not So Common Sense: Differences in How People Judge Social and Political Life.
ERIC Educational Resources Information Center
Rosenberg, Shawn W.
This interdisciplinary book challenges two basic assumptions that orient much contemporary social scientific thinking. Offering theory and empirical research, the book rejects the classic liberal view that people share a basic common sense or rationality; while at the same time, it questions the view of contemporary social theory that meaning is…
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
Elderd, Bret D.; Dwyer, Greg; Dukic, Vanja
2013-01-01
Estimates of a disease’s basic reproductive rate R0 play a central role in understanding outbreaks and planning intervention strategies. In many calculations of R0, a simplifying assumption is that different host populations have effectively identical transmission rates. This assumption can lead to an underestimate of the overall uncertainty associated with R0, which, due to the non-linearity of epidemic processes, may result in a mis-estimate of epidemic intensity and miscalculated expenditures associated with public-health interventions. In this paper, we utilize a Bayesian method for quantifying the overall uncertainty arising from differences in population-specific basic reproductive rates. Using this method, we fit spatial and non-spatial susceptible-exposed-infected-recovered (SEIR) models to a series of 13 smallpox outbreaks. Five outbreaks occurred in populations that had been previously exposed to smallpox, while the remaining eight occurred in Native-American populations that were naïve to the disease at the time. The Native-American outbreaks were close in a spatial and temporal sense. Using Bayesian Information Criterion (BIC), we show that the best model includes population-specific R0 values. These differences in R0 values may, in part, be due to differences in genetic background, social structure, or food and water availability. As a result of these inter-population differences, the overall uncertainty associated with the “population average” value of smallpox R0 is larger, a finding that can have important consequences for controlling epidemics. In general, Bayesian hierarchical models are able to properly account for the uncertainty associated with multiple epidemics, provide a clearer understanding of variability in epidemic dynamics, and yield a better assessment of the range of potential risks and consequences that decision makers face. PMID:24021521
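The BIC-based comparison described above reduces to penalizing each model's maximized log-likelihood by its parameter count. A minimal sketch follows, with hypothetical log-likelihoods and parameter counts standing in for the fitted SEIR variants; only the BIC formula and the comparison logic are the point.

import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values indicate the preferred model."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

n_obs = 13 * 30            # e.g., weekly case counts across 13 outbreaks (hypothetical)
# Hypothetical maximized log-likelihoods from fitting the two SEIR variants
loglik_pooled   = -1450.2  # single shared R0
loglik_specific = -1381.7  # one R0 per outbreak (extra parameters)

bic_pooled   = bic(loglik_pooled,   n_params=4,  n_obs=n_obs)
bic_specific = bic(loglik_specific, n_params=16, n_obs=n_obs)

print(f"BIC, pooled R0:              {bic_pooled:.1f}")
print(f"BIC, population-specific R0: {bic_specific:.1f}")
print("Preferred:", "population-specific R0" if bic_specific < bic_pooled else "pooled R0")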
Dynamics of an epidemic model with quarantine on scale-free networks
NASA Astrophysics Data System (ADS)
Kang, Huiyan; Liu, Kaihui; Fu, Xinchu
2017-12-01
Quarantine strategies are frequently used to control or reduce the transmission risks of epidemic diseases such as SARS, tuberculosis and cholera. In this paper, we formulate a susceptible-exposed-infected-quarantined-recovered model on a scale-free network incorporating the births and deaths of individuals. Considering that the infectivity is related to the degrees of infectious nodes, we introduce quarantined rate as a function of degree into the model, and quantify the basic reproduction number, which is shown to be dependent on some parameters, such as quarantined rate, infectivity and network structures. A theoretical result further indicates the heterogeneity of networks and higher infectivity will raise the disease transmission risk while quarantine measure will contribute to the prevention of epidemic spreading. Meanwhile, the contact assumption between susceptibles and infectives may impact the disease transmission. Furthermore, we prove that the basic reproduction number serves as a threshold value for the global stability of the disease-free and endemic equilibria and the uniform persistence of the disease on the network by constructing appropriate Lyapunov functions. Finally, some numerical simulations are illustrated to perform and complement our analytical results.
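For epidemics on degree-heterogeneous networks, threshold quantities typically involve the ratio of the second to the first moment of the degree distribution, which is why both network structure and quarantine rate enter the basic reproduction number. The sketch below is not the paper's SEIQR expression; it only illustrates, for an assumed generic form in which R0 scales as <k^2>/<k> divided by the total removal rate, how degree heterogeneity inflates the R0-like quantity and how a larger quarantine rate reduces it. All parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Sample degrees from a truncated power law P(k) ~ k^(-gamma), a common scale-free proxy
gamma, k_min, k_max, n_nodes = 2.5, 3, 1000, 100_000
u = rng.random(n_nodes)
degrees = (k_min ** (1 - gamma) + u * (k_max ** (1 - gamma) - k_min ** (1 - gamma))) ** (1 / (1 - gamma))

k_mean = degrees.mean()
k2_mean = (degrees ** 2).mean()

beta = 0.05        # per-contact transmission rate (assumed)
mu = 0.2           # recovery rate (assumed)
for delta in (0.0, 0.1, 0.3):   # quarantine rate (assumed)
    r0_like = beta * k2_mean / (k_mean * (mu + delta))
    print(f"quarantine rate {delta:.1f}: R0-like value = {r0_like:.2f}")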
Walking through the statistical black boxes of plant breeding.
Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin
2016-10-01
The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
A radiosity-based model to compute the radiation transfer of soil surface
NASA Astrophysics Data System (ADS)
Zhao, Feng; Li, Yuguang
2011-11-01
A good understanding of the interactions of electromagnetic radiation with the soil surface is important for further improvement of remote sensing methods. In this paper, a radiosity-based analytical model for soil Directional Reflectance Factor (DRF) distributions was developed and evaluated. The model was specifically dedicated to the study of radiation transfer for soil surfaces under tillage practices. The soil was abstracted as two-dimensional U-shaped or V-shaped geometric structures with periodic macroscopic variations. The roughness of the simulated surfaces was expressed as the ratio of the height to the width of the U- and V-shaped structures. The assumption was made that the shadowing of the soil surface, simulated by U- or V-shaped grooves, has a greater influence on the soil reflectance distribution than the scattering properties of basic soil particles of silt and clay. Another assumption was that the soil is a perfectly diffuse reflector at a microscopic level, which is a prerequisite for the application of the radiosity method. This radiosity-based analytical model was evaluated against a forward Monte Carlo ray-tracing model under the same structural scenes and identical spectral parameters. The statistics of the two models' BRF fitting results for several soil structures under the same conditions showed good agreement. By using the model, the physical mechanism of the soil bidirectional reflectance pattern was revealed.
Baroclinic instability with variable gravity: A perturbation analysis
NASA Technical Reports Server (NTRS)
Giere, A. C.; Fowliss, W. W.; Arias, S.
1980-01-01
Solutions for a quasigeostrophic baroclinic stability problem in which gravity is a function of height were obtained. Curvature and horizontal shear of the basic state flow were omitted and the vertical and horizontal temperature gradients of the basic state were taken as constant. The effect of a variable dielectric body force, analogous to gravity, on baroclinic instability for the design of a spherical, baroclinic model for Spacelab was determined. Such modeling could not be performed in a laboratory on the Earth's surface because the body force could not be made strong enough to dominate terrestrial gravity. A consequence of the body force variation and the preceding assumptions was that the potential vorticity gradient of the basic state vanished. The problem was solved using a perturbation method. The solution gives results which are qualitatively similar to Eady's results for constant gravity; a short wavelength cutoff and a wavelength of maximum growth rate were observed. The averaged values of the basic state indicate that both the wavelength range of the instability and the growth rate at maximum instability are increased. Results indicate that the presence of the variable body force will not significantly alter the dynamics of the Spacelab experiment. The solutions are also relevant to other geophysical fluid flows where gravity is constant but the static stability or Brunt-Vaisala frequency is a function of height.
A multi agent model for the limit order book dynamics
NASA Astrophysics Data System (ADS)
Bartolozzi, M.
2010-11-01
In the present work we introduce a novel multi-agent model with the aim of reproducing the dynamics of a double auction market at a microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noise-driven decision-making process in which their actions are related to a stochastic variable, the market sentiment, which we define as a mixture of public and private information. The model, despite making just a few basic assumptions about the trading strategies of the agents, is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure, related not only to the price movements but also to the deposition of orders in the book.
Correction of aeroheating-induced intensity nonuniformity in infrared images
NASA Astrophysics Data System (ADS)
Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu
2016-05-01
Aeroheating-induced intensity nonuniformity severely degrades the effective performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated by using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
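A minimal sketch of the additive, smoothly varying bias assumption stated above: fit a low-order bivariate polynomial to the image by ordinary least squares and subtract it. The paper's actual estimator uses an isotropic TV model with a half-quadratic penalty and alternating minimization, which is not reproduced here; the synthetic data and polynomial order below are assumptions.

import numpy as np

def polynomial_basis(h, w, order=2):
    """Bivariate polynomial terms x**i * y**j with i + j <= order, on a normalized pixel grid."""
    y, x = np.mgrid[0:h, 0:w]
    x = x / (w - 1)
    y = y / (h - 1)
    cols = [(x ** i) * (y ** j) for i in range(order + 1) for j in range(order + 1 - i)]
    return np.stack([c.ravel() for c in cols], axis=1)

def remove_additive_bias(image, order=2):
    """Least-squares fit of a smooth polynomial bias field; returns (corrected image, bias)."""
    h, w = image.shape
    basis = polynomial_basis(h, w, order)
    coeffs, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    bias = (basis @ coeffs).reshape(h, w)
    return image - bias, bias

# Synthetic example: a random scene plus a smooth aeroheating-like intensity ramp
rng = np.random.default_rng(1)
scene = rng.random((128, 160))
yy, xx = np.mgrid[0:128, 0:160]
true_bias = 0.8 * (xx / 159.0) ** 2 + 0.4 * (yy / 127.0)
corrected, estimated_bias = remove_additive_bias(scene + true_bias)
residual = (estimated_bias - estimated_bias.mean()) - (true_bias - true_bias.mean())
print("RMS error of the recovered bias shape:", float(np.sqrt(np.mean(residual ** 2))))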
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
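The motivation for the compound Poisson model can be seen in a few lines of simulation: incidents arrive as a Poisson process, each incident contributes one or more fatalities, and the resulting case counts are overdispersed relative to a simple Poisson model. The fatality-per-incident distribution used below is invented for illustration.

import numpy as np

rng = np.random.default_rng(42)

n_periods = 10_000        # simulated observation periods
incident_rate = 50.0      # expected incidents per period (assumed)
# Hypothetical distribution of fatalities per incident (mostly single-fatality events)
fatalities = np.array([1, 2, 3, 5])
probs = np.array([0.90, 0.07, 0.02, 0.01])

counts = np.empty(n_periods)
for i in range(n_periods):
    n_incidents = rng.poisson(incident_rate)
    counts[i] = rng.choice(fatalities, size=n_incidents, p=probs).sum()

print("mean cases per period:", counts.mean())
print("variance of cases:    ", counts.var())
print("variance/mean (1.0 for a simple Poisson):", counts.var() / counts.mean())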
On knowing the unconscious: lessons from the epistemology of geometry and space.
Brakel, L A
1994-02-01
Concepts involving unconscious processes and contents are central to any understanding of psychoanalysis. Indeed, the dynamic unconscious is familiar as a necessary assumption of the psychoanalytic method. Using the manner of knowing the geometry of space, including non-ordinary sized space, this paper attempts to demonstrate by analogy the possibility of knowing (and knowing the nature of) unconscious mentation-that of which by definition we cannot be aware; and yet that which constitutes a basic assumption of psychoanalysis. As an assumption of the psychoanalytic method, no amount of data from within the psychoanalytic method can ever provide evidence for the existence of the unconscious, nor for knowing its nature; hence the need for this sort of illustration by analogy. Along the way, three claims are made: (1) Freudian 'secondary process' operating during everyday adult, normal, logical thought can be considered a modernised version of the Kantian categories. (2) Use of models facilitates a generation of outside-the-Kantian-categories possibilities, and also provides a conserving function, as outside-the-categories possibilities can be assimilated. (3) Transformations are different from translations; knowledge of transformations can provide non-trivial knowledge about various substrates, otherwise difficult to know.
Area, length and thickness conservation: Dogma or reality?
NASA Astrophysics Data System (ADS)
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
GRMHD Simulations of Visibility Amplitude Variability for Event Horizon Telescope Images of Sgr A*
NASA Astrophysics Data System (ADS)
Medeiros, Lia; Chan, Chi-kwan; Özel, Feryal; Psaltis, Dimitrios; Kim, Junhan; Marrone, Daniel P.; Sądowski, Aleksander
2018-04-01
The Event Horizon Telescope will generate horizon scale images of the black hole in the center of the Milky Way, Sgr A*. Image reconstruction using interferometric visibilities rests on the assumption of a stationary image. We explore the limitations of this assumption using high-cadence disk- and jet-dominated GRMHD simulations of Sgr A*. We also employ analytic models that capture the basic characteristics of the images to understand the origin of the variability in the simulated visibility amplitudes. We find that, in all simulations, the visibility amplitudes for baselines oriented parallel and perpendicular to the spin axis of the black hole follow general trends that do not depend strongly on accretion-flow properties. This suggests that fitting Event Horizon Telescope observations with simple geometric models may lead to a reasonably accurate determination of the orientation of the black hole on the plane of the sky. However, in the disk-dominated models, the locations and depths of the minima in the visibility amplitudes are highly variable and are not related simply to the size of the black hole shadow. This suggests that using time-independent models to infer additional black hole parameters, such as the shadow size or the spin magnitude, will be severely affected by the variability of the accretion flow.
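A toy illustration of the relation underlying the stationarity assumption: for an ideal interferometer, the complex visibility is the two-dimensional Fourier transform of the sky image, so baselines oriented parallel and perpendicular to a chosen axis sample orthogonal cuts through the amplitude of that transform. The image below is an anisotropic Gaussian stand-in, not a GRMHD snapshot, and the pixel scale is arbitrary.

import numpy as np

n = 256                                   # image pixels per side (assumed)
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
image = np.exp(-(x ** 2) / (2 * 20.0 ** 2) - (y ** 2) / (2 * 10.0 ** 2))  # elongated source

visibility = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
amplitude = np.abs(visibility) / np.abs(visibility).max()   # normalize to the zero-baseline flux

center = n // 2
cut_parallel = amplitude[center, center:]        # baselines along the x axis
cut_perpendicular = amplitude[center:, center]   # baselines along the y axis
print("amplitude vs baseline (parallel):     ", np.round(cut_parallel[:5], 3))
print("amplitude vs baseline (perpendicular):", np.round(cut_perpendicular[:5], 3))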
The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions
Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.
2003-01-01
The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have rift-type geotectonic settings. In both regions the strain rates are of the order of 10^-9/yr, and attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. All these observations together may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than stresses accumulated by plate-tectonic loading. The latter model generally underlies basic assumptions made in earthquake hazard assessment, namely that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.
Li, Qiuping; Lin, Yi; Hu, Caiping; Xu, Yinghua; Zhou, Huiya; Yang, Liping; Xu, Yongyong
2016-12-01
The Hospital Anxiety and Depression Scale (HADS) acts as one of the most frequently used self-reported measures in cancer practice. The evidence for construct validity of HADS, however, remains inconclusive. The objective of this study is to evaluate the psychometric properties of the Chinese version HADS (C-HADS) in terms of construct validity, internal consistency reliability, and concurrent validity in dyads of Chinese cancer patients and their family caregivers. This was a cross-sectional study, conducted in multiple centers: one hospital in each of the seven different administrative regions in China from October 2014 to May 2015. A total of 641 dyads, consisting of cancer patients and family caregivers, completed a survey assessing their demographic and background information, anxiety and depression using C-HADS, and quality of life (QOL) using Chinese version SF-12. Data analysis methods included descriptive statistics, confirmatory factor analysis (CFA), and Pearson correlations. Both the two-factor and one-factor models offered the best and adequate fit to the data in cancer patients and family caregivers respectively. The comparison of the two-factor and single-factor models supports the basic assumption of two-factor construct of C-HADS. The overall and two subscales of C-HADS in both cancer patients and family caregivers had good internal consistency and acceptable concurrent validity. The Chinese version of the HADS may be a reliable and valid screening tool, as indicated by its original two-factor structure. The finding supports the basic assumption of two-factor construct of HADS. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Federal Role and Chapter 1: Rethinking Some Basic Assumptions.
ERIC Educational Resources Information Center
Kirst, Michael W.
In the 20 years since the major Federal program for the disadvantaged began, surprisingly little has changed from its original vision. It is now time to question some of the basic policies of Chapter 1 of the Education Consolidation and Improvement Act in view of the change in conceptions about the Federal role and the recent state and local…
ERIC Educational Resources Information Center
Radtke, Jean, Ed.
Developed as a result of an institute on rehabilitation issues, this document is a guide to assistive technology as it affects successful competitive employment outcomes for people with disabilities. Chapter 1 offers basic information on assistive technology including basic assumptions, service provider approaches, options for technology…
Code of Federal Regulations, 2010 CFR
2010-01-01
... for valuation of the System, based on dynamic assumptions. The present value factors are unisex... EMPLOYEES RETIREMENT SYSTEM-BASIC ANNUITY Alternative Forms of Annuities § 842.702 Definitions. In this...
Study of photon strength functions via (γ→, γ', γ″) reactions at the γ3-setup
NASA Astrophysics Data System (ADS)
Isaak, Johann; Savran, Deniz; Beck, Tobias; Gayer, Udo; Krishichayan; Löher, Bastian; Pietralla, Norbert; Scheck, Marcus; Tornow, Werner; Werner, Volker; Zilges, Andreas
2018-05-01
Among the basic ingredients for the modelling of the nucleosynthesis of heavy elements are the so-called photon strength functions and the assumption of the Brink-Axel hypothesis. This hypothesis has been studied for many years by numerous experiments using different and complementary reactions. The present manuscript aims to introduce a model-independent approach to studying photon strength functions via γ-γ coincidence spectroscopy of photoexcited states in 128Te. The experimental results provide evidence that the photon strength function extracted from photoabsorption cross sections is not in overall agreement with the one determined from direct transitions to low-lying excited states.
NASA Astrophysics Data System (ADS)
Brenner, Howard
2011-10-01
Linear irreversible thermodynamic principles are used to demonstrate, by counterexample, the existence of a fundamental incompleteness in the basic pre-constitutive mass, momentum, and energy equations governing fluid mechanics and transport phenomena in continua. The demonstration is effected by addressing the elementary case of steady-state heat conduction (and transport processes in general) occurring in quiescent fluids. The counterexample questions the universal assumption of equality of the four physically different velocities entering into the basic pre-constitutive mass, momentum, and energy conservation equations. Explicitly, it is argued that such equality is an implicit constitutive assumption rather than an established empirical fact of unquestioned authority. Such equality, if indeed true, would require formal proof of its validity, currently absent from the literature. In fact, our counterexample shows the assumption of equality to be false. As the current set of pre-constitutive conservation equations appearing in textbooks are regarded as applicable both to continua and noncontinua (e.g., rarefied gases), our elementary counterexample negating belief in the equality of all four velocities impacts on all aspects of fluid mechanics and transport processes, continua and noncontinua alike.
Transport Phenomena During Equiaxed Solidification of Alloys
NASA Technical Reports Server (NTRS)
Beckermann, C.; deGroh, H. C., III
1997-01-01
Recent progress in modeling of transport phenomena during dendritic alloy solidification is reviewed. Starting from the basic theorems of volume averaging, a general multiphase modeling framework is outlined. This framework allows for the incorporation of a variety of microscale phenomena in the macroscopic transport equations. For the case of diffusion dominated solidification, a simplified set of model equations is examined in detail and validated through comparisons with numerous experimental data for both columnar and equiaxed dendritic growth. This provides a critical assessment of the various model assumptions. Models that include melt flow and solid phase transport are also discussed, although their validation is still at an early stage. Several numerical results are presented that illustrate some of the profound effects of convective transport on the final compositional and structural characteristics of a solidified part. Important issues that deserve continuing attention are identified.
[Application of State Space model in the evaluation of the prevention and control for mumps].
Luo, C; Li, R Z; Xu, Q Q; Xiong, P; Liu, Y X; Xue, F Z; Xu, Q; Li, X J
2017-09-10
Objective: To analyze the epidemiological characteristics of mumps in 2012 and 2014, and to explore the preventive effect of the second dose of mumps-containing vaccine (MuCV) on mumps in Shandong province. Methods: On the basis of certain model assumptions, a State Space model was formulated, and iterated filtering was applied to the epidemic model to estimate the parameters. Results: The basic reproduction number (R0) for children in schools was 4.49 (95% CI: 4.30-4.67) and 2.50 (95% CI: 2.38-2.61) for 2012 and 2014, respectively. Conclusions: The State Space model seems suitable for describing mumps prevalence. The policy of two-dose MuCV can effectively reduce the total number of patients. Children in schools are the key to reducing mumps.
NASA Astrophysics Data System (ADS)
Line, Michael
The field of transiting exoplanet atmosphere characterization has grown considerably over the past decade given the wealth of photometric and spectroscopic data from the Hubble and Spitzer space telescopes. In order to interpret these data, atmospheric models combined with Bayesian approaches are required. From spectra, these approaches permit us to infer fundamental atmospheric properties and how their compositions can relate back to planet formation. However, such approaches must make a wide range of assumptions regarding the physics/parameterizations included in the model atmospheres. There has yet to be a comprehensive investigation exploring how these model assumptions influence our interpretations of exoplanetary spectra. Understanding the impact of these assumptions is especially important since the James Webb Space Telescope (JWST) is expected to invest a substantial portion of its time observing transiting planet atmospheres. It is therefore prudent to optimize and enhance our tools to maximize the scientific return from the revolutionary data to come. The primary goal of the proposed work is to determine the pieces of information we can robustly learn from transiting planet spectra as obtained by JWST and other future, space-based platforms, by investigating commonly overlooked model assumptions. We propose to explore the following effects and how they impact our ability to infer exoplanet atmospheric properties: 1. Stellar/Planetary Uncertainties: Transit/occultation eclipse depths and subsequent planetary spectra are measured relative to their host stars. How do stellar uncertainties, on radius, effective temperature, metallicity, and gravity, as well as uncertainties in the planetary radius and gravity, propagate into the uncertainties on atmospheric composition and thermal structure? Will these uncertainties significantly bias our atmospheric interpretations? Is it possible to use the relative measurements of the planetary spectra to provide additional constraints on the stellar properties? 2. The "1D" Assumption: Atmospheres are inherently three-dimensional. Many exoplanet atmosphere models, especially within retrieval frameworks, assume 1D physics and chemistry when interpreting spectra. How does this "1D" atmosphere assumption bias our interpretation of exoplanet spectra? Do we have to consider global temperature variations such as day-night contrasts or hot spots? What about spatially inhomogeneous molecular abundances and clouds? How will this change our interpretations of phase resolved spectra? 3. Clouds/Hazes: Understanding how clouds/hazes impact transit spectra is absolutely critical if we are to obtain proper estimates of basic atmospheric quantities. How do the assumptions in cloud physics bias our inferences of molecular abundances in transmission? What kind of data (wavelengths, signal-to-noise, resolution) do we need to infer cloud composition, vertical extent, spatial distribution (patchy or global), and size distributions? The proposed work is relevant and timely to the scope of the NASA Exoplanet Research program. The proposed work aims to further develop the critical theoretical modeling tools required to rigorously interpret transiting exoplanet atmosphere data in order to maximize the science return from JWST and beyond. 
This work will serve as a benchmark study for defining the data (wavelength ranges, signal-to-noise ratios, and resolutions) required from a modeling perspective to "characterize exoplanets and their atmospheres in order to inform target and operational choices for current NASA missions, and/or targeting, operational, and formulation data for future NASA observatories". Doing so will allow us to better "understand the chemical and physical processes of exoplanets (their atmospheres)", which will ultimately "improve understanding of the origins of exoplanetary systems" through robust planetary elemental abundance determinations.
ERIC Educational Resources Information Center
Bentz, Robert P.; And Others
The commuter institute is one to which students commute. The two basic assumptions of this study are: (1) the Chicago Circle campus of the University of Illinois will remain a commuter institution during the decade ahead; and (2) the campus will increasingly serve a more heterogeneous student body. These assumptions have important implications for…
NASA Astrophysics Data System (ADS)
LeRoy, S.; Segur, P.; Teyssedre, G.; Laurent, C.
2004-01-01
We present a conduction model aimed at describing bipolar transport and space charge phenomena in low density polyethylene under dc stress. In the first part we recall the basic requirements for the description of charge transport and charge storage in disordered media with emphasis on the case of polyethylene. A quick review of available conduction models is presented and our approach is compared with these models. Then, the bases of the model are described and related assumptions are discussed. Finally, results on external current, trapped and free space charge distributions, field distribution and recombination rate are presented and discussed, considering a constant dc voltage, a step-increase of the voltage, and a polarization-depolarization protocol for the applied voltage. It is shown that the model is able to describe the general features reported for external current, electroluminescence and charge distribution in polyethylene.
A mathematical model of Staphylococcus aureus control in dairy herds.
Zadoks, R. N.; Allore, H. G.; Hagenaars, T. J.; Barkema, H. W.; Schukken, Y. H.
2002-01-01
An ordinary differential equation model was developed to simulate dynamics of Staphylococcus aureus mastitis. Data to estimate model parameters were obtained from an 18-month observational study in three commercial dairy herds. A deterministic simulation model was constructed to estimate values of the basic (R0) and effective (Rt) reproductive number in each herd, and to examine the effect of management on mastitis control. In all herds R0 was below the threshold value 1, indicating control of contagious transmission. Rt was higher than R0 because recovered individuals were more susceptible to infection than individuals without prior infection history. Disease dynamics in two herds were well described by the model. Treatment of subclinical mastitis and prevention of influx of infected individuals contributed to decrease of S. aureus prevalence. For one herd, the model failed to mimic field observations. Explanations for the discrepancy are given in a discussion of current knowledge and model assumptions. PMID:12403116
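A stripped-down sketch of the kind of compartmental ODE model described, reduced to a susceptible-infectious pair with contagious transmission and recovery, so that R0 is the transmission rate over the removal rate. Parameter values are assumptions chosen to reproduce the qualitative finding that R0 < 1 implies fade-out of contagious transmission; they are not the fitted herd parameters.

import numpy as np
from scipy.integrate import solve_ivp

beta = 0.8     # transmission rate (assumed)
alpha = 1.2    # recovery/removal rate via treatment and culling (assumed)
N = 100.0      # herd size

def sis_model(t, y):
    """Susceptible-infectious dynamics; recovered cows return to the susceptible pool."""
    s, i = y
    new_infections = beta * s * i / N
    return [-new_infections + alpha * i, new_infections - alpha * i]

r0 = beta / alpha
sol = solve_ivp(sis_model, [0, 50], [N - 2.0, 2.0])
print(f"R0 = {r0:.2f} -> infection {'persists' if r0 > 1 else 'dies out'} in this simple model")
print("prevalence at t = 50:", sol.y[1, -1])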
A novel approach of modeling continuous dark hydrogen fermentation.
Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos
2018-02-01
In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Is the hypothesis of preimplantation genetic screening (PGS) still supportable? A review.
Gleicher, Norbert; Orvieto, Raoul
2017-03-27
The hypothesis of preimplantation genetic diagnosis (PGS) was first proposed 20 years ago, suggesting that elimination of aneuploid embryos prior to transfer will improve implantation rates of remaining embryos during in vitro fertilization (IVF), increase pregnancy and live birth rates and reduce miscarriages. The aforementioned improved outcome was based on 5 essential assumptions: (i) Most IVF cycles fail because of aneuploid embryos. (ii) Their elimination prior to embryo transfer will improve IVF outcomes. (iii) A single trophectoderm biopsy (TEB) at blastocyst stage is representative of the whole TE. (iv) TE ploidy reliably represents the inner cell mass (ICM). (v) Ploidy does not change (i.e., self-correct) downstream from blastocyst stage. We aim to offer a review of the aforementioned assumptions and challenge the general hypothesis of PGS. We reviewed 455 publications, which as of January 20, 2017 were listed in PubMed under the search phrase < preimplantation genetic screening (PGS) for aneuploidy>. The literature review was performed by both authors who agreed on the final 55 references. Various reports over the last 18 months have raised significant questions not only about the basic clinical utility of PGS but the biological underpinnings of the hypothesis, the technical ability of a single trophectoderm (TE) biopsy to accurately assess an embryo's ploidy, and suggested that PGS actually negatively affects IVF outcomes while not affecting miscarriage rates. Moreover, due to high rates of false positive diagnoses as a consequence of high mosaicism rates in TE, PGS leads to the discarding of large numbers of normal embryos with potential for normal euploid pregnancies if transferred rather than disposed of. We found all 5 basic assumptions underlying the hypothesis of PGS to be unsupported: (i) The association of embryo aneuploidy with IVF failure has to be reevaluated in view how much more common TE mosaicism is than has until recently been appreciated. (ii) Reliable elimination of presumed aneuploid embryos prior to embryo transfer appears unrealistic. (iii) Mathematical models demonstrate that a single TEB cannot provide reliable information about the whole TE. (iv) TE does not reliably reflect the ICM. (v) Embryos, likely, still have strong innate ability to self-correct downstream from blastocyst stage, with ICM doing so better than TE. The hypothesis of PGS, therefore, no longer appears supportable. With all 5 basic assumptions underlying the hypothesis of PGS demonstrated to have been mistaken, the hypothesis of PGS, itself, appears to be discredited. Clinical use of PGS for the purpose of IVF outcome improvements should, therefore, going forward be restricted to research studies.
NASA Technical Reports Server (NTRS)
Perez-Peraza, J.; Alvarez, M.; Laville, A.; Gallegos, A.
1985-01-01
The study of charge changing cross sections of fast ions colliding with matter provides the fundamental basis for the analysis of the charge states produced in such interactions. Given the high degree of complexity of the phenomena, there is no theoretical treatment able to give a comprehensive description. In fact, the involved processes are very dependent on the basic parameters of the projectile, such as velocity, charge state, and atomic number, and on the target parameters, namely the physical state (molecular, atomic or ionized matter) and density. The target velocity may also influence the process, through the temperature of the traversed medium. In addition, multiple electron transfer in single collisions further complicates the phenomena. In simplified cases, such as protons moving through atomic hydrogen, considerable agreement has been obtained between theory and experiments. However, in general the available theoretical approaches have only limited validity in restricted regions of the basic parameters. Since most measurements of charge changing cross sections are performed in atomic matter at ambient temperature, models are commonly based on the assumption of targets at rest; at astrophysical scales, however, temperature spans a wide range in atomic and ionized matter. Therefore, due to the lack of experimental data, an attempt is made here to quantify temperature-dependent cross sections on the basis of somewhat arbitrary, but physically reasonable, assumptions.
Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.
2007-01-01
Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ, and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
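The capillary inverse proportionality mentioned above can be written down directly. The sketch below assumes a single scale factor linking particle radius to an effective cylindrical pore radius (standing in for the void-ratio-based pore volume argument) and converts pore radius to matric head with the standard capillary relation; the scale factor and radii are illustrative, not values from the paper.

import numpy as np

SURFACE_TENSION = 0.072   # N/m, water-air
CONTACT_ANGLE = 0.0       # radians, perfectly wetting (assumed)
WATER_DENSITY = 1000.0    # kg/m^3
G = 9.81                  # m/s^2

def matric_head_from_particle_radius(r_particle, pore_scale=0.3):
    """Matric head (m of water, negative) for an effective cylindrical pore radius
    taken as pore_scale * particle radius (the scale factor is an assumption)."""
    r_pore = pore_scale * np.asarray(r_particle, dtype=float)
    capillary_pressure = 2.0 * SURFACE_TENSION * np.cos(CONTACT_ANGLE) / r_pore  # Pa
    return -capillary_pressure / (WATER_DENSITY * G)

# Particle radii from clay-sized to coarse sand (metres)
radii = np.array([1e-6, 1e-5, 1e-4, 1e-3])
for r, h in zip(radii, matric_head_from_particle_radius(radii)):
    print(f"particle radius {r:.0e} m -> matric head {h:9.2f} m")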
ERIC Educational Resources Information Center
Falk, Ruma; Kendig, Keith
2013-01-01
Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, N.; Pryor, R.J.
1997-09-01
This report presents a microsimulation model of a transition economy. Transition is defined as the process of moving from a state-enterprise economy to a market economy. The emphasis is on growing a market economy starting from basic microprinciples. The model described in this report extends and modifies the capabilities of Aspen, a new agent-based model that is being developed at Sandia National Laboratories on a massively parallel Paragon computer. Aspen is significantly different from traditional models of the economy. Aspen's emphasis on disequilibrium growth paths, its analysis based on evolution and emergent behavior rather than on a mechanistic view of society, and its use of learning algorithms to simulate the behavior of some agents rather than an assumption of perfect rationality make this model well-suited for analyzing economic variables of interest from transition economies. Preliminary results from several runs of the model are included.
A comparison of experimental and calculated thin-shell leading-edge buckling due to thermal stresses
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1988-01-01
High-temperature thin-shell leading-edge buckling test data are analyzed using NASA structural analysis (NASTRAN) as a finite element tool for predicting thermal buckling characteristics. Buckling points are predicted for several combinations of edge boundary conditions. The problem of relating the appropriate plate area to the edge stress distribution and the stress gradient is addressed in terms of analysis assumptions. Local plasticity was found to occur on the specimen analyzed, and this tended to simplify the basic problem since it effectively equalized the stress gradient from loaded edge to loaded edge. The initial loading was found to be difficult to select for the buckling analysis because of the transient nature of thermal stress. Multiple initial model loadings are likely required for complicated thermal stress time histories before a pertinent finite element buckling analysis can be achieved. The basic mode shapes determined from experimentation were correctly identified from computation.
Discrete Neural Signatures of Basic Emotions.
Saarimäki, Heini; Gotsopoulos, Athanasios; Jääskeläinen, Iiro P; Lampinen, Jouko; Vuilleumier, Patrik; Hari, Riitta; Sams, Mikko; Nummenmaa, Lauri
2016-06-01
Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
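A schematic of the cross-validated MVPA step on synthetic data: trial-wise activation patterns for six categories are classified with a linear model and scored by cross-validation. Real pipelines operate on preprocessed fMRI voxel patterns; the dimensions, noise levels, and classifier choice here are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels, n_classes = 40, 200, 6   # assumed sizes

# Synthetic "activation patterns": each emotion gets a weak class-specific signature plus noise
signatures = rng.normal(0, 0.5, size=(n_classes, n_voxels))
X = np.vstack([signatures[c] + rng.normal(0, 1.0, size=(n_trials_per_class, n_voxels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_trials_per_class)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance = {1 / n_classes:.2f})")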
Social factors in space station interiors
NASA Technical Reports Server (NTRS)
Cranz, Galen; Eichold, Alice; Hottes, Klaus; Jones, Kevin; Weinstein, Linda
1987-01-01
Using the example of the chair, which is often written into space station planning but which serves no non-cultural function in zero gravity, difficulties in overcoming cultural assumptions are discussed. An experimental approach is called for which would allow designers to separate cultural assumptions from logistic, social and psychological necessities. Simulations, systematic doubt and monitored brainstorming are recommended as part of basic research so that the designer will approach the problems of space module design with a complete program.
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Shek, Daniel T. L.; Ma, Cecilia M. S.
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
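For readers who want a scriptable counterpart to the SPSS procedures, a random-intercept LMM of the kind described can be sketched with statsmodels on simulated longitudinal data. The six-wave design mirrors the description above, but the data, effect sizes, and variable names are invented.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subjects, n_waves = 200, 6

subject = np.repeat(np.arange(n_subjects), n_waves)
wave = np.tile(np.arange(n_waves), n_subjects)
subject_intercept = rng.normal(0, 1.0, n_subjects)[subject]   # between-person variation
outcome = 10 + 0.5 * wave + subject_intercept + rng.normal(0, 1.0, subject.size)

data = pd.DataFrame({"subject": subject, "wave": wave, "outcome": outcome})

# Random-intercept linear mixed model: repeated observations nested within subjects
model = smf.mixedlm("outcome ~ wave", data, groups=data["subject"])
result = model.fit()
print(result.summary())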
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
NASA Astrophysics Data System (ADS)
Brassard, Pierre; Fontaine, Gilles
2015-06-01
The accretion-diffusion picture is the model par excellence for describing the presence of planetary debris polluting the atmospheres of relatively cool white dwarfs. In the time-dependent approach used in Paper II of this series (Fontaine et al. 2014), the basic assumption is that the accreted metals are trace elements and do not influence the background structure, which may be considered static in time. Furthermore, the usual assumption of instantaneous mixing in the convection zone is made. As part of the continuing development of our local evolutionary code, diffusion in presence of stellar winds or accretion is now fully coupled to evolution. Convection is treated as a diffusion process, i.e., the assumption of instantaneous mixing is relaxed, and, furthermore, overshooting is included. This allows feedback on the evolving structure from the accreting metals. For instance, depending of its abundance, a given metal may contribute enough to the overall opacity (especially in a He background) to change the size of the convection zone as a function of time. Our better approach also allows to include in a natural way the mechanism of thermohaline convection, which we discuss at some length. Also, it is easy to consider sophisticated time-dependent models of accretion from circumstellar disks, such as those developed by Roman Rafikov at Princeton for instance. The current limitations of our approach are 1) the calculations are extremely computer-intensive, and 2) we have not yet developed detailed EOS megatables for metals beyond oxygen.
Riddles of masculinity: gender, bisexuality, and thirdness.
Fogel, Gerald I
2006-01-01
Clinical examples are used to illuminate several riddles of masculinity-ambiguities, enigmas, and paradoxes in relation to gender, bisexuality, and thirdness-frequently seen in male patients. Basic psychoanalytic assumptions about male psychology are examined in the light of advances in female psychology, using ideas from feminist and gender studies as well as important and now widely accepted trends in contemporary psychoanalytic theory. By reexamining basic assumptions about heterosexual men, as has been done with ideas concerning women and homosexual men, complexity and nuance come to the fore to aid the clinician in treating the complex characterological pictures seen in men today. In a context of rapid historical and theoretical change, the use of persistent gender stereotypes and unnecessarily limiting theoretical formulations, though often unintended, may mask subtle countertransference and theoretical blind spots, and limit optimal clinical effectiveness.
Costing interventions in primary care.
Kernick, D
2000-02-01
Against a background of increasing demands on limited resources, studies that relate benefits of health interventions to the resources they consume will be an important part of any decision-making process in primary care, and an accurate assessment of costs will be an important part of any economic evaluation. Although there is no such thing as a gold standard cost estimate, there are a number of basic costing concepts that underlie any costing study. How costs are derived and combined will depend on the assumptions that have been made in their derivation. It is important to be clear what assumptions have been made and why in order to maintain consistency across comparative studies and prevent inappropriate conclusions being drawn. This paper outlines some costing concepts and principles to enable primary care practitioners and researchers to have a basic understanding of costing exercises and their pitfalls.
NASA Technical Reports Server (NTRS)
Bast, Callie Corinne Scheidt
1994-01-01
This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
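A hedged Monte Carlo sketch of a randomized multifactor strength-degradation model: each effect multiplicatively reduces a normalized baseline strength, and each empirical exponent is sampled from a normal distribution, so the remaining lifetime strength has a cumulative distribution rather than a single value. The product form, effect levels, and distribution parameters below are illustrative assumptions, not the actual PROMISS equation or its calibrated Inconel 718 constants.

import numpy as np

rng = np.random.default_rng(123)
n_samples = 50_000
s0 = 1.0   # normalized baseline (virgin) strength

# Each effect: a normalized current level in [0, 1) and an exponent q ~ Normal(mean, sd).
# All levels and (mean, sd) pairs are hypothetical placeholders.
effects = {
    "high_temperature":   (0.55, (0.30, 0.05)),
    "mechanical_fatigue": (0.40, (0.25, 0.04)),
    "creep":              (0.20, (0.15, 0.03)),
    "thermal_fatigue":    (0.30, (0.20, 0.04)),
}

strength = np.full(n_samples, s0)
for level, (q_mean, q_sd) in effects.values():
    q = rng.normal(q_mean, q_sd, n_samples)
    strength *= (1.0 - level) ** q    # each effect multiplicatively degrades strength

# Empirical cumulative distribution of remaining lifetime strength
strength.sort()
for p in (0.05, 0.50, 0.95):
    print(f"P(strength <= {strength[int(p * n_samples) - 1]:.3f}) ~= {p:.2f}")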
Representation of natural numbers in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, Paul
2001-03-01
This paper represents one approach to making explicit some of the assumptions and conditions implied in the widespread representation of numbers by composite quantum systems. Any nonempty set and associated operations is a set of natural numbers or a model of arithmetic if the set and operations satisfy the axioms of number theory or arithmetic. This paper is limited to k-ary representations of length L and to the axioms for arithmetic modulo k^L. A model of the axioms is described based on an abstract L-fold tensor product Hilbert space H^arith. Unitary maps of this space onto a physical parameter based product space H^phy are then described. Each of these maps makes states in H^phy, and the induced operators, a model of the axioms. Consequences of the existence of many of these maps are discussed along with the dependence of Grover's and Shor's algorithms on these maps. The importance of the main physical requirement, that the basic arithmetic operations are efficiently implementable, is discussed. This condition states that there exist physically realizable Hamiltonians that can implement the basic arithmetic operations and that the space-time and thermodynamic resources required are polynomial in L.
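A purely classical sketch of the arithmetic structure involved may help: length-L, k-ary digit strings with all operations taken modulo k^L. The choice of k and L below is arbitrary and the code says nothing about the quantum representation itself.

```python
K, L = 2, 4           # binary representation of length 4: arithmetic modulo 2**4
MOD = K ** L

def to_digits(n):
    """Length-L, little-endian k-ary digit string for n (mod k**L)."""
    n %= MOD
    return [(n // K**i) % K for i in range(L)]

def from_digits(digits):
    return sum(d * K**i for i, d in enumerate(digits)) % MOD

a, b = to_digits(11), to_digits(7)
# addition of digit strings is consistent with addition modulo k**L
assert from_digits(to_digits(from_digits(a) + from_digits(b))) == (11 + 7) % MOD
print(to_digits(11), "+", to_digits(7), "=", to_digits((11 + 7) % MOD))
```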
Stochastic geometry in disordered systems, applications to quantum Hall transitions
NASA Astrophysics Data System (ADS)
Gruzberg, Ilya
2012-02-01
A spectacular success in the study of random fractal clusters and their boundaries in statistical mechanics systems at or near criticality using Schramm-Loewner Evolutions (SLE) naturally calls for extensions in various directions. Can this success be repeated for disordered and/or non-equilibrium systems? Naively, when one thinks about disordered systems and their average correlation functions, one of the very basic assumptions of SLE, the so-called domain Markov property, is lost. Also, in some lattice models of Anderson transitions (the network models) there are no natural clusters to consider. Nevertheless, in this talk I will argue that one can apply the so-called conformal restriction, a notion of stochastic conformal geometry closely related to SLE, to study the integer quantum Hall transition and its variants. I will focus on the Chalker-Coddington network model and will demonstrate that its average transport properties can be mapped to a classical problem where the basic objects are geometric shapes (loosely speaking, the current paths) that obey an important restriction property. At the transition point this allows one to use the theory of conformal restriction to derive exact expressions for point contact conductances in the presence of various non-trivial boundary conditions.
Polymer physics experiments with single DNA molecules
NASA Astrophysics Data System (ADS)
Smith, Douglas E.
1999-11-01
Bacteriophage DNA molecules were taken as a model flexible polymer chain for the experimental study of polymer dynamics at the single molecule level. Video fluorescence microscopy was used to directly observe the conformational dynamics of fluorescently labeled molecules, optical tweezers were used to manipulate individual molecules, and micro-fabricated flow cells were used to apply controlled hydrodynamic strain to molecules. These techniques constitute a powerful new experimental approach to the study of basic polymer physics questions. I have used these techniques to study the diffusion and relaxation of isolated and entangled polymer molecules and the hydrodynamic deformation of polymers in elongational and shear flows. These studies revealed a rich, and previously unobserved, "molecular individualism" in the dynamical behavior of single molecules. Individual measurements on ensembles of identical molecules allowed the average conformation to be determined as well as the underlying probability distributions for molecular conformation. Scaling laws that predict the dependence of properties on chain length and concentration were also tested. The basic assumptions of the reptation model were directly confirmed by visualizing the dynamics of entangled chains.
Modeling Payload Stowage Impacts on Fire Risks On-Board the International Space Station
NASA Technical Reports Server (NTRS)
Anton, Kellie e.; Brown, Patrick F.
2010-01-01
The purpose of this presentation is to determine the risks of fire on-board the ISS due to non-standard stowage. ISS stowage is constantly being reexamined for optimality. Non-standard stowage involves stowing items outside of rack drawers, and fire risk is a key concern that is heavily mitigated. A methodology is needed to capture and account for the fire risk arising from non-standard stowage. The contents include: 1) Fire Risk Background; 2) General Assumptions; 3) Modeling Techniques; 4) Event Sequence Diagram (ESD); 5) Qualitative Fire Analysis; 6) Sample Qualitative Results for Fire Risk; 7) Qualitative Stowage Analysis; 8) Sample Qualitative Results for Non-Standard Stowage; and 9) Quantitative Analysis Basic Event Data.
Brain segmentation and forebrain development in amniotes.
Puelles, L
2001-08-01
This essay contains a general introduction to the segmental paradigm postulated for the morphological interpretation of cellular and molecular data on the developing forebrain of vertebrates. The introduction examines the nature of the problem, indicating the role of topological analysis in conjunction with analysis of various developmental cell processes in the developing brain. Another section explains how morphological analysis in essence depends on assumptions (paradigms), which should be reasonable and well founded in other research, but must remain tentative until time reveals their necessary status as facts for evolving theories (or leads to their substitution by alternative assumptions). The chosen paradigm affects many aspects of the analysis, including the sectioning planes one wants to use and the meaning of what one sees in brain sections. Dorsoventral patterning is presented as the fundament for defining what is longitudinal, whereas less well-understood anteroposterior patterning results from transversal regionalization. The concept of neural segmentation is covered, first historically, and then step by step, explaining the prosomeric model in basic detail, stopping at the diencephalon, the extratelencephalic secondary prosencephalon, and the telencephalon. A new pallial model for telencephalic development and evolution is presented as well, updating the proposed homologies between the sauropsidian and mammalian telencephalon.
Macpherson, Alexander J; Principe, Peter P; Shao, Yang
2013-04-15
Researchers are increasingly using data envelopment analysis (DEA) to examine the efficiency of environmental policies and resource allocations. An assumption of the basic DEA model is that decision-makers operate within homogeneous environments. However, this assumption is not valid when environmental performance is influenced by variables beyond managerial control. Understanding the influence of these variables is important to distinguish between characterizing environmental conditions and identifying opportunities to improve environmental performance. While environmental assessments often focus on characterizing conditions, the point of using DEA is to identify opportunities to improve environmental performance and thereby prevent (or rectify) an inefficient allocation of resources. We examine the role of exogenous variables such as climate, hydrology, and topography in producing environmental impacts such as deposition, runoff, invasive species, and forest fragmentation within the United States Mid-Atlantic region. We apply a four-stage procedure to adjust environmental impacts in a DEA model that seeks to minimize environmental impacts while obtaining given levels of socioeconomic outcomes. The approach creates a performance index that bundles multiple indicators while adjusting for variables that are outside management control, offering numerous advantages for environmental assessment. Published by Elsevier Ltd.
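For readers unfamiliar with the basic DEA model mentioned here, the sketch below solves the standard input-oriented CCR envelopment linear program with scipy, treating environmental impacts as inputs and socioeconomic outcomes as outputs. The data are hypothetical, and the paper's four-stage adjustment for exogenous variables is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical data: rows = decision-making units (e.g. watersheds),
# X = environmental impacts treated as inputs, Y = socioeconomic outcomes
X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 5.0], [8.0, 4.0]])
Y = np.array([[10.0], [12.0], [11.0], [14.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of unit j0 (basic DEA, homogeneous environment)."""
    # decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij - theta * x_i,j0 <= 0   (composite impacts within scaled own impacts)
    A_in = np.hstack([-X[j0].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # -sum_j lambda_j * y_rj <= -y_r,j0             (composite outcomes at least as large as own)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for j in range(n):
    print(f"unit {j}: efficiency = {ccr_efficiency(j):.3f}")
```

An efficiency of 1 marks a unit on the best-practice frontier; values below 1 indicate how far impacts could, in principle, be reduced while holding outcomes fixed.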
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) it includes a variety of geologic models, (2) it uses an analytic methodology instead of Monte Carlo simulation, (3) it possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) it runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.
Phylogenetic Analysis Supports the Aerobic-Capacity Model for the Evolution of Endothermy.
Nespolo, Roberto F; Solano-Iguaran, Jaiber J; Bozinovic, Francisco
2017-01-01
The evolution of endothermy is a controversial topic in evolutionary biology, although several hypotheses have been proposed to explain it. To a great extent, the debate has centered on the aerobic-capacity model (AC model), an adaptive hypothesis involving maximum and resting rates of metabolism (MMR and RMR, respectively; hereafter "metabolic traits"). The AC model posits that MMR, a proxy of aerobic capacity and sustained activity, is the target of directional selection and that RMR is also influenced as a correlated response. Associated with this reasoning are the assumptions that (1) factorial aerobic scope (FAS; MMR/RMR) and net aerobic scope (NAS; MMR - RMR), two commonly used indexes of aerobic capacity, show different evolutionary optima and (2) the functional link between MMR and RMR is a basic design feature of vertebrates. To test these assumptions, we performed a comparative phylogenetic analysis in 176 vertebrate species, ranging from fish and amphibians to birds and mammals. Using disparity-through-time analysis, we also explored trait diversification and fitted different evolutionary models to study the evolution of metabolic traits. As predicted, we found (1) a positive phylogenetic correlation between RMR and MMR, (2) diversification of metabolic traits exceeding that of random-walk expectations, (3) that a model assuming selection fits the data better than alternative models, and (4) that a single evolutionary optimum best fits FAS data, whereas a model involving two optima (one for ectotherms and another for endotherms) is the best explanatory model for NAS. These results support the AC model and give novel information concerning the mode and tempo of physiological evolution of vertebrates.
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
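A minimal sketch of the point-wise classification idea is given below, using a random forest from scikit-learn. The features, labels, and data are hypothetical stand-ins for the flow invariants and DNS/LES-derived assumption-breakdown labels described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# hypothetical per-point flow features (e.g. strain/rotation invariants, wall-distance
# Reynolds number, turbulence intensity) and a binary label marking where a specific
# RANS assumption (e.g. Boussinesq linearity) breaks down in the reference data
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * rng.normal(size=5000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
clf.fit(X_tr, y_tr)

# point-by-point classification of held-out points into low/high uncertainty
print(classification_report(y_te, clf.predict(X_te)))
```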
Behavioral health at-risk contracting--a rate development and financial reporting guide.
Zinser, G R
1994-01-01
The process of developing rates for behavioral capitation contracts can seem mysterious and intimidating. The following article explains several key features of the method used to develop capitation rates. These include: (1) a basic understanding of the mechanics of rate calculation; (2) awareness of the variables to be considered and assumptions to be made; (3) a source of information to use as a basis for these assumptions; and (4) a system to collect detailed actual experience data.
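To illustrate the mechanics of rate calculation in item (1), the sketch below computes a per-member-per-month (PMPM) cost from assumed annual utilization per 1,000 members and assumed unit costs, then applies loading factors. All figures and the loading structure are hypothetical and are not drawn from the article.

```python
# Minimal capitation-rate sketch: annual utilization per 1,000 covered lives,
# an assumed average unit cost, and a loading for administration and risk margin.
services = {
    # service: (annual units per 1,000 members, average cost per unit in dollars)
    "outpatient_visits": (900, 110.0),
    "inpatient_days":    (45, 850.0),
    "partial_hospital":  (60, 300.0),
}
admin_load = 0.12   # hypothetical administrative load
risk_margin = 0.03  # hypothetical risk/contingency margin

pmpm = sum(units_per_1000 / 1000 * unit_cost / 12
           for units_per_1000, unit_cost in services.values())
capitation_rate = pmpm * (1 + admin_load + risk_margin)
print(f"base PMPM = ${pmpm:.2f}, loaded capitation rate = ${capitation_rate:.2f}")
```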
1986-09-01
Brazilian-American Chamber of Commerce, Mr. Frank J. Devine, Executive Director; Embraer, Empresa Brasileira De Aeronautica, Mr. Salo Roth, Vice President... Throughout this study the following assumptions have been made. First, it is assumed that the reader has a basic familiarity with aircraft. Therefore... of the weapons acquisition process. Third, the assumption is made that most readers are familiar with U.S. procedures involving the sale of
NASA Astrophysics Data System (ADS)
Korayem, M. H.; Habibi Sooha, Y.; Rastegar, Z.
2018-05-01
Manipulation of biological particles by atomic force microscopy is used to transfer these particles inside the body's cells, to diagnose and destroy cancer cells, and to deliver drugs to damaged cells. Because this process cannot be observed directly while it takes place, modeling and simulation are essential. The contact of the tip with the biological particle is important during manipulation; therefore, the first step of the modeling is choosing an appropriate contact model. Most studies of contact between the atomic force microscope and biological particles treat the biological particle as an elastic material. This is not an appropriate assumption, because biological cells are inherently soft and the assumption ignores loading history. In this paper, elastic and viscoelastic JKR theories were used in modeling and simulation of 3D manipulation for three modes: tip-particle sliding, particle-substrate sliding, and particle-substrate rolling. Results showed that the critical force and time for the motion modes (sliding and rolling) are very close in the elastic and viscoelastic cases, but both are lower in the viscoelastic case. Then, three friction models, Coulomb, LuGre, and HK, were used for the tip-particle sliding mode in the first phase of manipulation to bring the results closer to reality. In both the Coulomb and LuGre models, the critical force and time are very close for the elastic and viscoelastic cases, but in general the HK model predicted higher critical force and time than LuGre, which in turn predicted higher values than Coulomb.
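For orientation, the sketch below evaluates the classical elastic JKR relations for contact radius and pull-off (critical) force; the viscoelastic extension and the loading-history dependence discussed in the abstract are not shown, and all parameter values are hypothetical.

```python
import numpy as np

def jkr_contact_radius(F, R, E_star, W):
    """Classical elastic JKR contact radius (Johnson-Kendall-Roberts).

    F: applied normal load [N], R: effective tip/particle radius [m],
    E_star: reduced elastic modulus [Pa], W: work of adhesion [J/m^2].
    """
    K = 4.0 / 3.0 * E_star
    term = 3.0 * np.pi * W * R
    return ((R / K) * (F + term + np.sqrt(2.0 * term * F + term**2))) ** (1.0 / 3.0)

def jkr_pull_off_force(R, W):
    """Magnitude of the JKR critical (pull-off) force, 1.5 * pi * W * R."""
    return 1.5 * np.pi * W * R

# hypothetical values for a soft biological particle under an AFM tip
R, E_star, W = 1e-6, 5e4, 1e-4      # 1 um radius, 50 kPa, 0.1 mJ/m^2 adhesion
print("pull-off force [nN]:", jkr_pull_off_force(R, W) * 1e9)
print("contact radius at 1 nN load [nm]:", jkr_contact_radius(1e-9, R, E_star, W) * 1e9)
```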
Material failure modelling in metals at high strain rates
NASA Astrophysics Data System (ADS)
Panov, Vili
2005-07-01
Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Using stress gauges supported with PMMA blocks on the back of the target plates, stress-time histories have been recorded. After testing, microstructural observations of the softly recovered spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept that thermal activation governs material separation during fracture was adopted as the basic mechanism for this material failure model development. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and it was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations was performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, is able to reproduce the "pull-back" stresses of the free surface caused by the creation of internal spalling, and enables one to analyse numerically the spalling over a wide range of impact velocities.
The Central Registry for Child Abuse Cases: Rethinking Basic Assumptions
ERIC Educational Resources Information Center
Whiting, Leila
1977-01-01
Class data pools on abused and neglected children and their families are found desirable for program planning, but identification by name is of questionable value and possibly a dangerous invasion of civil liberties. (MS)
Modelling Framework and Assistive Device for Peripheral Intravenous Injections
NASA Astrophysics Data System (ADS)
Kam, Kin F.; Robinson, Martin P.; Gilbert, Mathew A.; Pelah, Adar
2016-02-01
Intravenous access for blood sampling or drug administration that requires peripheral venepuncture is perhaps the most common invasive procedure practiced in hospitals, clinics and general practice surgeries. We describe an idealised mathematical framework for modelling the dynamics of the peripheral venepuncture process. Basic assumptions of the model are confirmed through motion analysis of needle trajectories during venepuncture, taken from video recordings of a skilled practitioner injecting into a practice kit. The framework is also applied to the design and construction of a proposed device for accurate needle guidance during venepuncture administration, which was assessed as consistent and repeatable in application and did not lead to over-puncture. The study provides insights into the ubiquitous peripheral venepuncture process and may contribute to applications in training and in the design of new devices, including for use in robotic automation.
Self-transcendent positive emotions increase spirituality through basic world assumptions.
Van Cappellen, Patty; Saroglou, Vassilis; Iweins, Caroline; Piovesana, Maria; Fredrickson, Barbara L
2013-01-01
Spirituality has mostly been studied in psychology as implied in the process of overcoming adversity, being triggered by negative experiences, and providing positive outcomes. By reversing this pathway, we investigated whether spirituality may also be triggered by self-transcendent positive emotions, which are elicited by stimuli appraised as demonstrating higher good and beauty. In two studies, elevation and/or admiration were induced using different methods. These emotions were compared to two control groups, a neutral state and a positive emotion (mirth). Self-transcendent positive emotions increased participants' spirituality (Studies 1 and 2), especially for the non-religious participants (Study 1). Two basic world assumptions, i.e., belief in life as meaningful (Study 1) and in the benevolence of others and the world (Study 2) mediated the effect of these emotions on spirituality. Spirituality should be understood not only as a coping strategy, but also as an upward spiralling pathway to and from self-transcendent positive emotions.
An assessment of finite-element modeling techniques for thick-solid/thin-shell joints analysis
NASA Technical Reports Server (NTRS)
Min, J. B.; Androlake, S. G.
1993-01-01
The subject of finite-element modeling has long been of critical importance to the practicing designer/analyst, who is often faced with obtaining an accurate and cost-effective structural analysis of a particular design. Typically, these two goals are in conflict. The purpose here is to discuss finite-element modeling of solid/shell connections (joints), which are significant for the practicing modeler. Several approaches are currently in use, but various assumptions frequently restrict their applicability. Techniques currently used in practical applications were tested, in particular to see which is best suited to the computer-aided design (CAD) environment. Some basic thoughts regarding each technique are also discussed. As a consequence, suggestions based on the results are given to help obtain reliable results for geometrically complex joints, where the deformation and stress behavior are complicated.
Modelling students' knowledge organisation: Genealogical conceptual networks
NASA Astrophysics Data System (ADS)
Koponen, Ismo T.; Nousiainen, Maija
2018-04-01
Learning scientific knowledge is largely based on understanding what its key concepts are and how they are related. The relational structure of concepts also affects how concepts are introduced in teaching scientific knowledge. We model here how students organise their knowledge when they represent their understanding of how physics concepts are related. The model is based on the assumptions that students use simple basic linking motifs in introducing new concepts and mostly relate them to concepts that were introduced a few steps earlier, i.e. following a genealogical ordering. The resulting genealogical networks have relatively high local clustering coefficients of nodes but otherwise resemble networks obtained with an identical degree distribution of nodes but with random linking between them (i.e. the configuration model). However, a few key nodes having a special structural role emerge, and these nodes have higher-than-average communicability betweenness centralities. These features agree with the empirically found properties of students' concept networks.
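The comparison against a degree-preserving null model can be sketched with networkx as below. The graph used here is an arbitrary synthetic stand-in for a student concept network, not data from the study.

```python
import networkx as nx

# hypothetical stand-in for a student concept network (nodes = concepts, edges = relations)
G = nx.powerlaw_cluster_graph(200, 2, 0.3, seed=42)

# configuration-model null: identical degree sequence, random linking
deg_sequence = [d for _, d in G.degree()]
C = nx.configuration_model(deg_sequence, seed=42)
C = nx.Graph(C)                              # collapse parallel edges
C.remove_edges_from(nx.selfloop_edges(C))    # drop self-loops

print("observed average clustering:      ", nx.average_clustering(G))
print("configuration-model clustering:   ", nx.average_clustering(C))

# nodes with structurally special roles: communicability betweenness centrality
cbc = nx.communicability_betweenness_centrality(G)
top = sorted(cbc, key=cbc.get, reverse=True)[:5]
print("highest communicability betweenness nodes:", top)
```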
Equations of state for explosive detonation products: The PANDA model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerley, G.I.
1994-05-01
This paper discusses a thermochemical model for calculating equations of state (EOS) for the detonation products of explosives. This model, which was first presented at the Eighth Detonation Symposium, is available in the PANDA code and is referred to here as "the PANDA model". The basic features of the PANDA model are as follows. (1) Statistical-mechanical theories are used to construct EOS tables for each of the chemical species that are to be allowed in the detonation products. (2) The ideal mixing model is used to compute the thermodynamic functions for a mixture of these species, and the composition of the system is determined from the assumption of chemical equilibrium. (3) For hydrocode calculations, the detonation product EOS are used in tabular form, together with a reactive burn model that allows description of shock-induced initiation and growth or failure as well as ideal detonation wave propagation. This model has been implemented in the three-dimensional Eulerian code, CTH.
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than estimates based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider when autocorrelation in the residuals was explicitly accounted for, and the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
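A minimal sketch of the kind of likelihood involved is given below: a conditional Gaussian log-likelihood for residuals that follow an AR(2) error model, as might be evaluated inside an MCMC sampler. The data and parameter values are placeholders, not the authors' case studies.

```python
import numpy as np

def ar2_log_likelihood(residuals, phi1, phi2, sigma):
    """Conditional Gaussian log-likelihood of residuals under an AR(2) error model.

    e_t = phi1 * e_{t-1} + phi2 * e_{t-2} + nu_t,  with nu_t ~ N(0, sigma^2).
    The first two residuals are conditioned on (a common simplification).
    """
    e = np.asarray(residuals, dtype=float)
    nu = e[2:] - phi1 * e[1:-1] - phi2 * e[:-2]      # whitened innovations
    n = nu.size
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(nu**2) / sigma**2

# usage sketch: residuals = observed minus simulated soil water contents (synthetic here)
t = np.linspace(0, 10, 200)
observed = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
simulated = np.sin(t)
print(ar2_log_likelihood(observed - simulated, phi1=0.6, phi2=0.2, sigma=0.05))
```

Setting phi2 = 0 recovers the AR(1) case, and setting both coefficients to zero recovers the usual independent-residual likelihood whose shortcomings the abstract discusses.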
Flapping response characteristics of hingeless rotor blades by a generalized harmonic balance method
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ormiston, R. A.
1975-01-01
Linearized equations of motion for the flapping response of flexible rotor blades in forward flight are derived in terms of generalized coordinates. The equations are solved using a matrix form of the method of linear harmonic balance, yielding response derivatives for each harmonic of the blade deformations and of the hub forces and moments. Numerical results and approximate closed-form expressions for rotor derivatives are used to illustrate the relationships between rotor parameters, modeling assumptions, and rotor response characteristics. Finally, basic hingeless rotor response derivatives are presented in tabular and graphical form for a wide range of configuration parameters and operating conditions.
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Ekchian, J. A.; Heywood, J. B.; Replogle, K. K.
1984-01-01
It was shown that reductions in heat loss at appropriate points in the diesel engine result in substantially increased exhaust enthalpy. The concept exploited to recover this increased enthalpy is the turbocharged, turbocompounded diesel engine cycle. A computer simulation of the heavy-duty turbocharged turbocompounded diesel engine system was undertaken. This allows the definition of the tradeoffs associated with the introduction of ceramic materials in various parts of the total engine system, and the study of system optimization. The basic assumptions and the mathematical relationships used in the simulation of the model engine are described.
Simple wealth distribution model causing inequality-induced crisis without external shocks
NASA Astrophysics Data System (ADS)
Benisty, Henri
2017-05-01
We address the issue of the dynamics of wealth accumulation and economic crisis triggered by extreme inequality, attempting to rely on assumptions that are as intrinsic as possible. Our general framework is that of pure or modified multiplicative processes, basically geometric Brownian motions. In contrast with the usual approach of injecting into such stochastic agent models either specific, idiosyncratic internal nonlinear interaction patterns or macroscopic disruptive features, we propose a dynamic inequality model where the attainment of a sizable fraction of the total wealth by very few agents induces a crisis regime with strong intermittency, the explicit coupling between the richest and the rest being a mere normalization mechanism, hence with minimal extrinsic assumptions. The model thus harnesses the recognized lack of ergodicity of geometric Brownian motions. It also provides a statistical intuition for the consequences of Thomas Piketty's recent "r > g" (return rate > growth rate) paradigmatic analysis of very-long-term wealth trends. We suggest that the "water-divide" of wealth flow may define effective classes, providing an objective entry point to calibrate the model. Consistently, we check that a tax mechanism associated with a few percent relative bias on elementary daily transactions is able to slow or stop the build-up of large wealth. When extreme fluctuations are tamed down to a stationary regime with sizable but steadier inequalities, the model should still offer opportunities to study the dynamics of crisis and the inner effective classes induced through external or internal factors.
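The basic multiplicative mechanism can be sketched in a few lines: agents whose wealth follows independent geometric Brownian steps, optionally damped by a small flat levy that is redistributed uniformly. This is a crude illustrative stand-in for the few-percent transaction bias discussed above; the parameters and the redistribution rule are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=1000, n_days=5000, mu=0.0002, sigma=0.02, tax=0.0):
    """Multiplicative (geometric Brownian) wealth dynamics with an optional
    daily flat levy redistributed equally among all agents."""
    w = np.ones(n_agents)
    top_share = np.empty(n_days)
    for t in range(n_days):
        w *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.normal(size=n_agents))
        if tax > 0:
            levy = tax * w
            w = w - levy + levy.sum() / n_agents        # uniform redistribution
        top_share[t] = np.sort(w)[-10:].sum() / w.sum() # share held by richest 10 agents
    return top_share

print("final top-10 share, no levy:    %.3f" % simulate(tax=0.0)[-1])
print("final top-10 share, 0.5%% levy: %.3f" % simulate(tax=0.005)[-1])
```

Without the levy the non-ergodicity of the multiplicative process concentrates wealth in ever fewer agents; even a small redistributive bias visibly slows that build-up, in the spirit of the check described in the abstract.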
Phillips, Steven; Wilson, William H.
2011-01-01
A complete theory of cognitive architecture (i.e., the basic processes and modes of composition that together constitute cognitive behaviour) must explain the systematicity property—why our cognitive capacities are organized into particular groups of capacities, rather than some other, arbitrary collection. The classical account supposes: (1) syntactically compositional representations; and (2) processes that are sensitive to—compatible with—their structure. Classical compositionality, however, does not explain why these two components must be compatible; they are only compatible by the ad hoc assumption (convention) of employing the same mode of (concatenative) compositionality (e.g., prefix/postfix, where a relation symbol is always prepended/appended to the symbols for the related entities). Architectures employing mixed modes do not support systematicity. Recently, we proposed an alternative explanation without ad hoc assumptions, using category theory. Here, we extend our explanation to domains that are quasi-systematic (e.g., aspects of most languages), where the domain includes some but not all possible combinations of constituents. The central category-theoretic construct is an adjunction involving pullbacks, where the primary focus is on the relationship between processes modelled as functors, rather than the representations. A functor is a structure-preserving map (or construction, for our purposes). An adjunction guarantees that the only pairings of functors are the systematic ones. Thus, (quasi-)systematicity is a necessary consequence of a categorial cognitive architecture whose basic processes are functors that participate in adjunctions. PMID:21857816
Velocity Measurement by Scattering from Index of Refraction Fluctuations Induced in Turbulent Flows
NASA Technical Reports Server (NTRS)
Lading, Lars; Saffman, Mark; Edwards, Robert
1996-01-01
Induced phase screen scattering is defined as the scattering of light from weak index-of-refraction fluctuations induced by turbulence. The basic assumptions and requirements for induced phase screen scattering, including scale requirements, are presented.
Undergraduate Cross Registration.
ERIC Educational Resources Information Center
Grupe, Fritz H.
This report discusses various aspects of undergraduate cross-registration procedures, including the dimensions, values, roles and functions, basic assumptions, and the facilitation and encouragement of cross-registration. Dimensions of cross-registration encompass financial exchange, eligibility, program limitations, type of grade and credit; extent of…
The Peace Movement: An Exercise in Micro-Macro Linkages.
ERIC Educational Resources Information Center
Galtung, Johan
1988-01-01
Contends that the basic assumption of the peace movement is the abuse of military power by the state. Argues that the peace movement is most effective through linkages with cultural, political, and economic forces in society. (BSR)
Graduate Education in Psychology: A Comment on Rogers' Passionate Statement
ERIC Educational Resources Information Center
Brown, Robert C., Jr.; Tedeschi, James T.
1972-01-01
The authors hope that this critical evaluation can place Carl Rogers' assumptions in perspective; they propose a compromise program meant to satisfy the basic aims of a humanistic psychology program. For Rogers' rejoinder see AA 512 869. (MB)
Volcanic Plume Heights on Mars: Limits of Validity for Convective Models
NASA Technical Reports Server (NTRS)
Glaze, Lori S.; Baloga, Stephen M.
2002-01-01
Previous studies have overestimated volcanic plume heights on Mars. In this work, we demonstrate that volcanic plume rise models, as currently formulated, have only limited validity in any environment. These limits are easily violated in the current Mars environment and may also be violated for terrestrial and early Mars conditions. We indicate some of the shortcomings of the model with emphasis on the limited applicability to current Mars conditions. Specifically, basic model assumptions are violated when (1) vertical velocities exceed the speed of sound, (2) radial expansion rates exceed the speed of sound, (3) radial expansion rates approach or exceed the vertical velocity, or (4) plume radius grossly exceeds plume height. All of these criteria are violated for the typical Mars example given here. Solutions imply that the convective rise model is only valid to a height of approximately 10 kilometers. The reason for the model breakdown is that the current Mars atmosphere is not of sufficient density to satisfy the conservation equations. It is likely that diffusion and other effects governed by higher-order differential equations are important within the first few kilometers of rise. When the same criteria are applied to eruptions into a higher-density early Mars atmosphere, we find that eruption rates higher than 1.4 x 10^9 kilograms per second also violate model assumptions. This implies a maximum extent of approximately 65 kilometers for convective plumes on early Mars. The estimated plume heights for both current and early Mars are significantly lower than those previously predicted in the literature. Therefore, global-scale distribution of ash seems implausible.
NASA Astrophysics Data System (ADS)
Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.
2017-08-01
We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
Radiant exchange in partially specular architectural environments
NASA Astrophysics Data System (ADS)
Beamer, C. Walter; Muehleisen, Ralph T.
2003-10-01
The radiant exchange method, also known as radiosity, was originally developed for thermal radiative heat transfer applications. Later it was used to model architectural lighting systems, and more recently it has been extended to model acoustic systems. While there are subtle differences in these applications, the basic method is based on solving a system of energy balance equations, and it is best applied to spaces with mainly diffuse reflecting surfaces. The obvious drawback to this method is that it is based on the assumption that all surfaces in the system are diffuse reflectors. Because almost all architectural systems have at least some partially specular reflecting surfaces, it is important to extend the radiant exchange method to deal with this type of surface reflection. [Work supported by NSF.]
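The purely diffuse energy-balance system referred to here can be written as B_i = E_i + rho_i * sum_j F_ij B_j and solved as a linear system, as in the sketch below. The view factors, reflectances, and source terms are hypothetical, and the partially specular extension discussed in the abstract is not included.

```python
import numpy as np

# Diffuse radiosity balance: B_i = E_i + rho_i * sum_j F_ij * B_j
# => (I - diag(rho) @ F) B = E.  All values below are hypothetical.
F = np.array([[0.0, 0.6, 0.4],       # view factors between three surfaces
              [0.3, 0.0, 0.7],
              [0.2, 0.7, 0.1]])
rho = np.array([0.8, 0.5, 0.3])      # diffuse reflectances
E = np.array([100.0, 0.0, 0.0])      # direct emission / source term

B = np.linalg.solve(np.eye(3) - np.diag(rho) @ F, E)
print("surface radiosities:", B)
```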
Factors influencing the thermally-induced strength degradation of B/Al composites
NASA Technical Reports Server (NTRS)
Dicarlo, J. A.
1982-01-01
Literature data related to the thermally-induced strength degradation of B/Al composites were examined in the light of fracture theories based on reaction-controlled fiber weakening. Under the assumption of a parabolic time-dependent growth for the interfacial reaction product, a Griffith-type fracture model was found to yield simple equations whose predictions were in good agreement with data for boron fiber average strength and for B/Al axial fracture strain. The only variables in these equations were the time and temperature of the thermal exposure and an empirical factor related to fiber surface smoothness prior to composite consolidation. Such variables as fiber diameter and aluminum alloy composition were found to have little influence. The basic and practical implications of the fracture model equations are discussed.
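The flavor of such a fracture model can be sketched as follows: a reaction layer grows parabolically in time with an Arrhenius rate constant and is treated as adding to an initial surface flaw, with strength given by a Griffith-type relation. All constants below are hypothetical placeholders and are not the fitted values for boron fibers or B/Al.

```python
import numpy as np

def fiber_strength(t_hours, T_kelvin, K_IC=2.0e6, a0=1.0e-7, k0=1e-2, Q=2.0e5):
    """Illustrative Griffith-type strength after thermal exposure.

    A reaction layer of thickness x = sqrt(k*t) (parabolic growth, Arrhenius rate
    constant k) adds to an initial surface flaw of depth a0, so strength is
    K_IC / sqrt(pi * (a0 + x)).  Constants are hypothetical, not fitted to B/Al data.
    """
    R = 8.314
    k = k0 * np.exp(-Q / (R * T_kelvin))              # parabolic rate constant [m^2/h]
    x = np.sqrt(k * np.asarray(t_hours, dtype=float)) # reaction-layer thickness [m]
    return K_IC / np.sqrt(np.pi * (a0 + x))

for t in (1, 10, 100):
    print(f"{t:4d} h at 800 K -> {fiber_strength(t, 800) / 1e9:.2f} GPa")
```

The only inputs are exposure time and temperature plus an empirical surface-condition constant, mirroring the small set of variables highlighted in the abstract.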
Computation in generalised probabilistic theories
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Barrett, Jonathan
2015-08-01
From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.
Non-stationary hydrologic frequency analysis using B-spline quantile regression
NASA Astrophysics Data System (ADS)
Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.
2017-11-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and the results of conventional analysis become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modelled in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
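A simplified, non-Bayesian sketch of the idea is shown below: a B-spline basis in a climate-index covariate combined with quantile regression. The data are synthetic, the names are placeholders, and the MCMC estimation and BIC-based knot selection of the study are not reproduced.

```python
import numpy as np
import pandas as pd
import patsy
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)

# hypothetical record: annual maximum discharge and a climate-index covariate
n = 80
data = pd.DataFrame({"climate_index": rng.uniform(-2, 2, n)})
data["q_max"] = 300 + 80 * np.tanh(data["climate_index"]) + rng.gamma(2.0, 25.0, n)

# cubic B-spline basis for the covariate (degree and knots would be chosen via BIC)
X = patsy.dmatrix("bs(climate_index, df=5, degree=3)", data, return_type="dataframe")

# non-stationary 0.95 quantile (high annual non-exceedance probability)
fit = QuantReg(data["q_max"], X).fit(q=0.95)
data["q95_estimate"] = fit.predict(X)
print(data.sort_values("climate_index")[["climate_index", "q95_estimate"]].head())
```

Because the quantile depends on the covariate, the estimated design value changes with the climate index instead of being a single stationary number.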
An entropic framework for modeling economies
NASA Astrophysics Data System (ADS)
Caticha, Ariel; Golan, Amos
2014-08-01
We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources. It captures both the welfare state of the economy as well as the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.
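A toy version of the entropy maximization step can be sketched as follows: a discrete set of allocation microstates, expected-value constraints standing in for supply and demand information, and Lagrange multipliers that play the role of prices. The states, targets, and interpretation are hypothetical illustrations, not the paper's economy.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical discrete microstates: each row gives the amounts of two goods held
# in that state. Constraints fix the expected holdings of each good.
states = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], dtype=float)
targets = np.array([0.9, 0.7])       # required expected holdings (supply/demand data)

def dual(lmbda):
    """Dual (log-partition) function of the maximum-entropy problem; its minimizer
    gives the Lagrange multipliers, interpreted here as prices."""
    logZ = np.log(np.sum(np.exp(-states @ lmbda)))
    return logZ + targets @ lmbda

res = minimize(dual, x0=np.zeros(2), method="BFGS")
lmbda = res.x
p = np.exp(-states @ lmbda)
p /= p.sum()

print("prices (Lagrange multipliers):", lmbda)
print("state probabilities:", np.round(p, 3))
print("check expected holdings:", p @ states)   # should reproduce the targets
```

The maximizing distribution is of exponential form in the constrained quantities, and the multipliers emerge endogenously from the constraints, which is the sense in which prices are generated by the economy in this framework.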
Investigation of wing crack formation with a combined phase-field and experimental approach
NASA Astrophysics Data System (ADS)
Lee, Sanghyun; Reber, Jacqueline E.; Hayman, Nicholas W.; Wheeler, Mary F.
2016-08-01
Fractures that propagate off of weak slip planes are known as wing cracks and often play important roles in both tectonic deformation and fluid flow across reservoir seals. Previous numerical models have produced the basic kinematics of wing crack openings but generally have not been able to capture fracture geometries seen in nature. Here we present both a phase-field modeling approach and a physical experiment using gelatin for wing crack formation. By treating the fracture surfaces as diffusive zones instead of as discontinuities, the phase-field model does not require consideration of unpredictable rock properties or stress inhomogeneities around crack tips. It is shown by benchmarking the models with physical experiments that the numerical assumptions in the phase-field approach do not affect the final model predictions of wing crack nucleation and growth. With this study, we demonstrate that it is feasible to implement the formation of wing cracks in large-scale phase-field reservoir models.
SW-846 Test Method 1340: In Vitro Bioaccessibility Assay for Lead in Soil
Describes assay procedures written on the assumption that they will be performed by analysts who are formally trained in at least the basic principles of chemical analysis and in the use of the subject technology.
Small Molecule Docking from Theoretical Structural Models
NASA Astrophysics Data System (ADS)
Novoa, Eva Maria; de Pouplana, Lluis Ribas; Orozco, Modesto
Structural approaches to rational drug design rely on the basic assumption that pharmacological activity requires, as a necessary but not sufficient condition, the binding of a drug to one or several cellular targets, proteins in most cases. The traditional paradigm assumes that drugs that interact only with a single cellular target are specific and accordingly have few secondary effects, while promiscuous molecules are more likely to generate undesirable side effects. However, current examples indicate that efficient drugs are often able to interact with several biological targets [1], and in fact some dirty drugs, such as chlorpromazine, dextromethorphan, and ibogaine, exhibit desired pharmacological properties [2]. These considerations highlight the tremendous difficulty of designing small molecules that both have satisfactory ADME properties and are able to interact with a limited set of target proteins with high affinity, while avoiding undesirable interactions with other proteins. In this complex and challenging scenario, computer simulations emerge as the basic tool to guide medicinal chemists during the drug discovery process.
NASA Astrophysics Data System (ADS)
Reid, J.; Polasky, S.; Hawthorne, P.
2014-12-01
Sustainable development requires providing for human well-being by meeting basic demands for food, energy and consumer goods and services, all while maintaining an environment capable of sustaining the provisioning of those demands for future generations. Failure to meet the basic needs of human well-being is not an ethically viable option, and strategies exist for doubling agricultural production and providing energy and goods for a growing population. However, the question is: at what cost to environmental quality? We developed an integrated modeling approach to test strategies for meeting multiple objectives within the limits of the earth system. We use scenarios to explore a range of assumptions on socio-economic factors like population growth, per capita income and technological change; food systems factors like food waste, production intensification and expansion, and meat demand; and technological developments in energy efficiency and wastewater treatment. We use these scenarios to test the conditions under which the simultaneous goals of sustainable development can be met.
Text extraction via an edge-bounded averaging and a parametric character model
NASA Astrophysics Data System (ADS)
Fan, Jian
2003-01-01
We present a deterministic text extraction algorithm that relies on three basic assumptions: color/luminance uniformity of the interior region, closed boundaries of sharp edges and the consistency of local contrast. The algorithm is basically independent of the character alphabet, text layout, font size and orientation. The heart of this algorithm is an edge-bounded averaging for the classification of smooth regions that enhances robustness against noise without sacrificing boundary accuracy. We have also developed a verification process to clean up the residue of incoherent segmentation. Our framework provides a symmetric treatment for both regular and inverse text. We have proposed three heuristics for identifying the type of text from a cluster consisting of two types of pixel aggregates. Finally, we have demonstrated the advantages of the proposed algorithm over adaptive thresholding and block-based clustering methods in terms of boundary accuracy, segmentation coherency, and capability to identify inverse text and separate characters from background patches.
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
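The two approaches can be contrasted numerically in a few lines, as below. The calibration data are synthetic and the simple straight-line instrument response is an assumption made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# calibration experiment: known standards x, instrument readings y (hypothetical)
x_std = np.linspace(1.0, 10.0, 10)
y_obs = 2.0 + 0.9 * x_std + rng.normal(0, 0.05, x_std.size)

# classical approach: forward regression y = a + b*x, then invert to estimate x
b, a = np.polyfit(x_std, y_obs, 1)
def classical_estimate(y_new):
    return (y_new - a) / b

# reverse regression: treat the standards as the response, x = c + d*y
d, c = np.polyfit(y_obs, x_std, 1)
def reverse_estimate(y_new):
    return c + d * y_new

y_new = 6.5   # a reading from the calibrated instrument in routine use
print("classical (inverse) estimate:", classical_estimate(y_new))
print("reverse-regression estimate: ", reverse_estimate(y_new))
```

The two estimates are numerically close for well-behaved data; the statistical issue the paper raises is that reverse regression avoids the inversion step but treats the error-free standards as the response, which strains the usual regression assumptions.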
A New Framework for Cumulus Parametrization - A CPT in action
NASA Astrophysics Data System (ADS)
Jakob, C.; Peters, K.; Protat, A.; Kumar, V.
2016-12-01
The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how the close collaboration of the observational, theoretical and model development communities in the spirit of the climate process teams can lead to significant progress in long-standing issues in climate modelling while preserving the freedom of individual groups in pursuing their specific implementation of an agreed framework.
A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace
NASA Astrophysics Data System (ADS)
Kruskopf, Ari; Visuri, Ville-Valtteri
2017-12-01
In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag it is the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
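To illustrate the general idea of Gibbs-energy minimization under element-balance constraints (without the PGE method or the MQM and interaction-parameter activity models used in the paper), the sketch below equilibrates a small ideal-mixture system. The species set and the standard chemical potentials are placeholders, not assessed thermochemical data.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1873.0                       # gas constant [J/mol/K], temperature [K]

# Hypothetical species and standard chemical potentials at T (placeholders only)
species = ["CO", "CO2", "O2"]
mu0 = np.array([-350e3, -600e3, -150e3])   # J/mol, illustrative values
# element-balance matrix: rows = elements (C, O), columns = species
A = np.array([[1, 1, 0],                   # carbon
              [1, 2, 2]], dtype=float)     # oxygen
b_tot = np.array([1.0, 1.6])               # fixed moles of C and O in the reaction zone

def gibbs(n):
    n = np.maximum(n, 1e-12)               # keep logarithms defined near the bounds
    x = n / n.sum()
    return np.sum(n * (mu0 + R * T * np.log(x)))   # ideal-mixture Gibbs energy

cons = {"type": "eq", "fun": lambda n: A @ n - b_tot}
res = minimize(gibbs, x0=np.array([0.6, 0.4, 0.1]), bounds=[(0, None)] * 3,
               constraints=cons, method="SLSQP")
print(dict(zip(species, np.round(res.x, 4))), " G =", round(res.fun, 1), "J")
```

The equilibrium composition is whatever mole-number vector minimizes the total Gibbs energy while conserving every element, which is the core calculation that any such converter model repeats for each reaction zone and time step.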
Hydrogen isotope retention in beryllium for tokamak plasma-facing applications
NASA Astrophysics Data System (ADS)
Anderl, R. A.; Causey, R. A.; Davis, J. W.; Doerner, R. P.; Federici, G.; Haasz, A. A.; Longhurst, G. R.; Wampler, W. R.; Wilson, K. L.
Beryllium has been used as a plasma-facing material to effect substantial improvements in plasma performance in the Joint European Torus (JET), and it is planned as a plasma-facing material for the first wall (FW) and other components of the International Thermonuclear Experimental Reactor (ITER). The interaction of hydrogenic ions, and charge-exchange neutral atoms from plasmas, with beryllium has been studied in recent years with widely varying interpretations of results. In this paper we review experimental data regarding hydrogenic atom inventories in experiments pertinent to tokamak applications and show that with some very plausible assumptions, the experimental data appear to exhibit rather predictable trends. A phenomenon observed in high ion-flux experiments is the saturation of the beryllium surface such that inventories of implanted particles become insensitive to increased flux and to continued implantation fluence. Methods for modeling retention and release of implanted hydrogen in beryllium are reviewed and an adaptation is suggested for modeling the saturation effects. The TMAP4 code used with these modifications has succeeded in simulating experimental data taken under saturation conditions where codes without this feature have not. That implementation also works well under more routine conditions where the conventional recombination-limited release model is applicable. Calculations of tritium inventory and permeation in the ITER FW during the basic performance phase (BPP) using both the conventional recombination model and the saturation effects assumptions show a difference of several orders of magnitude in both inventory and permeation rate to the coolant.
Unsteady flow model for circulation-control airfoils
NASA Technical Reports Server (NTRS)
Rao, B. M.
1979-01-01
An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation-control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using the CC airfoil airloads program or the Theodorsen hard flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and the iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program appears to work well within the basic assumption of incompressible flow.
Modelling of the luminescent properties of nanophosphor coatings with different porosity
NASA Astrophysics Data System (ADS)
Kubrin, R.; Graule, T.
2016-10-01
Coatings of Y2O3:Eu nanophosphor with an effective refractive index of 1.02 were obtained by flame aerosol deposition (FAD). High-pressure cold compaction decreased the layer porosity from 97.3 to 40 vol % and brought about dramatic changes in the photoluminescent performance. Modelling of the interdependence between the quantum yield, the decay time of luminescence, and the porosity of the nanophosphor films required a few basic simplifying assumptions. We confirmed that the properties of porous nanostructured coatings are most appropriately described by the nanocrystal cavity model of the radiative decay. All known effective medium equations resulted in seemingly underestimated values of the effective refractive index. While the best fit was obtained with the linear permittivity mixing rule, the influence of further effects, previously not accounted for, could not be excluded. We discuss the peculiarities in the optical response of nanophosphors and suggest directions for future research.
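The linear permittivity mixing rule mentioned above is easy to state concretely: the effective permittivity is the volume-weighted average of the constituent permittivities, and the effective refractive index is its square root. A minimal sketch follows, assuming a refractive index of roughly 1.93 for dense Y2O3 (an assumed bulk value, not a figure from the article):

```python
import numpy as np

def n_eff_linear_permittivity(n_solid, porosity, n_pore=1.0):
    """Effective refractive index from the linear permittivity mixing rule:
    eps_eff = f_solid * eps_solid + f_pore * eps_pore, with eps = n**2."""
    f_solid = 1.0 - porosity
    eps_eff = f_solid * n_solid**2 + porosity * n_pore**2
    return np.sqrt(eps_eff)

# Porosities taken from the abstract; n ~ 1.93 for dense Y2O3 is an assumption.
for porosity in (0.973, 0.40):
    print(f"porosity {porosity:.3f}: n_eff ~ {n_eff_linear_permittivity(1.93, porosity):.3f}")
```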
Kinetics of carbon clustering in detonation of high explosives: Does theory match experiment?
NASA Astrophysics Data System (ADS)
Velizhanin, Kirill; Watkins, Erik; Dattelbaum, Dana; Gustavsen, Richard; Aslam, Tariq; Podlesak, David; Firestone, Millicent; Huber, Rachel; Ringstrand, Bryan; Willey, Trevor; Bagge-Hansen, Michael; Hodgin, Ralph; Lauderbach, Lisa; van Buuren, Tony; Sinclair, Nicholas; Rigg, Paulo; Seifert, Soenke; Gog, Thomas
2017-06-01
Chemical reactions in the detonation of carbon-rich high explosives yield carbon clusters as major constituents of the products. Efforts to model carbon clustering as a diffusion-limited irreversible coagulation of carbon clusters go back to the seminal paper by Shaw and Johnson. However, the first direct experimental observations of the kinetics of clustering yielded cluster growth one to two orders of magnitude slower than theoretical predictions. Multiple efforts were undertaken to test and revise the basic assumptions of the model in order to achieve better agreement with experiment. We discuss our very recent direct experimental observations of carbon clustering dynamics and demonstrate that these new results are in much better agreement with the modified Shaw-Johnson model. The implications of this improved agreement for our present understanding of detonation carbon clustering processes, and possible ways to bring theory and experiment even closer, are discussed.
REVIEWS OF TOPICAL PROBLEMS: Nonlinear dynamics of the brain: emotion and cognition
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Muezzinoglu, M. K.
2010-07-01
Experimental investigations of neural system functioning and brain activity are typically based on the assumption that perceptions, emotions, and cognitive functions can be understood by analyzing steady-state neural processes and static tomographic snapshots. The new approaches discussed in this review are based on the analysis of transient processes and metastable states. Transient dynamics is characterized by two basic properties, structural stability and information sensitivity. The ideas and methods that we discuss provide an explanation for the occurrence of and successive transitions between metastable states observed in experiments, and offer new approaches to behavior analysis. Models of the emotional and cognitive functions of the brain are suggested. The mathematical object that represents the observed transient brain processes in the phase space of the model is a structurally stable heteroclinic channel. The possibility of using the suggested models to construct a quantitative theory of some emotional and cognitive functions is illustrated.
Modeling the natural UV irradiation and comparative UV measurements at Moussala BEO (BG)
NASA Astrophysics Data System (ADS)
Tyutyundzhiev, N.; Angelov, Ch; Lovchinov, K.; Nitchev, Hr; Petrov, M.; Arsov, T.
2018-03-01
Studying and modeling the impact of natural UV irradiation on the human population is of significant importance for human activity and economics. The sharp increase in environmental problems, such as extraordinary temperature changes, solar irradiation abnormalities, and icy rains, raises the question of developing novel means of assessing and predicting potential UV effects. In this paper, we discuss new UV irradiation modeling based on recent real-time measurements at the Moussala Basic Environmental Observatory (BEO) on Moussala Peak (2925 m ASL) in Rila Mountain, Bulgaria, and highlight the development and initial validation of portable embedded devices for UV-A and UV-B monitoring using open-source software architecture, narrow bandpass UV sensors, and the popular Arduino controllers. Despite the high temporal resolution of the VIS and UV irradiation measurements, the results obtained reveal the need for new assumptions in order to minimize the discrepancy with available databases.
Hydration of nonelectrolytes in binary aqueous solutions
NASA Astrophysics Data System (ADS)
Rudakov, A. M.; Sergievskii, V. V.
2010-10-01
Literature data on the thermodynamic properties of binary aqueous solutions of nonelectrolytes that show negative deviations from Raoult's law due largely to the contribution of the hydration of the solute are briefly surveyed. Attention is focused on simulating the thermodynamic properties of solutions using equations of the cluster model. It is shown that the model is based on the assumption that there exists a distribution of stoichiometric hydrates over hydration numbers. In terms of the theory of ideal associated solutions, the equations for activity coefficients, osmotic coefficients, vapor pressure, and excess thermodynamic functions (volume, Gibbs energy, enthalpy, entropy) are obtained in analytical form. Basic parameters in the equations are the hydration numbers of the nonelectrolyte (the mathematical expectation of the distribution of hydrates) and the dispersions of the distribution. It is concluded that the model equations adequately describe the thermodynamic properties of a wide range of nonelectrolytes partly or completely soluble in water.
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in…
Psychotherapy research needs theory. Outline for an epistemology of the clinical exchange.
Salvatore, Sergio
2011-09-01
This paper provides an analysis of a basic assumption grounding clinical research: the ontological autonomy of psychotherapy, based on the idea that the clinical exchange is sufficiently distinguished from other social objects (i.e. exchange between teacher and pupils, or between buyer and seller, or interaction during dinner, and so forth). A criticism of such an assumption is discussed together with the proposal of a different epistemological interpretation, based on the distinction between communicative dynamics and the process of psychotherapy: psychotherapy is a goal-oriented process based on the general dynamics of human communication. Theoretical and methodological implications are drawn from such a view: it allows further sources of knowledge to be integrated within clinical research (i.e. those coming from other domains of analysis of human communication); it also enables a more abstract definition of the psychotherapy process to be developed, leading to innovative views of classical critical issues, like the specific-nonspecific debate. The final part of the paper is devoted to presenting a model of human communication, the Semiotic Dialogical Dialectic Theory, which is meant as the framework for the analysis of psychotherapy.
Klinger, Christen M.; Ramirez-Macias, Inmaculada; Herman, Emily K.; Turkewitz, Aaron P.; Field, Mark C.; Dacks, Joel B.
2016-01-01
With advances in DNA sequencing technology, it is increasingly common and tractable to informatically look for genes of interest in the genomic databases of parasitic organisms and infer cellular states. Assignment of a putative gene function based on homology to functionally characterized genes in other organisms, though powerful, relies on the implicit assumption of functional homology, i.e. that orthology indicates conserved function. Eukaryotes reveal a dazzling array of cellular features and structural organization, suggesting a concomitant diversity in their underlying molecular machinery. Significantly, examples of novel functions for pre-existing or new paralogues are not uncommon. Do these examples undermine the basic assumption of functional homology, especially in parasitic protists, which are often highly derived? Here we examine the extent to which functional homology exists between organisms spanning the eukaryotic lineage. By comparing membrane trafficking proteins between parasitic protists and traditional model organisms, where direct functional evidence is available, we find that function is indeed largely conserved between orthologues, albeit with significant adaptation arising from the unique biological features within each lineage. PMID:27444378
On a viable first-order formulation of relativistic viscous fluids and its applications to cosmology
NASA Astrophysics Data System (ADS)
Disconzi, Marcelo M.; Kephart, Thomas W.; Scherrer, Robert J.
We consider a first-order formulation of relativistic fluids with bulk viscosity based on a stress-energy tensor introduced by Lichnerowicz. Choosing a barotropic equation-of-state, we show that this theory satisfies basic physical requirements and, under the further assumption of vanishing vorticity, that the equations of motion are causal, both in the case of a fixed background and when the equations are coupled to Einstein's equations. Furthermore, Lichnerowicz's proposal does not fit into the general framework of first-order theories studied by Hiscock and Lindblom, and hence their instability results do not apply. These conclusions apply to the full-fledged nonlinear theory, without any equilibrium or near equilibrium assumptions. Similarities and differences between the approach explored here and other theories of relativistic viscosity, including the Mueller-Israel-Stewart formulation, are addressed. Cosmological models based on the Lichnerowicz stress-energy tensor are studied. As the topic of (relativistic) viscous fluids is also of interest outside the general relativity and cosmology communities, such as, for instance, in applications involving heavy-ion collisions, we make our presentation largely self-contained.
Perkel, R L
1996-03-01
Managed care presents physicians with potential ethical dilemmas different from dilemmas in traditional fee-for-service practice. The ethical assumptions of managed care are explored, with special attention to the evolving dual responsibilities of physicians as patient advocates and as entrepreneurs. A number of proposals are described that delineate issues in support of and in opposition to managed care. Through an understanding of how to apply basic ethics principles to managed care participation, physicians may yet hold on to the basic ethic of the fiduciary doctor-patient relationship.
Hazards and occupational risk in hard coal mines - a critical analysis of legal requirements
NASA Astrophysics Data System (ADS)
Krause, Marcin
2017-11-01
This publication concerns the problems of occupational safety and health in hard coal mines, the basic elements of which are the mining hazards and the occupational risk. The work includes a comparative analysis of selected provisions of general and industry-specific law regarding the analysis of hazards and occupational risk assessment. Based on a critical analysis of legal requirements, basic assumptions regarding the practical guidelines for occupational risk assessment in underground coal mines have been proposed.
ERIC Educational Resources Information Center
Collins, Michael
1989-01-01
Describes a Canadian curriculum development project; analyzes underlying policy assumptions. Advocates involvement of prison educators and inmates in the process if curriculum is to meet the educational needs of inmates. (Author/LAM)
Computer Applications in Teaching and Learning.
ERIC Educational Resources Information Center
Halley, Fred S.; And Others
Some examples of the usage of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programming skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…
Probabilistic Simulation of Territorial Seismic Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratta, Alessandro; Corbi, Ileana
2008-07-08
The paper focuses on a stochastic process for the prediction of seismic scenarios over a territory, developed by means of some basic assumptions in the procedure and by elaborating the fundamental parameters recorded during ground motions that occurred in a seismic area.
Elements of a Research Report.
ERIC Educational Resources Information Center
Schurter, William J.
This guide for writing research or technical reports discusses eleven basic elements of such reports and provides examples of "good" and "bad" wordings. These elements are the title, problem statement, purpose statement, need statement, hypothesis, assumptions, procedures, limitations, terminology, conclusion and recommendations. This guide is…
Killeen, Peter R.
1995-01-01
The mechanics of behavior developed by Killeen (1994) is extended to deal with deprivation and satiation and with recovery of arousal at the beginning of sessions. The extended theory is validated against satiation curves and within-session changes in response rates. Anomalies, such as (a) the positive correlation between magnitude of an incentive and response rates in some contexts and a negative correlation in other contexts and (b) the greater prominence of incentive effects when magnitude is varied within the session rather than between sessions, are explained in terms of the basic interplay of drive and incentive motivation. The models are applied to data from closed economies in which changes of satiation levels play a key role in determining the changes in behavior. Relaxation of various assumptions leads to closed-form models for response rates and demand functions in these contexts, ones that show reasonable accord with the data and reinforce arguments for unit price as a controlling variable. The central role of deprivation level in this treatment distinguishes it from economic models. It is argued that traditional experiments should be redesigned to reveal basic principles, that ecologic experiments should be redesigned to test the applicability of those principles in more natural contexts, and that behavioral economics should consist of the applications of these principles to economic contexts, not the adoption of economic models as alternatives to behavioral analysis. PMID:16812776
Dynamics of a Tularemia Outbreak in a Closely Monitored Free-Roaming Population of Wild House Mice
Dobay, Akos; Pilo, Paola; Lindholm, Anna K.; Origgi, Francesco; Bagheri, Homayoun C.; König, Barbara
2015-01-01
Infectious disease outbreaks can be devastating because of their sudden occurrence, as well as the complexity of monitoring and controlling them. Outbreaks in wildlife are even more challenging to observe and describe, especially when small animals or secretive species are involved. Modeling such infectious disease events is relevant to investigating their dynamics and is critical for decision makers to accomplish outbreak management. Tularemia, caused by the bacterium Francisella tularensis, is a potentially lethal zoonosis. Of the few animal outbreaks that have been reported in the literature, only those affecting zoo animals have been closely monitored. Here, we report the first estimation of the basic reproduction number R0 of an outbreak in wildlife caused by F. tularensis using quantitative modeling based on a susceptible-infected-recovered framework. We applied that model to data collected during an extensive investigation of an outbreak of tularemia caused by F. tularensis subsp. holarctica (also designated as type B) in a closely monitored, free-roaming house mouse (Mus musculus domesticus) population in Switzerland. Based on our model and assumptions, the best estimated basic reproduction number R0 of the current outbreak is 1.33. Our results suggest that tularemia can cause severe outbreaks in small rodents. We also concluded that the outbreak self-exhausted in approximately three months without administering antibiotics. PMID:26536232
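As an illustration of the susceptible-infected-recovered framework behind the reported estimate, the sketch below integrates a minimal SIR model whose ratio beta/gamma equals the published R0 of 1.33. The population size, infectious period, and initial conditions are hypothetical choices, not values from the study:

```python
import numpy as np
from scipy.integrate import odeint

N = 500                  # mice in the population (assumed)
gamma = 1.0 / 7.0        # recovery rate, i.e. a 7-day infectious period (assumed)
R0 = 1.33                # basic reproduction number reported in the abstract
beta = R0 * gamma        # transmission rate implied by R0 = beta / gamma

def sir(y, t):
    S, I, R = y
    new_infections = beta * S * I / N
    return -new_infections, new_infections - gamma * I, gamma * I

t = np.linspace(0.0, 365.0, 366)                 # days
S, I, R = odeint(sir, (N - 2.0, 2.0, 0.0), t).T
print(f"peak prevalence near day {t[I.argmax()]:.0f}; "
      f"cumulative infections after a year ~ {R[-1]:.0f} of {N}")
```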
Ghasemizadeh, Reza; Hellweger, Ferdinand; Butscher, Christoph; Padilla, Ingrid; Vesper, Dorothy; Field, Malcolm; Alshawabkeh, Akram
2013-01-01
Karst systems have a high degree of heterogeneity and anisotropy, which makes them behave very differently from other aquifers. Slow seepage through the rock matrix and fast flow through conduits and fractures result in a high variation in spring response to precipitation events. Contaminant storage occurs in the rock matrix and epikarst, but contaminant transport occurs mostly along preferential pathways at locations that are typically inaccessible, which makes modeling of karst systems challenging. Computer models for understanding and predicting hydraulics and contaminant transport in aquifers make assumptions about the distribution and hydraulic properties of geologic features that may not always apply to karst aquifers. This paper reviews the basic concepts, mathematical descriptions, and modeling approaches for karst systems. The North Coast Limestone aquifer system of Puerto Rico (USA) is introduced as a case study to illustrate and discuss the application of groundwater models in karst aquifer systems to evaluate aquifer contamination. PMID:23645996
Micro- and meso-scale pore structure in mortar in relation to aggregate content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Yun, E-mail: yun.gao@ugent.be; De Schutter, Geert; Ye, Guang
2013-10-15
Mortar is often viewed as a three-phase composite consisting of aggregate, bulk paste, and an interfacial transition zone (ITZ). However, this description is inconsistent with experimental findings because of the basic assumption that larger pores are only present within the ITZ. In this paper, we use backscattered electron (BSE) imaging to investigate the micro- and meso-scale structure of mortar with varying aggregate content. The results indicate that larger pores are present not only within the ITZ but also within areas far from aggregates. This phenomenon is discussed in detail based on a series of analytical calculations, such as the effective water-binder ratio and the inter-aggregate spacing. We developed a modified computer model that includes a two-phase structure for bulk paste. This model interprets previous mercury intrusion porosimetry data very well. Highlights: Based on BSE, we examine the HCSS model. We develop the HCSS-DBLB model. We use the modified model to interpret the MIP data.
Sawan, Mouna; Jeon, Yun-Hee; Chen, Timothy F
2018-03-01
Psychotropic medicines are commonly used in nursing homes, despite marginal clinical benefits and association with harm in the elderly. Organizational culture is proposed as a factor explaining the high-level use of psychotropic medicines. Schein describes three levels of culture: artifacts, espoused values, and basic assumptions. This integrative review aimed to investigate the facets and role of organizational culture in the use of psychotropic medicines in nursing homes. Five databases were searched for qualitative, quantitative, and mixed method empirical studies up to 13 February 2017. Articles were included if they examined an aspect of organizational culture according to Schein's theory and the use of psychotropic medicines in nursing homes for the management of behavioral and sleep disturbances in residents. Article screening and data extraction were performed independently by one reviewer and checked by the research team. The integrative review method, an approach similar to the method of constant comparison analysis was utilized for data analysis. Twenty-four studies met the inclusion criteria: 13 used quantitative methods, 9 used qualitative methods, 1 was quasi-qualitative, and 1 used mixed methods. Included studies were found to only address two aspects of organizational culture in relation to the use of psychotropic medicines: artifacts and espoused values. No studies addressed the basic assumptions, the unsaid taken-for-granted beliefs, which provide explanations for in/consistencies between the ideal use of psychotropic medicines and the actual use of psychotropic medicines. Previous studies suggest that organizational culture influences the use of psychotropic medicines in nursing homes; however, what is known is descriptive of culture only at the surface level, that is the artifacts and espoused values. Hence, future research that explains the impact of the basic assumptions of culture on the use of psychotropic medicines is important.
Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie
2016-01-01
Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematical and computer modeling of language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in the research on language competition modeling. First, comprehensive mathematical analysis is absent in most studies of language competition models. Next, most language competition models are based on the assumption that one language in the model is stronger than the other. These studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of two languages. A third issue is that many studies produce an evolutionary outcome in which the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and the basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made of the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
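To make the ecological baseline concrete, the sketch below integrates the classical two-species Lotka-Volterra competition model that the paper builds on; it is not the authors' "ecology-society" model. The growth rates and competition coefficients are illustrative assumptions, chosen so that two well-matched languages coexist rather than one driving the other extinct:

```python
import numpy as np
from scipy.integrate import odeint

r = np.array([0.03, 0.03])   # intrinsic growth rates of the speaker populations (assumed)
K = np.array([1.0, 1.0])     # carrying capacities, normalised (assumed)
a12 = a21 = 0.6              # competition coefficients; values below 1 permit coexistence

def competition(x, t):
    x1, x2 = x
    dx1 = r[0] * x1 * (1.0 - (x1 + a12 * x2) / K[0])
    dx2 = r[1] * x2 * (1.0 - (x2 + a21 * x1) / K[1])
    return dx1, dx2

t = np.linspace(0.0, 1500.0, 3001)
x = odeint(competition, (0.10, 0.05), t)
print("long-run speaker fractions:", np.round(x[-1], 3))   # both languages persist
```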
Survival analysis in hematologic malignancies: recommendations for clinicians
Delgado, Julio; Pereira, Arturo; Villamor, Neus; López-Guillermo, Armando; Rozman, Ciril
2014-01-01
The widespread availability of statistical packages has undoubtedly helped hematologists worldwide in the analysis of their data, but has also led to the inappropriate use of statistical methods. In this article, we review some basic concepts of survival analysis and also make recommendations about how and when to perform each particular test using SPSS, Stata and R. In particular, we describe a simple way of defining cut-off points for continuous variables and the appropriate and inappropriate uses of the Kaplan-Meier method and Cox proportional hazard regression models. We also provide practical advice on how to check the proportional hazards assumption and briefly review the role of relative survival and multiple imputation. PMID:25176982
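As a concrete companion to the recommendations above, here is a minimal product-limit (Kaplan-Meier) estimator written in plain NumPy rather than SPSS, Stata or R; the follow-up times and relapse indicators are hypothetical:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; `event` is 1 for an observed event, 0 for censoring."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):        # distinct event times
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

# Hypothetical follow-up in months (1 = relapse observed, 0 = censored).
follow_up = [6, 7, 10, 15, 19, 25, 26, 32, 40, 42]
relapsed  = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
for t, s in kaplan_meier(follow_up, relapsed):
    print(f"S({t:4.0f} months) = {s:.3f}")
```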
CFD Investigation of Pollutant Emission in Can-Type Combustor Firing Natural Gas, LNG and Syngas
NASA Astrophysics Data System (ADS)
Hasini, H.; Fadhil, SSA; Mat Zian, N.; Om, NI
2016-03-01
A CFD investigation of the flow, combustion process, and pollutant emission for natural gas, liquefied natural gas, and syngas of different compositions is carried out. The combustor is a can-type combustor commonly used in thermal power plant gas turbines. The investigation emphasizes the comparison of pollutant emissions, in particular CO2 and NOx, between the different fuels. The numerical calculation of the basic flow and combustion process is done within the framework of ANSYS Fluent with appropriate model assumptions. Predictions of pollutant species concentrations at the combustor exit show a significant reduction of CO2 and NOx for syngas combustion compared to conventional natural gas and LNG combustion.
Trujillo, Caleb; Cooper, Melanie M; Klymkowsky, Michael W
2012-01-01
Biological systems, from the molecular to the ecological, involve dynamic interaction networks. To examine student thinking about networks we used graphical responses, since they are easier to evaluate for implied, but unarticulated, assumptions. Senior college-level molecular biology students were presented with simple molecular-level scenarios; surprisingly, most students failed to articulate the basic assumptions needed to generate reasonable graphical representations, and their graphs often contradicted their explicit assumptions. We then developed a tiered Socratic tutorial characterized by leading questions (prompts) designed to provoke metacognitive reflection. When applied in a group or individual setting, there was clear improvement in targeted areas. Our results highlight the promise of using graphical responses and Socratic prompts in a tutorial context as both a formative assessment for students and an informative feedback system for instructors, in part because graphical responses are relatively easy to evaluate for implied, but unarticulated, assumptions. Copyright © 2011 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Resnick, Lauren B.; And Others
This paper discusses a radically different set of assumptions to improve educational outcomes for disadvantaged students. It is argued that disadvantaged children, when exposed to carefully organized thinking-oriented instruction, can acquire the traditional basic skills in the process of reasoning and solving problems. The paper is presented in…
Measurement of Inequality: The Gini Coefficient and School Finance Studies.
ERIC Educational Resources Information Center
Lows, Raymond L.
1984-01-01
Discusses application of the "Lorenz Curve" (a graphical representation of the concentration of wealth) with the "Gini Coefficient" (an index of inequality) to measure social inequality in school finance studies. Examines the basic assumptions of these measures and suggests a minor reconception. (MCG)
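For readers who want to see the index itself, the Gini coefficient can be computed directly from the Lorenz curve as one minus twice the area under it. A small sketch with hypothetical per-pupil expenditures follows:

```python
import numpy as np

def gini(values):
    """Gini coefficient from the Lorenz curve (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(values, dtype=float))
    lorenz = np.insert(np.cumsum(x) / x.sum(), 0, 0.0)        # Lorenz curve ordinates
    area = np.sum((lorenz[1:] + lorenz[:-1]) / 2.0) / len(x)  # trapezoidal area under the curve
    return 1.0 - 2.0 * area

# Hypothetical per-pupil expenditures across eight districts (illustrative only).
spending = [4200, 4500, 4700, 5100, 5600, 6400, 7800, 9900]
print(f"Gini coefficient = {gini(spending):.3f}")
```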
Beyond the Virtues-Principles Debate.
ERIC Educational Resources Information Center
Keat, Marilyn S.
1992-01-01
Indicates basic ontological assumptions in the virtues-principles debate in moral philosophy, noting Aristotle's and Kant's fundamental ideas about morality and considering a hermeneutic synthesis of theories. The article discusses what acceptance of the synthesis might mean in the theory and practice of moral pedagogy, offering examples of…
The Structuring Principle: Political Socialization and Belief Systems
ERIC Educational Resources Information Center
Searing, Donald D.; And Others
1973-01-01
Assesses the significance of data on childhood political learning to political theory by testing the "structuring principle," considered one of the central assumptions of political socialization research. This principle asserts that "basic orientations acquired during childhood structure the later learning of specific issue beliefs." The…
ERIC Educational Resources Information Center
Hastings, Elizabeth
1981-01-01
The author outlines the experiences of disability and demonstrates that generally unpleasant experiences are the direct result of a basic and false assumption on the part of society. Experiences of the disabled are discussed in areas the author categorizes as exclusion or segregation, deprivation, prejudice, poverty, frustration, and…
Some Remarks on the Theory of Political Education. German Studies Notes.
ERIC Educational Resources Information Center
Holtmann, Antonius
This theoretical discussion explores pedagogical assumptions of political education in West Germany. Three major methodological orientations are discussed: the normative-ontological, empirical-analytical, and dialectical-historical. The author recounts the aims, methods, and basic presuppositions of each of these approaches. Topics discussed…
Assessment of the Natural Environment.
ERIC Educational Resources Information Center
Cantrell, Mary Lynn; Cantrell, Robert P.
1985-01-01
Basic assumptions of an ecological-behavioral view of assessing behavior disordered students are reviewed along with a proposed method for ecological analysis and specific techniques for measuring ecological variables (such as environmental units, behaviors of significant others, and behavioral expectations). The use of such information in program…
Sherlock Holmes as a Social Scientist.
ERIC Educational Resources Information Center
Ward, Veronica; Orbell, John
1988-01-01
Presents a way of teaching the scientific method through studying the adventures of Sherlock Holmes. Asserting that Sherlock Holmes used the scientific method to solve cases, the authors construct Holmes' method through excerpts from novels featuring his adventures. Discusses basic assumptions, paradigms, theory building, and testing. (SLM)
Basic principles of respiratory function monitoring in ventilated newborns: A review.
Schmalisch, Gerd
2016-09-01
Respiratory monitoring during mechanical ventilation provides a real-time picture of patient-ventilator interaction and is a prerequisite for lung-protective ventilation. Nowadays, measurements of airflow, tidal volume and applied pressures are standard in neonatal ventilators. The measurement of lung volume during mechanical ventilation by tracer gas washout techniques is still under development. The clinical use of capnography, although well established in adults, has not been embraced by neonatologists because of technical and methodological problems in very small infants. While the ventilatory parameters are well defined, the calculation of other physiological parameters is based upon specific assumptions which are difficult to verify. Incomplete knowledge of the theoretical background of these calculations and their limitations can lead to incorrect interpretations with clinical consequences. Therefore, the aim of this review was to describe the basic principles and the underlying assumptions of currently used methods for respiratory function monitoring in ventilated newborns and to highlight methodological limitations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lectures on Dark Matter Physics
NASA Astrophysics Data System (ADS)
Lisanti, Mariangela
Rotation curve measurements from the 1970s provided the first strong indication that a significant fraction of matter in the Universe is non-baryonic. In the intervening years, a tremendous amount of progress has been made on both the theoretical and experimental fronts in the search for this missing matter, which we now know constitutes nearly 85% of the Universe's matter density. This series of lectures provides an introduction to the basics of dark matter physics. The lectures are geared toward the advanced undergraduate or graduate student interested in pursuing research in high-energy physics. The primary goal is to build an understanding of how observations constrain the assumptions that can be made about the astro- and particle physics properties of dark matter. The lectures begin by delineating the basic assumptions that can be inferred about dark matter from rotation curves. A detailed discussion of thermal dark matter follows, motivating Weakly Interacting Massive Particles, as well as lighter-mass alternatives. As an application of these concepts, the phenomenology of direct and indirect detection experiments is discussed in detail.
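The rotation-curve argument mentioned above can be quantified with the standard Newtonian relation M(<r) = v^2 r / G: a flat rotation curve implies an enclosed mass that keeps growing linearly with radius. The circular speed and radii below are illustrative assumptions, not numbers from the lectures:

```python
G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

def enclosed_mass(v_circ, r_kpc):
    """Newtonian mass enclosed within radius r implied by a circular speed v."""
    return v_circ**2 * r_kpc / G

# A flat curve (v ~ 220 km/s, assumed) gives M(<r) proportional to r, far in excess
# of what the visible disk alone can supply; this is the classic dark matter inference.
for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc -> M(<r) ~ {enclosed_mass(220.0, r):.2e} Msun")
```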
Testing the basic assumption of the hydrogeomorphic approach to assessing wetland functions.
Hruby, T
2001-05-01
The hydrogeomorphic (HGM) approach for developing "rapid" wetland function assessment methods stipulates that the variables used are to be scaled based on data collected at sites judged to be the best at performing the wetland functions (reference standard sites). A critical step in the process is to choose the least altered wetlands in a hydrogeomorphic subclass to use as a reference standard against which other wetlands are compared. The basic assumption made in this approach is that wetlands judged to have had the least human impact have the highest level of sustainable performance for all functions. The levels at which functions are performed in these least altered wetlands are assumed to be "characteristic" for the subclass and "sustainable." Results from data collected in wetlands in the lowlands of western Washington suggest that the assumption may not be appropriate for this region. Teams developing methods for assessing wetland functions did not find that the least altered wetlands in a subclass had a range of performance levels that could be identified as "characteristic" or "sustainable." Forty-four wetlands in four hydrogeomorphic subclasses (two depressional subclasses and two riverine subclasses) were rated by teams of experts on the severity of their human alterations and on the level of performance of 15 wetland functions. An ordinal scale of 1-5 was used to quantify alterations in water regime, soils, vegetation, buffers, and contributing basin. Performance of functions was judged on an ordinal scale of 1-7. Relatively unaltered wetlands were judged to perform individual functions at levels that spanned all of the seven possible ratings in all four subclasses. The basic assumption of the HGM approach, that the least altered wetlands represent "characteristic" and "sustainable" levels of functioning that are different from those found in altered wetlands, was not confirmed. Although the intent of the HGM approach is to use level of functioning as a metric to assess the ecological integrity or "health" of the wetland ecosystem, the metric does not seem to work in western Washington for that purpose.
Factors influencing the thermally-induced strength degradation of B/Al composites
NASA Technical Reports Server (NTRS)
Dicarlo, J. A.
1983-01-01
Literature data related to the thermally-induced strength degradation of B/Al composites were examined in the light of fracture theories based on reaction-controlled fiber weakening. Under the assumption of a parabolic time-dependent growth for the interfacial reaction product, a Griffith-type fracture model was found to yield simple equations whose predictions were in good agreement with data for boron fiber average strength and for B/Al axial fracture strain. The only variables in these equations were the time and temperature of the thermal exposure and an empirical factor related to fiber surface smoothness prior to composite consolidation. Such variables as fiber diameter and aluminum alloy composition were found to have little influence. The basic and practical implications of the fracture model equations are discussed. Previously announced in STAR as N82-24297
Estimating indices of range shifts in birds using dynamic models when detection is imperfect
Clement, Matthew J.; Hines, James E.; Nichols, James D.; Pardieck, Keith L.; Ziolkowski, David J.
2016-01-01
There is intense interest in basic and applied ecology about the effect of global change on current and future species distributions. Projections based on widely used static modeling methods implicitly assume that species are in equilibrium with the environment and that detection during surveys is perfect. We used multiseason correlated detection occupancy models, which avoid these assumptions, to relate climate data to distributional shifts of Louisiana Waterthrush in the North American Breeding Bird Survey (BBS) data. We summarized these shifts with indices of range size and position and compared them to the same indices obtained using more basic modeling approaches. Detection rates during point counts in BBS surveys were low, and models that ignored imperfect detection severely underestimated the proportion of area occupied and slightly overestimated mean latitude. Static models indicated Louisiana Waterthrush distribution was most closely associated with moderate temperatures, while dynamic occupancy models indicated that initial occupancy was associated with diurnal temperature ranges and colonization of sites was associated with moderate precipitation. Overall, the proportion of area occupied and mean latitude changed little during the 1997–2013 study period. Near-term forecasts of species distribution generated by dynamic models were more similar to subsequently observed distributions than forecasts from static models. Occupancy models incorporating a finite mixture model on detection – a new extension to correlated detection occupancy models – were better supported and may reduce bias associated with detection heterogeneity. We argue that replacing phenomenological static models with more mechanistic dynamic models can improve projections of future species distributions. In turn, better projections can improve biodiversity forecasts, management decisions, and understanding of global change biology.
NASA Technical Reports Server (NTRS)
Matney, Mark
2011-01-01
A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
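As an example of the kind of baseline these assumptions imply, a circular Kepler orbit over a nonrotating Earth spends its time at latitudes distributed as f(lat) = cos(lat) / (pi * sqrt(sin^2 i - sin^2 lat)) for |lat| < i, peaking sharply near the orbital inclination. A short sketch follows; the inclination value is illustrative:

```python
import numpy as np

def latitude_density(lat_deg, incl_deg):
    """Probability density (per radian of latitude) of the sub-satellite point for a
    circular Kepler orbit of inclination i over a nonrotating Earth."""
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    inc = np.radians(incl_deg)
    out = np.zeros_like(lat)
    ok = np.abs(lat) < inc
    out[ok] = np.cos(lat[ok]) / (np.pi * np.sqrt(np.sin(inc)**2 - np.sin(lat[ok])**2))
    return out

lats = np.array([-60.0, -40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
print(np.round(latitude_density(lats, 51.6), 3))   # density rises toward +/- 51.6 deg
```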
Aspects of fluency in writing.
Uppstad, Per Henning; Solheim, Oddny Judith
2007-03-01
The notion of 'fluency' is most often associated with spoken-language phenomena such as stuttering. The present article investigates the relevance of considering fluency in writing. The basic argument for raising this question is empirical: it follows from a focus on difficulties in written and spoken language as manifestations of different problems which should be investigated separately on the basis of their symptoms. Key-logging instruments provide new possibilities for the study of writing. The obvious use of this new technology is to study writing as it unfolds in real time, instead of focusing only on aspects of the end product. A more sophisticated application is to exploit the key-logging instrument in order to test basic assumptions of contemporary theories of spelling. The present study is a dictation task involving words and non-words, intended to investigate spelling in nine-year-old pupils with regard to their mastery of the doubling of consonants in Norwegian. In this study, we report on differences with regard to temporal measures between a group of strong writers and a group of poor ones. On the basis of these pupils' writing behavior, the relevance of the concept of 'fluency' in writing is highlighted. The interpretation of the results questions basic assumptions of the cognitive hypothesis about spelling; the article concludes by hypothesizing a different conception of spelling.
Beresniak, Ariel; Medina-Lara, Antonieta; Auray, Jean Paul; De Wever, Alain; Praet, Jean-Claude; Tarricone, Rosanna; Torbica, Aleksandra; Dupont, Danielle; Lamure, Michel; Duru, Gerard
2015-01-01
Quality-adjusted life-years (QALYs) have been used since the 1980s as a standard health outcome measure for conducting cost-utility analyses, which are often inadequately labeled as 'cost-effectiveness analyses'. This synthetic outcome, which combines the quantity of life lived with its quality expressed as a preference score, is currently recommended as reference case by some health technology assessment (HTA) agencies. While critics of the QALY approach have expressed concerns about equity and ethical issues, surprisingly, very few have tested the basic methodological assumptions supporting the QALY equation so as to establish its scientific validity. The main objective of the ECHOUTCOME European project was to test the validity of the underlying assumptions of the QALY outcome and its relevance in health decision making. An experiment has been conducted with 1,361 subjects from Belgium, France, Italy, and the UK. The subjects were asked to express their preferences regarding various hypothetical health states derived from combining different health states with time durations in order to compare observed utility values of the couples (health state, time) and calculated utility values using the QALY formula. Observed and calculated utility values of the couples (health state, time) were significantly different, confirming that preferences expressed by the respondents were not consistent with the QALY theoretical assumptions. This European study contributes to establishing that the QALY multiplicative model is an invalid measure. This explains why costs/QALY estimates may vary greatly, leading to inconsistent recommendations relevant to providing access to innovative medicines and health technologies. HTA agencies should consider other more robust methodological approaches to guide reimbursement decisions.
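The multiplicative assumption being tested can be written in a few lines: the QALY total is the preference weight of each health state multiplied by the time spent in it, summed over states. The weights and durations below are hypothetical illustrations, not values from the ECHOUTCOME experiment:

```python
# (state, preference weight u, years t); the QALY model assumes value = u * t, summed over states.
health_profile = [
    ("full health",       1.00, 2.0),
    ("moderate problems", 0.70, 3.0),
    ("severe problems",   0.40, 1.5),
]

qalys = sum(u * t for _, u, t in health_profile)
print(f"QALYs under the multiplicative model: {qalys:.2f}")   # 2.0 + 2.1 + 0.6 = 4.70
```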
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
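To fix ideas, the simplest deterministic special case of such a model, one tracer and two sources, reduces to linear mass balance; the Bayesian tools discussed above (MixSIR, SIAR and the unified formulation) add error structures and priors on the proportions. The delta values below are hypothetical:

```python
def two_source_proportion(mixture, source1, source2):
    """Mass balance mixture = p*source1 + (1-p)*source2, solved for p."""
    return (mixture - source2) / (source1 - source2)

# Hypothetical delta-13C values (per mil) for a consumer and two prey sources.
p = two_source_proportion(mixture=-22.0, source1=-27.0, source2=-18.0)
print(f"estimated contribution of source 1: {p:.2f}")   # 0.44
```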
Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido
2016-12-01
Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to the understanding of DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes their predictions for the effect of ADHD on DM and RL, as formalized by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.
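A minimal simulation of the drift-diffusion model makes the reviewed parameters concrete: evidence accumulates at a drift rate toward one of two boundaries separated by a boundary-separation parameter, plus a non-decision time. All parameter values below are illustrative, not estimates from the reviewed studies; lowering the drift rate (as reported for ADHD) produces slower, less accurate responses:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, boundary, ndt=0.3, dt=0.001, n_trials=500):
    """Euler simulation of a basic DDM: evidence starts midway between boundaries
    0 and `boundary`, drifts at rate `drift` with unit-variance noise, and a
    response is made when either boundary is crossed; `ndt` is non-decision time."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = boundary / 2.0, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + rng.normal(0.0, np.sqrt(dt))
            t += dt
        rts.append(t + ndt)
        correct.append(x >= boundary)
    return np.array(rts), np.array(correct)

for v in (2.0, 1.0):      # drift rates; the lower value mimics the reported ADHD effect
    rt, acc = simulate_ddm(drift=v, boundary=1.0)
    print(f"drift {v}: mean RT {rt.mean():.2f} s, accuracy {acc.mean():.2f}")
```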
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
Helicopter Toy and Lift Estimation
ERIC Educational Resources Information Center
Shakerin, Said
2013-01-01
A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
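A back-of-the-envelope version of such an estimate uses the lift formula L = 1/2 * rho * v^2 * A * CL and compares the result with the toy's weight. Every number below is an assumed rough guess, not a measurement from the article:

```python
rho = 1.2                        # air density, kg/m^3
blade_area = 2 * 0.10 * 0.02     # two blades of roughly 10 cm x 2 cm each (assumed), m^2
v_eff = 6.0                      # effective airspeed over the blades, m/s (assumed)
CL = 1.0                         # lift coefficient, order unity (assumed)
mass = 0.005                     # toy mass of about 5 g (assumed)

lift = 0.5 * rho * v_eff**2 * blade_area * CL
weight = mass * 9.81
print(f"lift ~ {1000*lift:.0f} mN vs weight ~ {1000*weight:.0f} mN "
      f"-> sufficient: {lift > weight}")
```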
The Rural School Principalship: Unique Challenges, Opportunities.
ERIC Educational Resources Information Center
Hill, Jonathan
1993-01-01
Presents findings based on author's research and experience as principal in California's Mojave Desert. Five basic characteristics distinguish the rural principalship: lack of an assistant principal or other support staff; assumption of other duties, including central office tasks, teaching, or management of another site; less severe student…
Teacher Education: Of the People, by the People, and for the People.
ERIC Educational Resources Information Center
Clinton, Hillary Rodham
1985-01-01
Effective inservice programs are necessary to ensure that current reforms in education are properly implemented. Inservice programs must meet the needs of both the educational system and educators. Six basic policy assumptions dealing with what is needed in inservice education are discussed. (DF)
School Discipline Disproportionality: Culturally Competent Interventions for African American Males
ERIC Educational Resources Information Center
Simmons-Reed, Evette A.; Cartledge, Gwendolyn
2014-01-01
Exclusionary policies are practiced widely in schools despite being associated with extremely poor outcomes for culturally and linguistically diverse students, particularly African American males with and without disabilities. This article discusses zero tolerance policies, the related research questioning their basic assumptions, and the negative…
Educational Evaluation: Analysis and Responsibility.
ERIC Educational Resources Information Center
Apple, Michael W., Ed.; And Others
This book presents controversial aspects of evaluation and aims at broadening perspectives and insights in the evaluation field. Chapter 1 criticizes modes of evaluation and the basic rationality behind them and focuses on assumptions that have problematic consequences. Chapter 2 introduces concepts of evaluation and examines methods of grading…
General Nature of Multicollinearity in Multiple Regression Analysis.
ERIC Educational Resources Information Center
Liu, Richard
1981-01-01
Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)
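A common diagnostic for the problem described above is the variance inflation factor: each predictor is regressed on the remaining predictors, and VIF = 1/(1 - R^2). The article does not prescribe this particular diagnostic; the sketch and its synthetic data are illustrative:

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of X; values well above ~10 signal problematic collinearity."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Synthetic example: x2 is nearly a copy of x1, so both receive large VIFs.
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)
x3 = rng.normal(size=200)
print(np.round(variance_inflation_factors(np.column_stack([x1, x2, x3])), 1))
```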
Feminism, Communication and the Politics of Knowledge.
ERIC Educational Resources Information Center
Gallagher, Margaret
Recent retrieval of pre-nineteenth century feminist thought provides a telling lesson in the politics of knowledge creation and control. From a feminist perspective, very little research carried out within the critical research paradigm questions the "basic assumptions, conventional wisdom, media myths and the accepted way of doing…
Qualitative Research in Counseling Psychology: Conceptual Foundations
ERIC Educational Resources Information Center
Morrow, Susan L.
2007-01-01
Beginning with calls for methodological diversity in counseling psychology, this article addresses the history and current state of qualitative research in counseling psychology. It identifies the historical and disciplinary origins as well as basic assumptions and underpinnings of qualitative research in general, as well as within counseling…
Cervera, Javier; Manzanares, Jose Antonio; Mafe, Salvador
2015-02-19
We analyze the coupling of model nonexcitable (non-neural) cells assuming that the cell membrane potential is the basic individual property. We obtain this potential on the basis of the inward and outward rectifying voltage-gated channels characteristic of cell membranes. We concentrate on the electrical coupling of a cell ensemble rather than on the biochemical and mechanical characteristics of the individual cells, obtain the map of single cell potentials using simple assumptions, and suggest procedures to collectively modify this spatial map. The response of the cell ensemble to an external perturbation and the consequences of cell isolation, heterogeneity, and ensemble size are also analyzed. The results suggest that simple coupling mechanisms can be significant for the biophysical chemistry of model biomolecular ensembles. In particular, the spatiotemporal map of single cell potentials should be relevant for the uptake and distribution of charged nanoparticles over model cell ensembles and the collective properties of droplet networks incorporating protein ion channels inserted in lipid bilayers.
Transmission Heterogeneity and Autoinoculation in a Multisite Infection Model of HPV
Brouwer, Andrew F.; Meza, Rafael; Eisenberg, Marisa C.
2015-01-01
The human papillomavirus (HPV) is sexually transmitted and can infect oral, genital, and anal sites in the human epithelium. Here, we develop a multisite transmission model that includes autoinoculation, to study HPV and other multisite diseases. Under a homogeneous-contacts assumption, we analyze the basic reproduction number R0, as well as type and target reproduction numbers, for a two-site model. In particular, we find that R0 occupies a space between taking the maximum of next generation matrix terms for same site transmission and taking the geometric average of cross-site transmission terms in such a way that heterogeneity in the same-site transmission rates increases R0 while heterogeneity in the cross-site transmission decreases it. Additionally, autoinoculation adds considerable complexity to the form of R0. We extend this analysis to a heterosexual population, which additionally yields dynamics analogous to those of vector–host models. We also examine how these issues of heterogeneity may affect disease control, using type and target reproduction numbers. PMID:26518265
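The reproduction numbers discussed above come from a next-generation matrix, and R0 is its spectral radius. The sketch below uses hypothetical same-site (diagonal) and cross-site (off-diagonal) entries, chosen only to illustrate the heterogeneity effect the abstract describes: spreading the same total same-site transmission unevenly across the two sites raises R0:

```python
import numpy as np

def spectral_radius(K):
    """R0 as the spectral radius of a next-generation matrix K."""
    return max(abs(np.linalg.eigvals(K)))

# Hypothetical two-site next-generation matrices with the same total same-site
# transmission (diagonal sum 1.3) and identical cross-site terms.
K_even   = np.array([[0.65, 0.30],
                     [0.30, 0.65]])
K_uneven = np.array([[1.10, 0.30],
                     [0.30, 0.20]])

print(f"R0, homogeneous same-site transmission:   {spectral_radius(K_even):.3f}")
print(f"R0, heterogeneous same-site transmission: {spectral_radius(K_uneven):.3f}")
```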
Modeling Manpower and Equipment Productivity in Tall Building Construction Projects
NASA Astrophysics Data System (ADS)
Mudumbai Krishnaswamy, Parthasarathy; Rajiah, Murugasan; Vasan, Ramya
2017-12-01
Tall building construction projects involve two critical resources: manpower and equipment. Their usage, however, varies widely owing to the several factors that affect productivity. Currently, no systematic study for estimating and increasing their productivity is available; what prevails is the use of empirical data, experience from similar projects and assumptions. As tall building projects will only increase to meet emerging demands in ever-shrinking urban spaces, it is imperative to develop scientific productivity models for basic construction activities (concrete, reinforcement, formwork, block work and plastering) that reflect the specific resources used in a mixed environment of manpower and equipment. Data pertaining to 72 tall building projects in India were collected and analyzed. Suitable productivity estimation models were then developed using multiple linear regression analysis and validated using independent field data. It is hoped that the models developed in the study will be useful for quantity surveyors, cost engineers and project managers to estimate resource productivity in tall building projects.
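As an illustration of the kind of multiple linear regression used for such productivity models, the sketch below fits a toy model on synthetic data. The predictors (crew size, pump capacity, floor level) and coefficient values are hypothetical, not the variables used in the study.

```python
import numpy as np

# Hypothetical predictors for a concreting activity: crew size, pump capacity
# (m^3/h) and floor level; the response is productivity (m^3 per crew-day).
rng = np.random.default_rng(1)
n = 72                                     # the study analyzed 72 projects
crew = rng.integers(8, 30, n)
pump = rng.uniform(20, 60, n)
floor = rng.integers(1, 60, n)
prod = 0.8 * crew + 0.5 * pump - 0.15 * floor + rng.normal(0, 2, n)

# Ordinary least squares fit with an intercept term.
X = np.column_stack([np.ones(n), crew, pump, floor])
coef, *_ = np.linalg.lstsq(X, prod, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((prod - pred) ** 2) / np.sum((prod - prod.mean()) ** 2)
print("coefficients:", np.round(coef, 3), "R^2:", round(r2, 3))
```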
Intergenerational resource transfers with random offspring numbers
Arrow, Kenneth J.; Levin, Simon A.
2009-01-01
A problem common to biology and economics is the transfer of resources from parents to children. We consider the issue under the assumption that the number of offspring is unknown and can be represented as a random variable. There are 3 basic assumptions. The first assumption is that a given body of resources can be divided into consumption (yielding satisfaction) and transfer to children. The second assumption is that the parents' welfare includes a concern for the welfare of their children; this is recursive in the sense that the children's welfares include concern for their children and so forth. However, the welfare of a child from a given consumption is counted somewhat differently (generally less) than that of the parent (the welfare of a child is “discounted”). The third assumption is that resources transferred may grow (or decline). In economic language, investment, including that in education or nutrition, is productive. Under suitable restrictions, precise formulas for the resulting allocation of resources are found, demonstrating that, depending on the shape of the utility curve, uncertainty regarding the number of offspring may or may not favor increased consumption. The results imply that wealth (stock of resources) will ultimately have a log-normal distribution. PMID:19617553
Telepresence for space: The state of the concept
NASA Technical Reports Server (NTRS)
Smith, Randy L.; Gillan, Douglas J.; Stuart, Mark A.
1990-01-01
The purpose here is to examine the concept of telepresence critically. To accomplish this goal, first, the assumptions that underlie telepresence and its applications are examined, and second, the issues raised by that examination are discussed. Also, these assumptions and issues are used as a means of shifting the focus in telepresence from development to user-based research. The most basic assumption of telepresence is that the information being provided to the human must be displayed in a natural fashion, i.e., the information should be displayed to the same human sensory modalities, and in the same fashion, as if the person were actually at the remote site. A further fundamental assumption for the functional use of telepresence is that a sense of being present in the work environment will produce superior performance. In other words, that sense of being there would allow the human operator of a distant machine to take greater advantage of his or her considerable perceptual, cognitive, and motor capabilities in the performance of a task than would more limited task-related feedback. Finally, a third fundamental assumption of functional telepresence is that the distant machine under the operator's control must substantially resemble a human in dexterity.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, which will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
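A minimal sketch of the Monte Carlo propagation step (item c above) is given below, assuming a stand-in prediction function and hypothetical parameter distributions; the real Hanford models and pdfs are far more elaborate.

```python
import numpy as np

def toy_transport_model(conductivity, recharge):
    """Stand-in for a groundwater model prediction (e.g., a peak concentration).

    Purely illustrative; the actual site-wide models are far more complex.
    """
    return 100.0 * recharge / conductivity

rng = np.random.default_rng(42)
n = 10_000
# Hypothetical parameter pdfs for the two "important" uncertain parameters.
conductivity = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # m/day
recharge = rng.normal(loc=5.0, scale=1.0, size=n)                    # mm/yr

predictions = toy_transport_model(conductivity, recharge)

# Empirical CCDF of the prediction: P(prediction > threshold).
for threshold in (5, 10, 20):
    print(threshold, np.mean(predictions > threshold))
```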
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
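The following sketch, under a standard equal-variance Gaussian signal detection model (an assumption for illustration, not the paper's exact analysis), shows how decisions placed at the likelihood-ratio criterion produce the mirror pattern: stronger conditions raise hit rates while lowering false-alarm rates.

```python
import numpy as np
from scipy.stats import norm

def hit_fa_rates(d_prime):
    """Hit and false-alarm rates when the response is 'old' iff the likelihood
    ratio f_old(x) / f_new(x) exceeds 1 (equal-variance Gaussian SDT).

    With unit-variance distributions centered at 0 (new) and d' (old), the
    likelihood-ratio = 1 criterion sits at x = d' / 2.
    """
    criterion = d_prime / 2.0
    hit = 1.0 - norm.cdf(criterion, loc=d_prime, scale=1.0)
    fa = 1.0 - norm.cdf(criterion, loc=0.0, scale=1.0)
    return hit, fa

for d in (0.5, 1.0, 2.0):          # weak -> strong study conditions
    print(d, [round(v, 3) for v in hit_fa_rates(d)])
# Stronger conditions show higher hit rates AND lower false-alarm rates:
# the mirror pattern implied by likelihood-ratio decision making.
```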
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M
2010-06-01
Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.
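For context, the Gaussian-diffusion assumption criticized above underlies the usual mono-exponential ADC estimate; a minimal version with hypothetical signal values and b-value is sketched below.

```python
import numpy as np

def adc_from_two_points(s0, sb, b):
    """Apparent diffusion coefficient from signals at b = 0 and b > 0, assuming
    mono-exponential (Gaussian) diffusion: S(b) = S0 * exp(-b * ADC)."""
    return np.log(s0 / sb) / b

# Hypothetical 3He signals (arbitrary units) at b = 0 and b = 1.6 s/cm^2.
print(adc_from_two_points(s0=1.00, sb=0.70, b=1.6))   # ADC in cm^2/s
```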
1983-05-01
in the presence of fillers or without it. The basic assumption made is that the heat of reaction is proportional to the extent of the reaction…
Mathematical Modeling: Are Prior Experiences Important?
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…
Assumptions to the Annual Energy Outlook
2017-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.
Economic Theory and Broadcasting.
ERIC Educational Resources Information Center
Bates, Benjamin J.
Focusing on access to audience through broadcast time, this paper examines the status of research into the economics of broadcasting. The paper first discusses the status of theory in the study of broadcast economics, both as described directly and as it exists in the statement of the basic assumptions generated by prior work and general…
Tiedeman's Approach to Career Development.
ERIC Educational Resources Information Center
Harren, Vincent A.
Basic to Tiedeman's approach to career development and decision making is the assumption that one is responsible for one's own behavior because one has the capacity for choice and lives in a world which is not deterministic. Tiedeman, a cognitive-developmental theorist, views continuity of development as internal or psychological while…
Linking Educational Philosophy with Micro-Level Technology: The Search for a Complete Method.
ERIC Educational Resources Information Center
Januszewski, Alan
Traditionally, educational technologists have not been concerned with social or philosophical questions, and the field does not have a basic educational philosophy. Instead, it is dominated by a viewpoint characterized as "technical rationality" or "technicism"; the most important assumption of this viewpoint is that science…
Network Analysis in Comparative Social Sciences
ERIC Educational Resources Information Center
Vera, Eugenia Roldan; Schupp, Thomas
2006-01-01
This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…
Conservatism in America--What Does it Mean for Teacher Education?
ERIC Educational Resources Information Center
Dolce, Carl J.
The current conflict among opposing sets of cultural ideals is illustrated by several interrelated conditions. The conservative phenomenon is more complex than the traditional liberal-conservative dichotomy would suggest. Changes in societal conditions invite a reexamination of basic assumptions across the broad spectrum of political ideology.…
A Systems Analysis of School Board Action.
ERIC Educational Resources Information Center
Scribner, Jay D.
The basic assumption of the functional-systems theory is that structures fulfill functions in systems and that subsystems operate separately within any type of structure. Relying mainly on Gabriel Almond's paradigm, the author attempts to determine the usefulness of the functional-systems theory in conducting empirical research of school boards.…
Distance-Based and Distributed Learning: A Decision Tool for Education Leaders.
ERIC Educational Resources Information Center
McGraw, Tammy M.; Ross, John D.
This decision tool presents a progression of data collection and decision-making strategies that can increase the effectiveness of distance-based or distributed learning instruction. A narrative and flow chart cover the following steps: (1) basic assumptions, including purpose of instruction, market scan, and financial resources; (2) needs…
A Guide to Curriculum Planning in Mathematics. Bulletin No. 6284.
ERIC Educational Resources Information Center
Chambers, Donald L.; And Others
This guide was written under the basic assumptions that the mathematics curriculum must continuously change and that mathematics is most effectively learned through a spiral approach. Further, it is assumed that the audience will be members of district mathematics curriculum committees. Instructional objectives have been organized to reveal the…
Describes procedures written based on the assumption that they will be performed by analysts who are formally trained in at least the basic principles of chemical analysis and in the use of the subject technology.
Document-Oriented E-Learning Components
ERIC Educational Resources Information Center
Piotrowski, Michael
2009-01-01
This dissertation questions the common assumption that e-learning requires a "learning management system" (LMS) such as Moodle or Blackboard. Based on an analysis of the current state of the art in LMSs, we come to the conclusion that the functionality of conventional e-learning platforms consists of basic content management and…
Moral Development in Higher Education
ERIC Educational Resources Information Center
Liddell, Debora L.; Cooper, Diane L.
2012-01-01
In this article, the authors lay out the basic foundational concepts and assumptions that will guide the reader through the chapters to come as the chapter authors explore "how" moral growth can be facilitated through various initiatives on the college campus. This article presents a brief review of the theoretical frameworks that provide the…
Measuring Protein Interactions by Optical Biosensors
Zhao, Huaying; Boyd, Lisa F.; Schuck, Peter
2017-01-01
This unit gives an introduction to the basic techniques of optical biosensing for measuring equilibrium and kinetics of reversible protein interactions. Emphasis is given to the description of robust approaches that will provide reliable results with few assumptions. How to avoid the most commonly encountered problems and artifacts is also discussed. PMID:28369667
Teaching Literature: Some Honest Doubts.
ERIC Educational Resources Information Center
Rutledge, Donald G.
1968-01-01
The possibility that many English teachers take their subject too seriously should be considered. The assumption that literature can to any degree either improve or adversely affect students is doubtful, but the exclusive study of "great literature" in our secondary schools may invite basic reflections too early: a year's steady diet of "King…
East Europe Report, Political, Sociological and Military Affairs, No. 2219
1983-10-24
takes place in training booths and classrooms. On the way to warrant officer one must take sociology, Russian, basic construction, materials...polemics. I admit that I like this much more than the obligatory hearty kiss on both cheeks along with, of course, the assumption that polemicists have
The Binding Properties of Quechua Suffixes.
ERIC Educational Resources Information Center
Weber, David
This paper sketches an explicitly non-lexicalist application of grammatical theory to Huallaga (Huanuco) Quechua (HgQ). The advantages of applying binding theory to many suffixes that have previously been treated only as objects of the morphology are demonstrated. After an introduction, section 2 outlines basic assumptions about the nature of HgQ…
Creating a Healthy Camp Community: A Nurse's Role.
ERIC Educational Resources Information Center
Lishner, Kris Miller; Bruya, Margaret Auld
This book provides an organized, systematic overview of the basic aspects of health program management, nursing practice, and human relations issues in camp nursing. A foremost assumption is that health care in most camps needs improvement. Good health is dependent upon interventions involving social, environmental, and lifestyle factors that…
Fatherless America: Confronting Our Most Urgent Social Problem.
ERIC Educational Resources Information Center
Blankenhorn, David
The United States is rapidly becoming a fatherless society. Fatherlessness is the leading cause of declining child well-being, providing the impetus behind social problems such as crime, domestic violence, and adolescent pregnancy. Challenging the basic assumptions of opinion leaders in academia and in the media, this book debunks the prevailing…
Teaching Strategy: A New Planet.
ERIC Educational Resources Information Center
O'Brien, Edward L.
1998-01-01
Presents a lesson for middle and secondary school students in which they respond to a hypothetical scenario that enables them to develop a list of basic rights. Expounds that students compare their list of rights to the Universal Declaration of Human Rights in order to explore the assumptions about human rights. (CMK)
Session overview: forest ecosystems
John J. Battles; Robert C. Heald
2004-01-01
The core assumption of this symposium is that science can provide insight to management. Nowhere is this link more formally established than in regard to the science and management of forest ecosystems. The basic questions addressed are integral to our understanding of nature; the applications of this understanding are crucial to effective stewardship of natural...
A Comprehensive Real-World Distillation Experiment
ERIC Educational Resources Information Center
Kazameas, Christos G.; Keller, Kaitlin N.; Luyben, William L.
2015-01-01
Most undergraduate mass transfer and separation courses cover the design of distillation columns, and many undergraduate laboratories have distillation experiments. In many cases, the treatment is restricted to simple column configurations and simplifying assumptions are made so as to convey only the basic concepts. In industry, the analysis of a…
Alternate hosts of Blepharipa pratensis (Meigen)
Paul A. Godwin; Thomas M. Odell
1977-01-01
A current tactic for biological control of the gypsy moth, Lymantria dispar Linnaeus, is to release its parasites in forests susceptible to gypsy moth damage before the gypsy moth arrives. The basic assumption in these anticipatory releases is that the parasites can find and utilize native insects as hosts in the interim. Blepharipa...
Children and Adolescents: Should We Teach Them or Let Them Learn?
ERIC Educational Resources Information Center
Rohwer, William D., Jr.
Research to date has provided too few answers for vital educational questions concerning teaching children or letting them learn. A basic problem is that experimentation usually begins by accepting conventional assumptions about schooling, ignoring experiments that would entail disturbing the ordering of current educational priorities.…
Interpretations and pitfalls in modelling vector-transmitted infections.
Amaku, M; Azevedo, F; Burattini, M N; Coutinho, F A B; Lopez, L F; Massad, E
2015-07-01
In this paper we propose a debate on the role of mathematical models in evaluating control strategies for vector-borne infections. Mathematical models must have their complexity adjusted to their goals, and we have basically two classes of models. At one extreme we have models that are intended to check if our intuition about why a certain phenomenon occurs is correct. At the other extreme, we have models whose goals are to predict future outcomes. These models are necessarily very complex. There are models in between these classes. Here we examine two models, one of each class and study the possible pitfalls that may be incurred. We begin by showing how to simplify the description of a complicated model for a vector-borne infection. Next, we examine one example found in a recent paper that illustrates the dangers of basing control strategies on models without considering their limitations. The model in this paper is of the second class. Following this, we review an interesting paper (a model of the first class) that contains some biological assumptions that are inappropriate for dengue but may apply to other vector-borne infections. In conclusion, we list some misgivings about modelling presented in this paper for debate.
Model Considerations for Memory-based Automatic Music Transcription
NASA Astrophysics Data System (ADS)
Albrecht, Štěpán; Šmídl, Václav
2009-12-01
The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
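A minimal stand-in for the weight-estimation step is sketched below: it fits non-negative weights of library sounds to a recorded spectrum by non-negative least squares. The Bayesian priors discussed in the paper are not reproduced; the synthetic data and the use of nnls as a crude surrogate for a prior are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical magnitude-spectrum "library": each column is one known sound.
n_bins, n_sounds = 64, 5
library = np.abs(rng.normal(size=(n_bins, n_sounds)))

# The recording is a sparse non-negative mix of library sounds plus noise.
true_w = np.array([0.0, 1.2, 0.0, 0.4, 0.0])
recording = library @ true_w + 0.01 * np.abs(rng.normal(size=n_bins))

# Non-negativity acts as a rough stand-in for a prior favoring few active sounds.
w_hat, residual = nnls(library, recording)
print(np.round(w_hat, 2), round(residual, 3))
```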
Development of state and transition model assumptions used in National Forest Plan revision
Eric B. Henderson
2008-01-01
State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...
Assumptions to the annual energy outlook 1999 : with projections to 2020
DOT National Transportation Integrated Search
1998-12-16
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 1999 (AEO99), including general features of the model structure, assumptions concerning energy ...
Assumptions to the annual energy outlook 2000 : with projections to 2020
DOT National Transportation Integrated Search
2000-01-01
This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2000 (AEO2000), including general features of the model structure, assumptions concerning energ...
Assumptions to the annual energy outlook 2001 : with projections to 2020
DOT National Transportation Integrated Search
2000-12-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2001 (AEO2001), including general features of the model structure, assumptions concerning ener...
Assumptions for the annual energy outlook 2003 : with projections to 2025
DOT National Transportation Integrated Search
2003-01-01
This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2003 (AEO2003), including general features of the model structure, assumptions concerning ener...
Cloud-System Resolving Models: Status and Prospects
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncreiff, Mitch
2008-01-01
Cloud-system resolving models (CRMs), which are based on the nonhydrostatic equations of motion and typically have a grid spacing of about a kilometer, originated as cloud-process models in the 1970s. This paper reviews the status and prospects of CRMs across a wide range of issues, such as microphysics and precipitation; interaction between clouds and radiation; and the effects of boundary-layer and surface processes on cloud systems. Since CRMs resolve organized convection, tropical waves and the large-scale circulation, there is the prospect of several advances in the basic knowledge of scale interaction requisite to parameterizing mesoscale processes in climate models. In superparameterization, CRMs represent convection explicitly, replacing many of the assumptions necessary in contemporary parameterization. Global CRMs have been run on an experimental basis, giving prospect to a new generation of weather prediction models within a decade, and of climate models in due course. CRMs play a major role in the retrieval of surface rain and latent heating from satellite measurements. Finally, the enormously wide dynamic range of CRM simulations presents new challenges for model validation against observations.
Linear models for assessing mechanisms of sperm competition: the trouble with transformations.
Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K
2003-01-01
Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
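A sketch of the corrected approach described above, on synthetic data, is given below: offspring counts are modeled directly and both sides of the equation are log-transformed, so a simple fair lottery yields a slope near 1 and an intercept near 0. The variable names, brood size and sperm-number ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pairs = 100

# Hypothetical sperm numbers transferred by two rival males per female.
s1 = rng.integers(1_000, 10_000, n_pairs)
s2 = rng.integers(1_000, 10_000, n_pairs)

# Fair-lottery paternity: each of 50 offspring is sired by male 2 with
# probability s2 / (s1 + s2).
brood = 50
o2 = rng.binomial(brood, s2 / (s1 + s2))
o1 = brood - o2
keep = (o1 > 0) & (o2 > 0)                      # avoid log(0) in extreme broods

# Log-transform both sides: under a simple lottery the fitted slope should be
# close to 1 and the intercept close to 0.
x = np.log(s2[keep] / s1[keep])
y = np.log(o2[keep] / o1[keep])
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 2), round(intercept, 2))
```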
Adaptive hidden Markov model with anomaly States for price manipulation detection.
Cao, Yi; Li, Yuhua; Coleman, Sonya; Belatreche, Ammar; McGinnity, Thomas Martin
2015-02-01
Price manipulation refers to the activities of those traders who use carefully designed trading behaviors to manually push up or down the underlying equity prices for making profits. With increasing volumes and frequency of trading, price manipulation can be extremely damaging to the proper functioning and integrity of capital markets. The existing literature focuses on either empirical studies of market abuse cases or analysis of particular manipulation types based on certain assumptions. Effective approaches for analyzing and detecting price manipulation in real time are yet to be developed. This paper proposes a novel approach, called adaptive hidden Markov model with anomaly states (AHMMAS) for modeling and detecting price manipulation activities. Together with wavelet transformations and gradients as the feature extraction methods, the AHMMAS model caters to price manipulation detection and basic manipulation type recognition. The evaluation experiments conducted on seven stock tick data from NASDAQ and the London Stock Exchange and 10 simulated stock prices by stochastic differential equation show that the proposed AHMMAS model can effectively detect price manipulation patterns and outperforms the selected benchmark models.
O'Malley, Lauren; Korniss, G; Caraco, Thomas
2009-07-01
Both community ecology and conservation biology seek further understanding of factors governing the advance of an invasive species. We model biological invasion as an individual-based, stochastic process on a two-dimensional landscape. An ecologically superior invader and a resident species compete for space preemptively. Our general model includes the basic contact process and a variant of the Eden model as special cases. We employ the concept of a "roughened" front to quantify effects of discreteness and stochasticity on invasion; we emphasize the probability distribution of the front-runner's relative position. That is, we analyze the location of the most advanced invader as the extreme deviation about the front's mean position. We find that a class of models with different assumptions about neighborhood interactions exhibits universal characteristics. That is, key features of the invasion dynamics span a class of models, independently of locally detailed demographic rules. Our results integrate theories of invasive spatial growth and generate novel hypotheses linking habitat or landscape size (length of the invading front) to invasion velocity, and to the relative position of the most advanced invader.
Ross, macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens.
Smith, David L; Battle, Katherine E; Hay, Simon I; Barker, Christopher M; Scott, Thomas W; McKenzie, F Ellis
2012-01-01
Ronald Ross and George Macdonald are credited with developing a mathematical model of mosquito-borne pathogen transmission. A systematic historical review suggests that several mathematicians and scientists contributed to development of the Ross-Macdonald model over a period of 70 years. Ross developed two different mathematical models, Macdonald a third, and various "Ross-Macdonald" mathematical models exist. Ross-Macdonald models are best defined by a consensus set of assumptions. The mathematical model is just one part of a theory for the dynamics and control of mosquito-transmitted pathogens that also includes epidemiological and entomological concepts and metrics for measuring transmission. All the basic elements of the theory had fallen into place by the end of the Global Malaria Eradication Programme (GMEP, 1955-1969) with the concept of vectorial capacity, methods for measuring key components of transmission by mosquitoes, and a quantitative theory of vector control. The Ross-Macdonald theory has since played a central role in development of research on mosquito-borne pathogen transmission and the development of strategies for mosquito-borne disease prevention.
NASA Astrophysics Data System (ADS)
Baloković, M.; Brightman, M.; Harrison, F. A.; Comastri, A.; Ricci, C.; Buchner, J.; Gandhi, P.; Farrah, D.; Stern, D.
2018-02-01
The basic unified model of active galactic nuclei (AGNs) invokes an anisotropic obscuring structure, usually referred to as a torus, to explain AGN obscuration as an angle-dependent effect. We present a new grid of X-ray spectral templates based on radiative transfer calculations in neutral gas in an approximately toroidal geometry, appropriate for CCD-resolution X-ray spectra (FWHM ≥ 130 eV). Fitting the templates to broadband X-ray spectra of AGNs provides constraints on two important geometrical parameters of the gas distribution around the supermassive black hole: the average column density and the covering factor. Compared to the currently available spectral templates, our model is more flexible, and capable of providing constraints on the main torus parameters in a wider range of AGNs. We demonstrate the application of this model using hard X-ray spectra from NuSTAR (3–79 keV) for four AGNs covering a variety of classifications: 3C 390.3, NGC 2110, IC 5063, and NGC 7582. This small set of examples was chosen to illustrate the range of possible torus configurations, from disk-like to sphere-like geometries with column densities below, as well as above, the Compton-thick threshold. This diversity of torus properties challenges the simple assumption of a standard geometrically and optically thick toroidal structure commonly invoked in the basic form of the unified model of AGNs. Finding broad consistency between our constraints and those from infrared modeling, we discuss how the approach from the X-ray band complements similar measurements of AGN structures at other wavelengths.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1998-01-01
A new high resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.
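For reference, the two forms of a scalar conservation law contrasted above can be written as follows (a standard statement, not a derivation specific to this method):

```latex
% Integral (weak) form over a fixed control volume V with boundary \partial V:
\frac{d}{dt}\int_{V} u \, dV \;+\; \oint_{\partial V} \mathbf{f}(u)\cdot\mathbf{n}\, dA \;=\; 0
% Differential (strong) form, valid only where u is smooth:
\frac{\partial u}{\partial t} \;+\; \nabla\cdot\mathbf{f}(u) \;=\; 0
```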
NASA Astrophysics Data System (ADS)
Sharma, T.; Chhabra, S., Jr.; Karmakar, S.; Ghosh, S.
2015-12-01
We have quantified the impacts of historical climate change and Land Use Land Cover (LULC) change on the hydrologic variables of the Indian subcontinent by using the Variable Infiltration Capacity (VIC) mesoscale model at 0.5° spatial resolution and daily temporal resolution. The results indicate that climate change in India has predominant effects on the basic water balance components such as water yield, evapotranspiration and soil moisture. This analysis assumes a naturalised hydrologic cycle, i.e., the impacts of human interventions such as controlled structures (primarily dams, diversions and reservoirs) and water withdrawals are not taken into account. The assumption is unrealistic since there are numerous anthropogenic disturbances which result in large changes in vegetation composition and distribution patterns. These activities can directly or indirectly influence the dynamics of the water cycle, subsequently affecting hydrologic processes like plant transpiration, infiltration, evaporation, runoff and sublimation. Here, we have quantified the human interventions by using the reservoir and irrigation module of the VIC model, which incorporates irrigation schemes, reservoir characteristics and water withdrawals. The impact of human interventions on hydrologic variables in many grids is found to be more pronounced than that of climate change and might be detrimental to water resources at the regional level. This spatial pattern of impacts will help water managers and planners design and site hydrologic structures for sustainable water resources management.
Mathematical analysis of frontal affinity chromatography in particle and membrane configurations.
Tejeda-Mansir, A; Montesinos, R M; Guzmán, R
2001-10-30
The scaleup and optimization of large-scale affinity-chromatographic operations in the recovery, separation and purification of biochemical components is of major industrial importance. The development of mathematical models to describe affinity-chromatographic processes, and the use of these models in computer programs to predict column performance is an engineering approach that can help to attain these bioprocess engineering tasks successfully. Most affinity-chromatographic separations are operated in the frontal mode, using fixed-bed columns. Purely diffusive and perfusion particles and membrane-based affinity chromatography are among the main commercially available technologies for these separations. For a particular application, a basic understanding of the main similarities and differences between particle and membrane frontal affinity chromatography and how these characteristics are reflected in the transport models is of fundamental relevance. This review presents the basic theoretical considerations used in the development of particle and membrane affinity chromatography models that can be applied in the design and operation of large-scale affinity separations in fixed-bed columns. A transport model for column affinity chromatography that considers column dispersion, particle internal convection, external film resistance, finite kinetic rate, plus macropore and micropore resistances is analyzed as a framework for exploring further the mathematical analysis. Such models provide a general realistic description of almost all practical systems. Specific mathematical models that take into account geometric considerations and transport effects have been developed for both particle and membrane affinity chromatography systems. Some of the most common simplified models, based on linear driving-force (LDF) and equilibrium assumptions, are emphasized. Analytical solutions of the corresponding simplified dimensionless affinity models are presented. Particular methods for estimating the parameters that characterize the mass-transfer and adsorption mechanisms in affinity systems are described.
O'Connor, Brian P
2004-02-01
Levels-of-analysis issues arise whenever individual-level data are collected from more than one person from the same dyad, family, classroom, work group, or other interaction unit. Interdependence in data from individuals in the same interaction units also violates the independence-of-observations assumption that underlies commonly used statistical tests. This article describes the data analysis challenges that are presented by these issues and presents SPSS and SAS programs for conducting appropriate analyses. The programs conduct the within-and-between-analyses described by Dansereau, Alutto, and Yammarino (1984) and the dyad-level analyses described by Gonzalez and Griffin (1999) and Griffin and Gonzalez (1995). Contrasts with general multilevel modeling procedures are then discussed.
Molecular dynamics test of the Brownian description of Na(+) motion in water
NASA Technical Reports Server (NTRS)
Wilson, M. A.; Pohorille, A.; Pratt, L. R.
1985-01-01
The present paper provides the results of molecular dynamics calculations on a Na(+) ion in aqueous solution. Attention is given to the sodium-oxygen and sodium-hydrogen radial distribution functions, the velocity autocorrelation function for the Na(+) ion, the autocorrelation function of the force on the stationary ion, and the accuracy of Brownian motion assumptions which are basic to hydrodynamic models of ion dynamics in solution. It is pointed out that the presented calculations provide accurate data for testing theories of ion dynamics in solution. The conducted tests show that it is feasible to calculate Brownian friction constants for ions in aqueous solutions. It is found that for Na(+) under the considered conditions the Brownian mobility is in error by only 60 percent.
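One standard way to obtain a Brownian friction constant from such data is a Green-Kubo-type integral of the force autocorrelation function, xi = (1 / k_B T) * integral of <dF(0) dF(t)> dt. The sketch below applies that recipe to a synthetic force trace standing in for MD output; it is illustrative and not the paper's actual analysis.

```python
import numpy as np

def friction_from_force(force, dt, kBT, max_lag=1000):
    """Estimate the Brownian friction constant from the force on a held ion:
    xi = (1 / kBT) * integral of <dF(0) dF(t)> dt  (Green-Kubo-type relation)."""
    dF = force - force.mean()
    n = len(dF)
    # Force autocorrelation up to max_lag, averaged over time origins.
    acf = np.array([np.dot(dF[: n - k], dF[k:]) / (n - k) for k in range(max_lag)])
    # Integrate only up to the first zero crossing to limit noise accumulation.
    cutoff = np.argmax(acf < 0) if np.any(acf < 0) else max_lag
    return acf[:cutoff].sum() * dt / kBT

# Synthetic stand-in for an MD force trace: exponentially correlated noise with
# correlation time tau, so the exact answer is sigma**2 * tau / kBT = 0.05.
rng = np.random.default_rng(5)
dt, tau, sigma, n = 0.002, 0.05, 1.0, 50_000
a = np.exp(-dt / tau)
f = np.zeros(n)
for i in range(1, n):
    f[i] = a * f[i - 1] + sigma * np.sqrt(1.0 - a**2) * rng.normal()
print(friction_from_force(f, dt, kBT=1.0))
```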
On S.N. Bernstein's derivation of Mendel's Law and 'rediscovery' of the Hardy-Weinberg distribution.
Stark, Alan; Seneta, Eugene
2012-04-01
Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation of a theory of heredity. He began from the premise of stationarity (constancy of type proportions) from the first generation of offspring. This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen's monograph influenced Bernstein is discussed.
Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.
1984-02-01
The assumption of equilibrium between environmental compartments. The statistical extrapolations yielding "safe" doses of various constituents… would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model's primary objective… Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model's assumptions, major components and modes of operation
Experimental Basis for IED Particle Model
NASA Astrophysics Data System (ADS)
Zheng-Johansson, J.
2009-03-01
The internally electrodynamic (IED) particle model is built on three experimental facts: a) electric charges are present in all matter particles, b) an accelerated charge generates electromagnetic (EM) waves according to Maxwell's equations and the Planck energy equation, and c) source motion gives a Doppler effect. A set of well-known basic particle equations has been predicted based on first-principles solutions for the IED particle (e.g. J Phys CS128, 012019, 2008); the equations have long been experimentally validated. A critical review of the key experiments suggests that the IED process underlies these equations not just sufficiently but also necessarily. For example: 1) A free IED electron solution is a plane wave ψ = Ce^i(k_d X - φT), requisite for producing the diffraction fringe in a Davisson-Germer experiment, and also for all basic point-like attributes facilitated by a linear momentum k_d and the model structure. It need not further be a wave packet, which does not produce a diffraction fringe. 2) The radial partial EM waves, hence the total ψ, of an IED electron will, on the basis of both EM theory and experiment (not by assumption), enter two slits at the same time, as is requisite for an electron to interfere with itself as shown in double-slit experiments. 3) On annihilation, an electron converts (from mass m) to a radiation energy φ without an acceleration which is externally observable and yet is requisite by EM theory. So a charge oscillation of frequency φ and its EM waves must regularly be present internal to a normal electron, whence the IED model.
Finite-temperature phase transitions of third and higher order in gauge theories at large N
Nishimura, Hiromichi; Pisarski, Robert D.; Skokov, Vladimir V.
2018-02-15
We study phase transitions in SU(∞) gauge theories at nonzero temperature using matrix models. Our basic assumption is that the effective potential is dominated by double trace terms for the Polyakov loops. As a function of the various parameters, related to terms linear, quadratic, and quartic in the Polyakov loop, the phase diagram exhibits a universal structure. In a large region of this parameter space, there is a continuous phase transition whose order is larger than second. This is a generalization of the phase transition of Gross, Witten, and Wadia (GWW). Depending upon the detailed form of the matrix model, the eigenvalue density and the behavior of the specific heat near the transition differ drastically. Here, we speculate that in the pure gauge theory, although the deconfining transition is thermodynamically of first order, it can nevertheless be conformally symmetric at infinite N.
Zhang, Ruixun; Brennan, Thomas J.; Lo, Andrew W.
2014-01-01
Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072
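A minimal simulation in the spirit of this argument is sketched below, under assumed offspring numbers (exactly 2 for a "safe" behavior; 5 or 0, equally likely, for a "risky" one with higher expected offspring): with idiosyncratic risk the higher-mean gamble dominates, while with systematic risk the safe, effectively risk-averse behavior wins. The specific numbers are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(11)
GENERATIONS = 60

def growth(systematic):
    """Log population size after GENERATIONS generations for a 'risky' lineage
    (5 or 0 offspring, equally likely, so E = 2.5) versus a 'safe' lineage
    (exactly 2 offspring)."""
    log_risky, log_safe = 0.0, 0.0
    for _ in range(GENERATIONS):
        log_safe += np.log(2.0)
        if systematic:
            # Everyone in the lineage draws the same outcome this generation.
            outcome = rng.choice([5.0, 0.0])
            log_risky += np.log(outcome) if outcome > 0 else -np.inf
        else:
            # Independent draws: by the law of large numbers the lineage
            # multiplies by its expected offspring number, 2.5.
            log_risky += np.log(2.5)
    return log_risky, log_safe

print("idiosyncratic:", growth(systematic=False))   # risky lineage wins
print("systematic:   ", growth(systematic=True))    # risky lineage goes extinct
```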
Comparison of smallpox outbreak control strategies using a spatial metapopulation model.
Hall, I M; Egan, J R; Barrass, I; Gani, R; Leach, S
2007-10-01
To determine the potential benefits of regionally targeted mass vaccination as an adjunct to other smallpox control strategies we employed a spatial metapopulation patch model based on the administrative districts of Great Britain. We counted deaths due to smallpox and to vaccination to identify strategies that minimized total deaths. Results confirm that case isolation, and the tracing, vaccination and observation of case contacts can be optimal for control but only for optimistic assumptions concerning, for example, the basic reproduction number for smallpox (R0=3) and smaller numbers of index cases ( approximately 10). For a wider range of scenarios, including larger numbers of index cases and higher reproduction numbers, the addition of mass vaccination targeted only to infected districts provided an appreciable benefit (5-80% fewer deaths depending on where the outbreak started with a trigger value of 1-10 isolated symptomatic individuals within a district).
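The spatial metapopulation model itself is not reproduced here, but the dependence of control adequacy on R0 can be illustrated with a crude branching-process sketch; the tracing fractions, index-case numbers and Poisson offspring assumption below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2024)

def outbreak_size(r0, traced_fraction, n_index=10, max_cases=10_000):
    """Total cases in a crude branching process in which a traced-and-vaccinated
    fraction of contacts cannot transmit further (illustrative sketch only)."""
    effective_r = r0 * (1.0 - traced_fraction)
    active, total = n_index, n_index
    while active > 0 and total < max_cases:
        new_cases = rng.poisson(effective_r, size=active).sum()
        total += new_cases
        active = new_cases
    return total

for r0 in (3, 5):                        # optimistic vs less optimistic R0
    for traced in (0.5, 0.8):            # hypothetical tracing coverage
        sizes = [outbreak_size(r0, traced) for _ in range(100)]
        print(f"R0={r0}, traced={traced}: median size {int(np.median(sizes))}")
```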
Hydraulics of epiphreatic flow of a karst aquifer
NASA Astrophysics Data System (ADS)
Gabrovšek, Franci; Peric, Borut; Kaufmann, Georg
2018-05-01
The nature of epiphreatic flow remains an important research challenge in karst hydrology. This study focuses on the flood propagation along the epiphreatic system of Reka-Timavo system (Kras/Carso Plateau, Slovenia/Italy). It is based on long-term monitoring of basic physical parameters (pressure/level, temperature, specific electric conductivity) of ground water in six active caves belonging to the flow system. The system vigorously responds to flood events, with stage rising >100 m in some of the caves. Besides presenting the response of the system to flood events of different scales, the work focuses on the interpretation of recorded hydrographs in view of the known distribution and size of conduits and basic hydraulic relations. Furthermore, the hydrographs were used to infer the unknown geometry between the observation points. This way, the main flow restrictors, overflow passages and large epiphreatic storages were identified. The assumptions were tested with a hydraulic model, where the inversion procedure was used for an additional parameter optimisation. Time series of temperature and specific electric conductivity were used to assess the apparent velocities of flow between consecutive points.
Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*
Feehan, Dennis M.; Salganik, Matthew J.
2018-01-01
The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
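For concreteness, the basic scale-up estimator referred to above can be written N_H ≈ N_F · Σ y_i / Σ d_i, where y_i is respondent i's reported number of ties to the hidden population, d_i is that respondent's personal network size, and N_F is the frame population size. A minimal sketch with made-up survey numbers:

```python
import numpy as np

def basic_scale_up(ties_to_hidden, degree, frame_size):
    """Basic network scale-up estimate of a hidden population's size:
    N_H ~= frame_size * (sum of respondents' reported ties to the hidden
    population) / (sum of respondents' personal network sizes)."""
    return frame_size * np.sum(ties_to_hidden) / np.sum(degree)

# Hypothetical survey of 5 respondents from the frame population.
ties = np.array([0, 2, 1, 0, 3])
degrees = np.array([250, 300, 150, 400, 500])
print(basic_scale_up(ties, degrees, frame_size=1_000_000))   # ~3,750
```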
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
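Two of the tests commonly evaluated in such studies are easy to run in practice; the sketch below (with simulated groups, not the study's conditions) contrasts the median-centered Levene (Brown-Forsythe) test with Bartlett's test, which is known to be sensitive to non-normality.

```python
import numpy as np
from scipy.stats import levene, bartlett

rng = np.random.default_rng(8)

# Three simulated groups for a one-way ANOVA; the third has a larger variance.
g1 = rng.normal(0, 1.0, 40)
g2 = rng.normal(0, 1.0, 40)
g3 = rng.normal(0, 2.0, 40)

# The median-centered (Brown-Forsythe) variant of Levene's test is a common
# robust default when normality is doubtful; Bartlett's test is shown for
# contrast because it is highly sensitive to non-normality.
stat_bf, p_bf = levene(g1, g2, g3, center="median")
stat_b, p_b = bartlett(g1, g2, g3)
print(round(p_bf, 4), round(p_b, 4))
```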
147Sm-143Nd systematics of Earth are inconsistent with a superchondritic Sm/Nd ratio
Huang, Shichun; Jacobsen, Stein B.; Mukhopadhyay, Sujoy
2013-01-01
The relationship between the compositions of the Earth and chondritic meteorites is at the center of many important debates. A basic assumption in most models for the Earth’s composition is that the refractory elements are present in chondritic proportions relative to each other. This assumption is now challenged by recent 142Nd/144Nd ratio studies suggesting that the bulk silicate Earth (BSE) might have an Sm/Nd ratio 6% higher than chondrites (i.e., the BSE is superchondritic). This has led to the proposal that the present-day 143Nd/144Nd ratio of BSE is similar to that of some deep mantle plumes rather than chondrites. Our reexamination of the long-lived 147Sm-143Nd isotope systematics of the depleted mantle and the continental crust shows that the BSE, reconstructed using the depleted mantle and continental crust, has 143Nd/144Nd and Sm/Nd ratios close to chondritic values. The small difference in the ratio of 142Nd/144Nd between ordinary chondrites and the Earth must be due to a process different from mantle-crust differentiation, such as incomplete mixing of distinct nucleosynthetic components in the solar nebula. PMID:23479630
NASA Astrophysics Data System (ADS)
van der Sluijs, Jeroen P.; Wardekker, J. Arjan
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of, respectively, subsurface hydrogeology, groundwater stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. Linking these models unavoidably involves making model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) groundwater model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so-called 'mineralization reduction factor' (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields rich qualitative insight into model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well-informed recommendations for model improvement.
NASA Astrophysics Data System (ADS)
Lopez-Yglesias, Xerxes
Part I: Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work focuses on the latter two methods of improvement. Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, used questionable assumptions about the charging process, which resulted in skewed observations and biased the proposed dynamics of aerosol particles. These assumptions affect both the ions and particles in the system. Ions are assumed to be point monopoles that have a single characteristic speed rather than following a distribution. Particles are assumed to be perfect conductors that carry up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes. The same revised theory used to model ion charging can also be applied to the flux of neutral vapor-phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral-particle case, are completely ignored or, as is often the case for a permanent-dipole vapor species, strongly underestimated. Comparing our model to these classical models, we determine an "enhancement factor" to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth. Part II: Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λR, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data.
On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
Economics in the future: towards a new paradigm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dopfer, K.
1976-01-01
The main ideas of economics in the future are set out by the editor, Kurt Dopfer, in the introduction: Towards a New Paradigm. The four postulates are: the need for an holistic approach; the need for a long-run view in economics; the need to view economics as an empirical science; and the need to view economics as political economy. In the second part, "Selected Topics", the authors present a detailed account of the future development of the major areas of economics. Jan Tinbergen suggests that economics needs a firm empirical basis if there is to be further development of theories for new or neglected areas such as environmental conservation, income distribution or socio-economic development of less-developed countries. Harvey Leibenstein stresses that economic theory will be robbed of its empirical relevance unless it revises some of its basic assumptions, and he suggests a new "micro-micro-economics". Sir Roy Harrod draws attention to the fact that contemporary economic theory is still overwhelmingly static and that further development in the area of "economic dynamics" is crucial if economics is ever to serve as a basis for economic policy. Gunnar Myrdal delineates a political economy which is based upon an understanding of institutions, relies on empirically well-founded assumptions and pushes itself further within a framework of interdisciplinary research. K. William Kapp shows some of the future possibilities that an interdisciplinary systems approach may offer to future economics and combines them with the concept of "normative planning". Shigeto Tsuru foresees the necessity of a new political economy whose basic features reflect the Chinese rather than the Japanese model of capitalism. 122 notes and references.
NASA Astrophysics Data System (ADS)
Contreras, S.; Baugh, C. M.; Norberg, P.; Padilla, N.
2015-09-01
We demonstrate how the properties of a galaxy depend on the mass of its host dark matter subhalo, using two independent models of galaxy formation. For the cases of stellar mass and black hole mass, the median property value displays a monotonic dependence on subhalo mass. The slope of the relation changes for subhalo masses for which heating by active galactic nuclei becomes important. The median property values are predicted to be remarkably similar for central and satellite galaxies. The two models predict considerable scatter around the median property value, though the size of the scatter is model dependent. There is only modest evolution with redshift in the median galaxy property at a fixed subhalo mass. Properties such as cold gas mass and star formation rate, however, are predicted to have a complex dependence on subhalo mass. In these cases, subhalo mass is not a good indicator of the value of the galaxy property. We illustrate how the predictions in the galaxy property-subhalo mass plane differ from the assumptions made in some empirical models of galaxy clustering by reconstructing the model output using a basic subhalo abundance matching scheme. In its simplest form, abundance matching generally does not reproduce the clustering predicted by the models, typically resulting in an overprediction of the clustering signal. Using the predictions of the galaxy formation model for the correlations between pairs of galaxy properties, the basic abundance matching scheme can be extended to reproduce the model predictions more faithfully for a wider range of galaxy properties. Our results have implications for the analysis of galaxy clustering, particularly for low abundance samples.
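As context for the "basic subhalo abundance matching scheme" mentioned in the abstract, the simplest scatter-free version just pairs ranked abundances. The sketch below is a generic illustration under that assumption, not the authors' code; adding scatter or matching on secondary properties is what the abstract describes as extending the basic scheme.

```python
import numpy as np

def basic_abundance_matching(subhalo_mass, galaxy_property):
    """Scatter-free abundance matching: rank subhaloes by mass and galaxies by
    the chosen property (e.g. stellar mass), then pair them rank by rank so the
    most massive subhalo hosts the galaxy with the largest property value."""
    halo_rank = np.argsort(subhalo_mass)[::-1]      # indices, descending subhalo mass
    prop_sorted = np.sort(galaxy_property)[::-1]    # property values, descending
    assigned = np.empty_like(prop_sorted)
    assigned[halo_rank] = prop_sorted               # most massive halo gets the largest value
    return assigned
```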
Strauß, Jakob Friedrich; Crain, Philip; Schulenburg, Hinrich; Telschow, Arndt
2016-08-01
Most mathematical models on the evolution of virulence are based on epidemiological models that assume parasite transmission follows the mass action principle. In experimental evolution, however, mass action is often violated due to controlled infection protocols. This "theory-experiment mismatch" raises the question of whether there is a need for new mathematical models to accommodate the particular characteristics of experimental evolution. Here, we explore the experimental evolution model system of Bacillus thuringiensis as a parasite and Caenorhabditis elegans as a host. Recent experimental studies with strict control of parasite transmission revealed that one-sided adaptation of B. thuringiensis with non-evolving hosts selects for intermediate or no virulence, sometimes coupled with parasite extinction. In contrast, host-parasite coevolution selects for high virulence and for hosts with strong resistance against B. thuringiensis. In order to explain the empirical results, we propose a new mathematical model that mimics the basic experimental set-up. The key assumptions are: (i) controlled parasite transmission (no mass action), (ii) discrete host generations, and (iii) context-dependent cost of toxin production. Our model analysis revealed the same basic trends as found in the experiments. In particular, we could show that resistant hosts select for highly virulent bacterial strains. Moreover, we found (i) that the evolved level of virulence is independent of the initial level of virulence, and (ii) that the average amount of bacteria ingested significantly affects the evolution of virulence, with fewer bacteria ingested selecting for highly virulent strains. These predictions can be tested in future experiments. This study highlights the usefulness of custom-designed mathematical models in the analysis and interpretation of empirical results from experimental evolution. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
Animal models in myopia research.
Schaeffel, Frank; Feldkaemper, Marita
2015-11-01
Our current understanding of the development of refractive errors, in particular myopia, would be substantially limited had Wiesel and Raviola not discovered by accident that monkeys develop axial myopia as a result of deprivation of form vision. Similarly, if Josh Wallman and colleagues had not found that simple plastic goggles attached to the chicken eye generate large amounts of myopia, the chicken model would perhaps not have become such an important animal model. Contrary to previous assumptions about the mechanisms of myopia, these animal models suggested that eye growth is visually controlled locally by the retina, that an afferent connection to the brain is not essential and that emmetropisation uses more sophisticated cues than just the magnitude of retinal blur. While animal models have shown that the retina can determine the sign of defocus, the underlying mechanism is still not entirely clear. Animal models have also provided knowledge about the biochemical nature of the signal cascade converting the output of retinal image processing to changes in choroidal thickness and scleral growth; however, a critical question was, and still is, can the results from animal models be applied to myopia in children? While the basic findings from chickens appear applicable to monkeys, some fundamental questions remain. If eye growth is guided by visual feedback, why is myopic development not self-limiting? Why does undercorrection not arrest myopic progression even though positive lenses induce myopic defocus, which leads to the development of hyperopia in emmetropic animals? Why do some spectacle or contact lens designs reduce myopic progression and others not? It appears that some major differences exist between animals reared with imposed defocus and children treated with various optical corrections, although without the basic knowledge obtained from animal models, we would be lost in an abundance of untestable hypotheses concerning human myopia. © 2015 Optometry Australia.
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-03-30
C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazards (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). We tested these two assumptions of Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). In Cox's PH model, high CRP increased the risk of death (HR=1.11 per doubling of CRP value, 95% CI: 1.03-1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP.
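For readers who want to reproduce the standard (non-flexible) part of such an analysis, the sketch below fits a Cox model with log2-transformed CRP (so the hazard ratio is per doubling) and runs a Schoenfeld-residual check of the PH assumption. It uses the lifelines library and simulated data; all column names and parameter values are hypothetical, and the flexible time-varying, non-linear model used in the paper is not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# Hypothetical cohort: follow-up time (months), death indicator, CRP (mg/L), age
rng = np.random.default_rng(0)
n = 300
crp = rng.lognormal(mean=2.0, sigma=1.0, size=n)
age = rng.normal(65, 8, n)
hazard = 0.03 * np.exp(0.10 * np.log2(crp) + 0.02 * (age - 65))
t = rng.exponential(1.0 / hazard)
df = pd.DataFrame({"time": np.minimum(t, 36.0),        # administrative censoring at 36 months
                   "death": (t < 36.0).astype(int),
                   "log2_crp": np.log2(crp),
                   "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
print(cph.summary)   # hazard ratio per doubling of CRP, adjusted for age

# Schoenfeld-residual-based test of the proportional-hazards assumption
print(proportional_hazard_test(cph, df, time_transform="rank").summary)
```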
Academic Public Relations Curricula: How They Compare with the Bateman-Cutlip Commission Standards.
ERIC Educational Resources Information Center
McCartney, Hunter P.
To see what effect the 1975 Bateman-Cutlip Commission's recommendations have had on improving public relations education in the United States, 173 questionnaires were sent to colleges or universities with accredited or comprehensive programs in public relations. Responding to five basic assumptions underlying the commission's recommendations,…
Faculty and Student Attitudes about Transfer of Learning
ERIC Educational Resources Information Center
Lightner, Robin; Benander, Ruth; Kramer, Eugene F.
2008-01-01
Transfer of learning is using previous knowledge in novel contexts. While this is a basic assumption of the educational process, students may not always perceive all the options for using what they have learned in different, novel situations. Within the framework of transfer of learning, this study outlines an attitudinal survey concerning faculty…
New Directions in Teacher Education: Foundations, Curriculum, Policy.
ERIC Educational Resources Information Center
Denton, Jon, Ed.; And Others
This publication includes presentations made at the Aikin-Stinnett Lecture Series and follow-up papers sponsored by the Instructional Research Laboratory at Texas A&M University. The papers in this collection focus upon the basic assumptions and conceptual bases of teacher education and the use of research in providing a foundation for…
Perspective Making: Constructivism as a Meaning-Making Structure for Simulation Gaming
ERIC Educational Resources Information Center
Lainema, Timo
2009-01-01
Constructivism has recently gained popularity, although it is not a completely new learning paradigm. Much of the work within e-learning, for example, uses constructivism as a reference "discipline" (explicitly or implicitly). However, some of the work done within the simulation gaming (SG) community discusses what the basic assumptions and…
Looking for Skinner and Finding Freud
ERIC Educational Resources Information Center
Overskeid, Geir
2007-01-01
Sigmund Freud and B. F. Skinner are often seen as psychology's polar opposites. It seems this view is fallacious. Indeed, Freud and Skinner had many things in common, including basic assumptions shaped by positivism and determinism. More important, Skinner took a clear interest in psychoanalysis and wanted to be analyzed but was turned down. His…
Student Teachers' Beliefs about the Teacher's Role in Inclusive Education
ERIC Educational Resources Information Center
Domovic, Vlatka; Vizek Vidovic, Vlasta; Bouillet, Dejana
2017-01-01
The main aim of this research is to examine the basic features of student teachers' professional beliefs about the teacher's role in relation to teaching mainstream pupils and pupils with developmental disabilities. The starting assumption of this analysis is that teacher professional development is largely dependent upon teachers' beliefs about…
Cable in Boston; A Basic Viability Report.
ERIC Educational Resources Information Center
Hauben, Jan Ward; And Others
The viability of urban cable television (CATV) as an economic phenomenon is examined via a case study of its feasibility in Boston, a microcosm of general urban environment. To clarify cable's economics, a unitary concept of viability is used in which all local characteristics, cost assumptions, and growth estimates are structured dynamically as a…
"I Fell off [the Mothering] Track": Barriers to "Effective Mothering" among Prostituted Women
ERIC Educational Resources Information Center
Dalla, Rochelle
2004-01-01
Ecological theory and basic assumptions for the promotion of effective mothering among low-income and working-poor women are applied in relation to a particularly vulnerable population: street-level prostitution-involved women. Qualitative data from 38 street-level prostituted women shows barriers to effective mothering at the individual,…
ERIC Educational Resources Information Center
Pickel, Andreas
2012-01-01
The social sciences rely on assumptions of a unified self for their explanatory logics. Recent work in the new multidisciplinary field of social neuroscience challenges precisely this unproblematic character of the subjective self as basic, well-defined entity. If disciplinary self-insulation is deemed unacceptable, the philosophical challenge…
ERIC Educational Resources Information Center
Pavlik, John V.
2015-01-01
Emerging technologies are fueling a third paradigm of education. Digital, networked and mobile media are enabling a disruptive transformation of the teaching and learning process. This paradigm challenges traditional assumptions that have long characterized educational institutions and processes, including basic notions of space, time, content,…
What Are We Looking For?--Pro Critical Realism in Text Interpretation
ERIC Educational Resources Information Center
Siljander, Pauli
2011-01-01
A visible role in the theoretical discourses on education has been played in the last couple of decades by the constructivist epistemologies, which have questioned the basic assumptions of realist epistemologies. The increased popularity of interpretative approaches especially has put the realist epistemologies on the defensive. Basing itself on…
The Hidden Reason Behind Children's Misbehavior.
ERIC Educational Resources Information Center
Nystul, Michael S.
1986-01-01
Discusses hidden reason theory based on the assumptions that: (1) the nature of people is positive; (2) a child's most basic psychological need is involvement; and (3) a child has four possible choices in life (good somebody, good nobody, bad somebody, or severely mentally ill.) A three step approach for implementing hidden reason theory is…
ERIC Educational Resources Information Center
Burnett, I. Emett, Jr.; Pankake, Anita M.
Although much of the current school reform movement relies on the basic assumption of effective elementary school administration, insufficient effort has been made to synthesize key concepts found in organizational theory and management studies with relevant effective schools research findings. This paper attempts such a synthesis to help develop…
Response: Training Doctoral Students to Be Scientists
ERIC Educational Resources Information Center
Pollio, David E.
2012-01-01
The purpose of this article is to begin framing doctoral training for a science of social work. This process starts by examining two seemingly simple questions: "What is a social work scientist?" and "How do we train social work scientists?" In answering the first question, some basic assumptions and concepts about what constitutes a "social work…
ERIC Educational Resources Information Center
Lotan, Gurit; Ells, Carolyn
2010-01-01
In this article, the authors challenge professionals to re-examine assumptions about basic concepts and their implications in supporting adults with intellectual and developmental disabilities. The authors focus on decisions with significant implications, such as planning transition from school to adult life, changing living environments, and…
A Convergence of Two Cultures in the Implementation of P.L. 94-142.
ERIC Educational Resources Information Center
Haas, Toni J.
The Education for All Handicapped Children Act (PL 94-142) demanded basic changes in the practices, purposes, and institutional structures of schools to accommodate handicapped students, but did not adequately address the differences between general and special educators in expectations, training, or assumptions about the functions of schooling…
From Earth to Space--Advertising Films Created in a Computer-Based Primary School Task
ERIC Educational Resources Information Center
Öman, Anne
2017-01-01
Today, teachers orchestrate computer-based tasks in software applications in Swedish primary schools. Meaning is made through various modes, and multimodal perspectives on literacy have the basic assumption that meaning is made through many representational and communicational resources. The case study presented in this paper has analysed pupils'…
Child Sexual Abuse: Intervention and Treatment Issues. The User Manual Series.
ERIC Educational Resources Information Center
Faller, Kathleen Coulborn
This manual describes professional practices in intervention and treatment of sexual abuse and discusses how to address the problems of sexually abused children and their families. It makes an assumption that the reader has basic information about sexual abuse. The discussion focuses primarily on the child's guardian as the abuser. The manual…
A Comparative Analysis of Selected Mechanical Aspects of the Ice Skating Stride.
ERIC Educational Resources Information Center
Marino, G. Wayne
This study quantitatively analyzes selected aspects of the skating strides of above-average and below-average ability skaters. Subproblems were to determine how stride length and stride rate are affected by changes in skating velocity, to ascertain whether the basic assumption that stride length accurately approximates horizontal movement of the…
Implementing a Redesign Strategy: Lessons from Educational Change.
ERIC Educational Resources Information Center
Basom, Richard E., Jr.; Crandall, David P.
The effective implementation of school redesign, based on a social systems approach, is discussed in this paper. A basic assumption is that the interdependence of system elements has implications for a complex change process. Seven barriers to redesign and five critical issues for successful redesign strategy are presented. Seven linear steps for…
ERIC Educational Resources Information Center
Bossard, James H. S.
2017-01-01
The basic assumption underlying this article is that the really significant changes in human history are those that occur, not in the mechanical gadgets which men use nor in the institutionalized arrangements by which they live, but in their attitudes and in the values which they accept. The revolutions of the past that have had the greatest…
Civility in Politics and Education. Routledge Studies in Contemporary Philosophy
ERIC Educational Resources Information Center
Mower, Deborah, Ed.; Robison, Wade L., Ed.
2011-01-01
This book examines the concept of civility and the conditions of civil disagreement in politics and education. Although many assume that civility is merely polite behavior, it functions to aid rational discourse. Building on this basic assumption, the book offers multiple accounts of civility and its contribution to citizenship, deliberative…
Improving Clinical Teaching: The ADN Experience. Pathways to Practice.
ERIC Educational Resources Information Center
Haase, Patricia T.; And Others
Three Florida associate degree in nursing (ADN) demonstration projects of the Nursing Curriculum Project (NCP) are described, and the history of the ADN program and current controversies are reviewed. In 1976, the NCP of the Southern Regional Education Board issued basic assumptions about the role of the ADN graduate, relating them to client…
Development and Validation of a Clarinet Performance Adjudication Scale
ERIC Educational Resources Information Center
Abeles, Harold F.
1973-01-01
A basic assumption of this study is that there are generally agreed upon performance standards as evidenced by the use of adjudicators for evaluations at contests and festivals. An evaluation instrument was developed to enable raters to measure effectively those aspects of performance that have common standards of proficiency. (Author/RK)
Organize Your School for Improvement
ERIC Educational Resources Information Center
Truby, William F.
2017-01-01
W. Edwards Deming has suggested 96% of organization performance is a function of the organization's structure. He contends only about 4% of an organization's performance is attributable to the people. This is a fundamental difference as most school leaders work with the basic assumption that 80% of a school's performance is related to staff and…
Resegregation in Norfolk, Virginia. Does Restoring Neighborhood Schools Work?
ERIC Educational Resources Information Center
Meldrum, Christina; Eaton, Susan E.
This report reviews school department data and interviews with officials and others involved in the Norfolk (Virginia) school resegregation plan designed to stem White flight and increase parental involvement. The report finds that all the basic assumptions the local community and the court had about the potential benefits of undoing the city's…
An Economic Theory of School Governance.
ERIC Educational Resources Information Center
Rada, Roger D.
Working from the basic assumption that the primary motivation for those involved in school governance is self-interest, this paper develops and discusses 15 hypotheses that form the essential elements of an economic theory of school governance. The paper opens with a review of previous theories of governance and their origins in social science…
ERIC Educational Resources Information Center
Cameron, Kim S.
A way to assess and improve organizational effectiveness is discussed, with a focus on factors that inhibit successful organizational performance. The basic assumption is that it is easier, more accurate, and more beneficial for individuals and organizations to identify criteria of ineffectiveness (faults and weaknesses) than to identify criteria…
Describes procedures written based on the assumption that they will be performed by analysts who are formally trained in at least the basic principles of chemical analysis and in the use of the subject technology.
ERIC Educational Resources Information Center
Camerer, Rudi
2014-01-01
The testing of intercultural competence has long been regarded as the field of psychometric test procedures, which claim to analyse an individual's personality by specifying and quantifying personality traits with the help of self-answer questionnaires and the statistical evaluation of these. The underlying assumption is that what is analysed and…
Lifeboat Counseling: The Issue of Survival Decisions
ERIC Educational Resources Information Center
Dowd, E. Thomas; Emener, William G.
1978-01-01
Rehabilitation counseling, as a profession, needs to look at future world possibilities, especially in light of overpopulation, and be aware that the need may arise for adjusting basic assumptions about human life--from the belief that every individual has a right to a meaningful life to the notion of selecting who shall live. (DTT)
Challenges of Adopting Constructive Alignment in Action Learning Education
ERIC Educational Resources Information Center
Remneland Wikhamn, Björn
2017-01-01
This paper will critically examine how the two influential pedagogical approaches of action-based learning and constructive alignment relate to each other, and how they may differ in focus and basic assumptions. From the outset, they are based on similar underpinnings, with the student and the learning outcomes in the center. Drawing from…
Education in Conflict and Crisis for National Security.
ERIC Educational Resources Information Center
McClelland, Charles A.
A basic assumption is that the level of conflict within and between nations will escalate over the next 50 years. Trying to "muddle through" using the tools and techniques of organized violence may yield national suicide. Therefore, complex conflict resolution skills need to be developed and used by some part of society to quell disorder…
Textbooks as a Possible Influence on Unscientific Ideas about Evolution
ERIC Educational Resources Information Center
Tshuma, Tholani; Sanders, Martie
2015-01-01
While school textbooks are assumed to be written for and used by students, it is widely acknowledged that they also serve a vital support function for teachers, particularly in times of curriculum change. A basic assumption is that biology textbooks are scientifically accurate. Furthermore, because of the negative impact of…
A basic review on the inferior alveolar nerve block techniques
Khalil, Hesham
2014-01-01
The inferior alveolar nerve block is the most common injection technique used in dentistry, and many modifications of the conventional nerve block have been described recently in the literature. The dentist's or surgeon's selection of the best technique depends on many factors, including the success rate and the complications related to the selected technique. Dentists should be aware of the available current modifications of the inferior alveolar nerve block techniques in order to choose effectively between them. Some operators may have difficulty identifying the anatomical landmarks that are useful in applying the inferior alveolar nerve block and rely instead on assumptions as to where the needle should be positioned. Such assumptions can lead to failure, and the failure rate of the inferior alveolar nerve block has been reported to be 20-25%, which is considered very high. In this basic review, the anatomical details of the inferior alveolar nerve will be given, together with a description of both its conventional and modified blocking techniques; in addition, an overview of the complications that may result from the application of this important technique will be provided. PMID:25886095
Brand, Samuel P C; Rock, Kat S; Keeling, Matt J
2016-04-01
Epidemiological modelling has a vital role to play in policy planning and prediction for the control of vectors, and hence the subsequent control of vector-borne diseases. To decide between competing policies requires models that can generate accurate predictions, which in turn requires accurate knowledge of vector natural histories. Here we highlight the importance of the distribution of times between life-history events, using short-lived midge species as an example. In particular we focus on the distribution of the extrinsic incubation period (EIP) which determines the time between infection and becoming infectious, and the distribution of the length of the gonotrophic cycle which determines the time between successful bites. We show how different assumptions for these periods can radically change the basic reproductive ratio (R0) of an infection and additionally the impact of vector control on the infection. These findings highlight the need for detailed entomological data, based on laboratory experiments and field data, to correctly construct the next-generation of policy-informing models.
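The sensitivity of R0 to the assumed EIP distribution can be seen in a classical Ross-Macdonald-style calculation (an illustrative textbook argument, not the paper's model): with adult vector mortality rate μ, the fraction of vectors surviving the EIP, to which R0 is proportional, depends strongly on whether the EIP is treated as fixed or exponentially distributed.

```latex
P_{\text{survive EIP}} =
\begin{cases}
  e^{-\mu\tau}           & \text{fixed EIP of length } \tau,\\[4pt]
  \dfrac{1}{1+\mu\tau}   & \text{exponentially distributed EIP with mean } \tau,
\end{cases}
\qquad R_0 \propto P_{\text{survive EIP}}
```

For illustrative values μ = 0.2 per day and τ = 10 days, the two assumptions give survival fractions of about 0.14 and 0.33, i.e. more than a twofold difference in the implied R0.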
Latent Class Models in action: bridging social capital & Internet usage.
Neves, Barbara Barbosa; Fonseca, Jaime R S
2015-03-01
This paper explores how Latent Class Models (LCM) can be applied in social research, when the basic assumptions of regression models cannot be validated. We examine the usefulness of this method with data collected from a study on the relationship between bridging social capital and the Internet. Social capital is defined here as the resources that are potentially available in one's social ties. Bridging is a dimension of social capital, usually related to weak ties (acquaintances), and a source of instrumental resources such as information. The study surveyed a stratified random sample of 417 inhabitants of Lisbon, Portugal. We used LCM to create the variable bridging social capital, but also to estimate the relationship between bridging social capital and Internet usage when we encountered convergence problems with the logistic regression analysis. We conclude by showing a positive relationship between bridging and Internet usage, and by discussing the potential of LCM for social science research. Copyright © 2014 Elsevier Inc. All rights reserved.
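The abstract does not give the model equations, so as a generic illustration of what a latent class model with binary indicators estimates (class sizes and class-conditional response probabilities), here is a minimal EM sketch; all names and settings are assumptions made for illustration, and this is not the authors' implementation.

```python
import numpy as np

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.
    X: (n_respondents, n_items) array of 0/1 answers."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)               # class sizes
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior class membership of each respondent
        loglik = (X[:, None, :] * np.log(theta)[None] +
                  (1 - X[:, None, :]) * np.log(1 - theta)[None]).sum(axis=2)
        logpost = np.log(pi)[None] + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class sizes and conditional response probabilities
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post   # e.g. pi, theta, post = lca_em(responses)
```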
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
Modeling Episodic Surface Runoff in an Arid Environment
NASA Astrophysics Data System (ADS)
Waichler, S. R.; Wigmosta, M. S.
2003-12-01
Methods were developed for estimating episodic surface runoff in arid eastern Washington, USA. Small (1-10 km2) catchments in this region with mean annual precipitation around 180 mm produce runoff in about half the years, and such events usually occur during winter when a widespread cold snap and possible snow accumulation is followed by warmer temperatures and rainfall. The existence of frozen soil appears to be a key factor, and a moving average of air temperature is an effective predictor of soil temperature. The watershed model DHSVM simulates snow accumulation and ablation reasonably well at a monitoring location, but the same model applied in distributed mode across an 850 km2 basin overpredicts runoff. Inadequate definition of local meteorology appears to limit the accuracy of runoff predictions. However, runoff estimates of sufficient quality to support modeling of long-term groundwater recharge and sediment transport may be obtained by focusing on recurrence intervals and volumes rather than hydrographs. The usefulness of upland watershed modeling for environmental management of the Hanford Site and an adjacent military reservation will likely improve through sensitivity analysis of basic assumptions about the upland water balance.
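The "moving average of air temperature as a predictor of soil temperature" idea lends itself to a very simple screening rule for candidate runoff days (rain or melt on frozen ground). The sketch below is purely illustrative, with an assumed 10-day window and arbitrary thresholds rather than values calibrated in the study.

```python
import numpy as np
import pandas as pd

# Hypothetical daily winter series of air temperature (degC) and precipitation (mm)
idx = pd.date_range("2000-11-01", periods=120, freq="D")
rng = np.random.default_rng(3)
tair = pd.Series(rng.normal(0.0, 6.0, len(idx)), index=idx)
precip = pd.Series(rng.gamma(0.4, 4.0, len(idx)), index=idx)

# Moving average of air temperature as a crude proxy for near-surface soil temperature
soil_proxy = tair.rolling(window=10, min_periods=10).mean()
frozen = soil_proxy < 0.0

# Candidate runoff-producing days: warm, wet days while the ground is still frozen
candidates = frozen & (precip > 5.0) & (tair > 0.0)
print(int(candidates.sum()), "candidate rain-on-frozen-soil days")
```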
ERIC Educational Resources Information Center
Berenson, Mark L.
2013-01-01
There is consensus in the statistical literature that severe departures from the assumptions of regression modeling invalidate its use for purposes of inference. These assumptions are usually evaluated subjectively through visual, graphic displays in a residual analysis, but such an approach, taken alone, may be insufficient…
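The usual visual residual analysis can be complemented with formal diagnostics. A minimal sketch with statsmodels and simulated heteroscedastic data (the tests shown are common choices, not necessarily those evaluated in the article):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5 + 0.2 * x)   # errors grow with x (heteroscedastic)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
resid = fit.resid

print(het_breuschpagan(resid, X))   # heteroscedasticity: (LM stat, LM p, F stat, F p)
print(durbin_watson(resid))         # autocorrelation: values near 2 suggest little
print(jarque_bera(resid))           # normality of residuals: (JB, p, skew, kurtosis)
```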
Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir
2018-01-01
Background Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than other assumptions (up to 20pp and 37pp larger, respectively). Conclusion Different disease progression assumptions on and post-ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only re-initiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
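The paper's sparse Bayesian dictionary-learning estimator is not reproduced here; as a much cruder illustration of the same idea (space-varying noise that can be estimated locally and then segmented), the sketch below computes a robust per-patch noise level and clusters the patches with a Gaussian mixture. All settings are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_noise_map(img, patch=16, n_levels=2):
    """Per-patch robust noise estimate (MAD of a crude high-pass residual),
    then clustering of patches by noise level with a Gaussian mixture."""
    h, w = img.shape
    coords, sigmas = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            resid = block - block.mean()   # crude high-pass; a wavelet detail band works better
            mad = np.median(np.abs(resid - np.median(resid)))
            sigmas.append(1.4826 * mad)
            coords.append((i, j))
    sigmas = np.asarray(sigmas).reshape(-1, 1)
    labels = GaussianMixture(n_components=n_levels, random_state=0).fit_predict(sigmas)
    return np.asarray(coords), sigmas.ravel(), labels

# Example: noise whose standard deviation increases from left to right
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128)) * np.linspace(0.05, 0.5, 128)[None, :]
coords, sigma, labels = local_noise_map(img)
```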
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
Explaining evolution via constrained persistent perfect phylogeny
2014-01-01
Background The perfect phylogeny is an often-used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, for example, back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model makes it possible to efficiently explain data that do not conform to the classical perfect phylogeny model. PMID:25572381
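For contrast with the persistent and constrained variants studied in the paper, the classical perfect phylogeny (infinite sites, no back mutation) can be recognized with the well-known four-gamete test; the sketch below is generic code, not the authors' algorithm.

```python
import numpy as np
from itertools import combinations

def admits_perfect_phylogeny(M):
    """Four-gamete test: a binary character matrix admits a perfect phylogeny
    iff no pair of columns contains all four gametes 00, 01, 10 and 11."""
    M = np.asarray(M)
    for a, b in combinations(range(M.shape[1]), 2):
        if len({tuple(row) for row in M[:, [a, b]]}) == 4:
            return False
    return True

# A character gained and then lost (back mutation) breaks the condition
M = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 0]])
print(admits_perfect_phylogeny(M))   # False -> needs a persistent-type generalization
```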
He, Xin; Frey, Eric C
2006-08-01
We previously developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
NASA Astrophysics Data System (ADS)
Medlyn, B.; Jiang, M.; Zaehle, S.
2017-12-01
There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
Bayesian Methods for the Physical Sciences. Learning from Examples in Astronomy and Physics.
NASA Astrophysics Data System (ADS)
Andreon, Stefano; Weaver, Brian
2015-05-01
Chapter 1: This chapter presents some basic steps for performing a good statistical analysis, all summarized in about one page. Chapter 2: This short chapter introduces the basics of probability theory in an intuitive fashion using simple examples. It also illustrates, again with examples, how to propagate errors and the difference between marginal and profile likelihoods. Chapter 3: This chapter introduces the computational tools and methods that we use for sampling from the posterior distribution. Since all numerical computations, and Bayesian ones are no exception, may end in errors, we also provide a few tips to check that the numerical computation is sampling from the posterior distribution. Chapter 4: Many of the concepts of building, running, and summarizing the results of a Bayesian analysis are described with this step-by-step guide using a basic (Gaussian) model. The chapter also introduces examples using Poisson and Binomial likelihoods, and how to combine repeated independent measurements. Chapter 5: All statistical analyses make assumptions, and Bayesian analyses are no exception. This chapter emphasizes that results depend on data and priors (assumptions). We illustrate this concept with examples where the prior plays greatly different roles, from major to negligible. We also provide some advice on how to look for information useful for sculpting the prior. Chapter 6: In this chapter we consider examples for which we want to estimate more than a single parameter. These common problems include estimating location and spread. We also consider examples that require the modeling of two populations (one we are interested in and a nuisance population) or averaging incompatible measurements. We also introduce quite complex examples dealing with upper limits and with a larger-than-expected scatter. Chapter 7: Rarely is a sample randomly selected from the population we wish to study. Often, samples are affected by selection effects, e.g., easier-to-collect events or objects are over-represented in samples and difficult-to-collect ones are under-represented if not missing altogether. In this chapter we show how to account for non-random data collection to infer the properties of the population from the studied sample. Chapter 8: In this chapter we introduce regression models, i.e., how to fit (regress) one or more quantities against each other through a functional relationship and estimate any unknown parameters that dictate this relationship. Questions of interest include: how to deal with samples affected by selection effects? How does a rich data structure influence the fitted parameters? And what about non-linear multiple-predictor fits, upper/lower limits, measurement errors of different amplitudes and an intrinsic variety in the studied populations or an extra source of variability? A number of examples illustrate how to answer these questions and how to predict the value of an unavailable quantity by exploiting the existence of a trend with another, available, quantity. Chapter 9: This chapter provides some advice on how the careful scientist should perform model checking and sensitivity analysis, i.e., how to answer the following questions: is the considered model at odds with the currently available data (the fitted data), for example because it is over-simplified compared to some specific complexity pointed out by the data? Furthermore, are the data informative about the quantity being measured or are results sensibly dependent on details of the fitted model?
And, finally, what if the assumptions are uncertain? A number of examples illustrate how to answer these questions. Chapter 10: This chapter compares the performance of Bayesian methods against simple, non-Bayesian alternatives, such as maximum likelihood, minimum chi-square, ordinary and weighted least squares, bivariate correlated errors and intrinsic scatter, and robust estimates of location and scale. Performance is evaluated in terms of quality of the prediction, accuracy of the estimates, and fairness and noisiness of the quoted errors. We also focus on three failures of maximum likelihood methods occurring with small samples, with mixtures, and with regressions with errors in the predictor quantity.
Optimizing Experimental Design for Comparing Models of Brain Function
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-01-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
Advanced simulation noise model for modern fighter aircraft
NASA Astrophysics Data System (ADS)
Ikelheimer, Bruce
2005-09-01
NoiseMap currently represents the state of the art for military airfield noise analysis. While this model is sufficient for the current fleet of aircraft, it has limits in its capability to model the new generation of fighter aircraft like the JSF and the F-22. These aircraft's high-powered engines produce noise with significant nonlinear content. Combining this with their ability to vector the thrust means they have noise characteristics that are outside of the basic modeling assumptions of the currently available noise models. Wyle Laboratories, Penn State University, and University of Alabama are in the process of developing a new noise propagation model for the Strategic Environmental Research and Development Program. Source characterization will be through complete spheres (or hemispheres if there is not sufficient data) for each aircraft state (including thrust vector angles). Fixed and rotor wing aircraft will be included. Broadband, narrowband, and pure tone propagation will be included. The model will account for complex terrain and weather effects, as well as the effects of nonlinear propagation. It will be a complete model capable of handling a range of noise sources from small subsonic general aviation aircraft to the latest fighter aircraft like the JSF.
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
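A minimal sketch of how a perceptual-range constraint enters a metric versus a topological interaction rule; the positions, field of view, and range values below are illustrative assumptions, not the measured parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(30, 2))        # positions of 30 individuals
heading = rng.uniform(0, 2 * np.pi, size=30)  # heading angle of each individual

def visible(i, j, max_range, field_of_view):
    """True if j falls inside i's perceptual range and visual coverage."""
    d = pos[j] - pos[i]
    dist = np.hypot(d[0], d[1])
    if dist == 0 or dist > max_range:
        return False
    bearing = np.arctan2(d[1], d[0]) - heading[i]
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
    return abs(bearing) <= field_of_view / 2

def metric_neighbours(i, max_range, fov):
    return [j for j in range(len(pos)) if j != i and visible(i, j, max_range, fov)]

def topological_neighbours(i, k, max_range, fov):
    cand = metric_neighbours(i, max_range, fov)
    cand.sort(key=lambda j: np.linalg.norm(pos[j] - pos[i]))
    return cand[:k]                            # the k nearest visible conspecifics

# "Classic" vs more generous "realistic" sensory assumptions (toy numbers only).
print(len(metric_neighbours(0, max_range=5.0,  fov=np.deg2rad(300))))
print(len(metric_neighbours(0, max_range=15.0, fov=np.deg2rad(330))))
```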
NASA Astrophysics Data System (ADS)
Schmid, Gernot; Hirtl, Rene
2016-06-01
The reference levels and maximum permissible exposure values for magnetic fields that are currently used have been derived from basic restrictions under the assumption of upright standing body models in a standard posture, i.e. with arms laterally down and without contact with metallic objects. Moreover, if anatomical modelling of the body was used at all, the skin was represented as a single homogeneous tissue layer. In the present paper we addressed the possible impacts of posture and skin modelling in scenarios of exposure to a 50 Hz uniform magnetic field on the in situ electric field strength in peripheral tissues, which must be limited in order to avoid peripheral nerve stimulation. We considered different body postures including situations where body parts form large induction loops (e.g. clasped hands) with skin-to-skin and skin-to-metal contact spots and compared the results obtained with a homogeneous single-layer skin model to results obtained with a more realistic two-layer skin representation consisting of a low-conductivity stratum corneum layer on top of a combined layer for the cellular epidermis and dermis. Our results clearly indicated that postures with loops formed of body parts may lead to substantially higher maximum values of induced in situ electric field strengths than in the case of standard postures due to a highly concentrated current density and in situ electric field strength in the skin-to-skin and skin-to-metal contact regions. With a homogeneous single-layer skin, as is used for even the most recent anatomical body models in exposure assessment, the in situ electric field strength may exceed the basic restrictions in such situations, even when the reference levels and maximum permissible exposure values are not exceeded. However, when using the more realistic two-layer skin model the obtained in situ electric field strengths were substantially lower and no violations of the basic restrictions occurred, which can be explained by the current-limiting effect of the low-conductivity stratum corneum layer.
Gagnon, B; Abrahamowicz, M; Xiao, Y; Beauchamp, M-E; MacDonald, N; Kasymjanova, G; Kreisman, H; Small, D
2010-01-01
Background: C-reactive protein (CRP) is gaining credibility as a prognostic factor in different cancers. Cox's proportional hazard (PH) model is usually used to assess prognostic factors. However, this model imposes a priori assumptions, which are rarely tested, that (1) the hazard ratio associated with each prognostic factor remains constant across the follow-up (PH assumption) and (2) the relationship between a continuous predictor and the logarithm of the mortality hazard is linear (linearity assumption). Methods: We tested these two assumptions of the Cox's PH model for CRP, using a flexible statistical model, while adjusting for other known prognostic factors, in a cohort of 269 patients newly diagnosed with non-small cell lung cancer (NSCLC). Results: In the Cox's PH model, high CRP increased the risk of death (HR=1.11 per each doubling of CRP value, 95% CI: 1.03–1.20, P=0.008). However, both the PH assumption (P=0.033) and the linearity assumption (P=0.015) were rejected for CRP, measured at the initiation of chemotherapy, which kept its prognostic value for approximately 18 months. Conclusion: Our analysis shows that flexible modeling provides new insights regarding the value of CRP as a prognostic factor in NSCLC and that Cox's PH model underestimates early risks associated with high CRP. PMID:20234363
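The study uses flexible (spline-based) extensions of Cox's model; as a much simpler sketch of fitting a Cox model and testing the proportional-hazards assumption for a predictor such as log2(CRP), one might use the lifelines package. The data frame, column names, and covariates below are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# Hypothetical survival data: follow-up time, death indicator, log2(CRP), age.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "months":   rng.exponential(18, n).round(1),
    "death":    rng.integers(0, 2, n),
    "log2_crp": rng.normal(3, 1, n),
    "age":      rng.normal(65, 8, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()          # HR per doubling of CRP = exp(coefficient of log2_crp)

# Score test of the proportional-hazards assumption for each covariate.
results = proportional_hazard_test(cph, df, time_transform="rank")
results.print_summary()
```

A rejected test, as reported for CRP in the abstract, is the signal to move to a time-varying or spline-based effect rather than a single constant hazard ratio.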
NASA Astrophysics Data System (ADS)
Bieler, Andre; Altwegg, Kathrin; Balsiger, Hans; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gasc, Sébastien; Gombosi, Tamas; Hansen, Kenneth; Hässig, Myrtha; Huang, Zhenguang; Jäckel, Annette; Jia, Xianzhe; Le Roy, Lena; Mall, Urs A.; Rème, Henri; Rubin, Martin; Tenishev, Valeriy; Tóth, Gábor; Tzou, Chia-Yu; Wurz, Peter
2015-11-01
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results of 67P's neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area on the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that these illumination conditions on the nucleus are the main driver for the gas activity of the comet. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the model and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and January 1 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results we find the total production rate of 67P to be constant between August and November 2014 with a value of about 1 × 1026 molecules s-1.
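The "purely geometric" production proxy described above essentially sums the sun-facing facet areas of the triangulated shape model, optionally weighted by the cosine of the incidence angle as an insolation proxy. A minimal sketch under that simplification (ignoring self-shadowing, which the full calculation includes); the toy tetrahedron stands in for the actual 67P shape model.

```python
import numpy as np

def insolation_proxy(vertices, faces, sun_dir):
    """Sum of sunlit facet areas weighted by the cosine of solar incidence
    (ignores self-shadowing by other facets)."""
    sun = sun_dir / np.linalg.norm(sun_dir)
    total = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)              # outward facet normal, |n| = 2 * area
        cos_inc = n @ sun / np.linalg.norm(n)
        if cos_inc > 0:                          # facet is sunlit
            total += 0.5 * np.linalg.norm(n) * cos_inc
    return total

# Toy tetrahedron with outward-wound faces, standing in for the nucleus mesh.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(insolation_proxy(verts, faces, sun_dir=np.array([1.0, 1.0, 1.0])))
```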
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
Statistical foundations of liquid-crystal theory
Seguin, Brian; Fried, Eliot
2013-01-01
We develop a mechanical theory for systems of rod-like particles. Central to our approach is the assumption that the external power expenditure for any subsystem of rods is independent of the underlying frame of reference. This assumption is used to derive the basic balance laws for forces and torques. By considering inertial forces on par with other forces, these laws hold relative to any frame of reference, inertial or noninertial. Finally, we introduce a simple set of constitutive relations to govern the interactions between rods and find restrictions necessary and sufficient for these laws to be consistent with thermodynamics. Our framework provides a foundation for a statistical mechanical derivation of the macroscopic balance laws governing liquid crystals. PMID:23772091
Haller, Jozsef
2013-04-01
Aggression research was long dominated by the assumption that aggression-related psychopathologies result from the excessive activation of aggression-promoting brain mechanisms. This assumption was recently challenged by findings with models of aggression that mimic etiological factors of aggression-related psychopathologies. Subjects submitted to such procedures show abnormal attack features (mismatch between provocation and response, disregard of species-specific rules, and insensitivity toward the social signals of opponents). We review here 12 such laboratory models and the available human findings on the neural background of abnormal aggression. We focus on the hypothalamus, a region tightly involved in the execution of attacks. Data show that the hypothalamic mechanisms controlling attacks (general activation levels, local serotonin, vasopressin, substance P, glutamate, GABA, and dopamine neurotransmission) undergo etiological factor-dependent changes. Findings suggest that the emotional component of attacks differentiates two basic types of hypothalamic mechanisms. Aggression associated with increased arousal (emotional/reactive aggression) is paralleled by increased mediobasal hypothalamic activation, increased hypothalamic vasopressinergic, but diminished hypothalamic serotonergic neurotransmission. In aggression models associated with low arousal (unemotional/proactive aggression), the lateral but not the mediobasal hypothalamus is over-activated. In addition, the anti-aggressive effect of serotonergic neurotransmission is lost and paradoxical changes were noticed in vasopressinergic neurotransmission. We conclude that there is no single 'neurobiological road' to abnormal aggression: the neural background shows qualitative, etiological factor-dependent differences. Findings obtained with different models should be viewed as alternative mechanisms rather than conflicting data. The relevance of these findings for understanding and treating aggression-related psychopathologies is discussed. This article is part of a Special Issue entitled 'Extrasynaptic ionotropic receptors'. Copyright © 2012 Elsevier Inc. All rights reserved.
Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate
NASA Astrophysics Data System (ADS)
Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah
2007-11-01
This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
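For orientation, the textbook two-subclass CIR estimator (a special case that ignores sampling effort, unlike the generalized model described above) recovers pre-removal abundance from the change in subclass proportions. The sketch below is generic; the numbers are illustrative only.

```python
def cir_abundance(p1, p2, removed_x, removed_total):
    """Classical two-subclass change-in-ratio estimator of pre-removal
    population size: assumes the two subclasses have equal encounter
    probabilities within each survey.
    p1, p2: proportion of the x-subclass before and after the removal.
    removed_x, removed_total: removals of the x-subclass and of all animals."""
    if p1 == p2:
        raise ValueError("subclass proportions must change for CIR to be estimable")
    return (removed_x - p2 * removed_total) / (p1 - p2)

# Example: the antlered proportion drops from 0.40 to 0.25 after a harvest
# removing 300 animals, 240 of them antlered (illustrative numbers).
n_hat = cir_abundance(p1=0.40, p2=0.25, removed_x=240, removed_total=300)
print(round(n_hat))   # estimated pre-harvest population size
```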
Power and Method: Political Activism and Educational Research. Critical Social Thought Series.
ERIC Educational Resources Information Center
Gitlin, Andrew, Ed.
This book scrutinizes some basic assumptions about educational research with the aim that such research may act more powerfully on those persistent and important problems of our schools surrounding issues of race, class, and gender. In particular, the 13 essays in this book examine how power is infused in research by addressing such questions as…
ERIC Educational Resources Information Center
Feinberg, Walter
2006-01-01
This essay explores a disciplinary hybrid, called here, philosophical ethnography. Philosophical ethnography is a philosophy of the everyday and ethnography in the context of intercultural discourse about coordinating meaning, evaluation, norms and action. Its basic assumption is that in the affairs of human beings truth, justice and beauty are…
The Future of Family Business Education in UK Business Schools
ERIC Educational Resources Information Center
Collins, Lorna; Seaman, Claire; Graham, Stuart; Stepek, Martin
2013-01-01
Purpose: This practitioner paper aims to question basic assumptions about management education and to argue that a new paradigm is needed for UK business schools which embraces an oft neglected, yet economically vital, stakeholder group, namely family businesses. It seeks to pose the question of why we have forgotten to teach about family business…
ERIC Educational Resources Information Center
Olympia, Daniel; Farley, Megan; Christiansen, Elizabeth; Pettersson, Hollie; Jenson, William; Clark, Elaine
2004-01-01
While much of the current focus in special education remains on reauthorization of the Individuals with Disabilities Act of 1997, disparities in the identification of children with serious emotional disorders continue to plague special educators and school psychologists. Several years after the issue of social maladjustment and its relationship to…
Locations of Racism in Education: A Speech Act Analysis of a Policy Chain
ERIC Educational Resources Information Center
Arneback, Emma; Quennerstedt, Ann
2016-01-01
This article explores how racism is located in an educational policy chain and identifies how its interpretation changes throughout the chain. A basic assumption is that the policy formation process can be seen as a chain in which international, national and local policies are "links"--separate entities yet joined. With Sweden as the…
ERIC Educational Resources Information Center
Dimitrova, Radosveta; Ferrer-Wreder, Laura; Galanti, Maria Rosaria
2016-01-01
This study evaluated the factorial structure of the Pedagogical and Social Climate in School (PESOC) questionnaire among 307 teachers in Bulgaria. The teacher edition of PESOC consists of 11 scales (i.e., Expectations for Students, Unity Among Teachers, Approach to Students, Basic Assumptions About Students' Ability to Learn, School-Home…
The Education System in Greece. [Revised.
ERIC Educational Resources Information Center
EURYDICE Central Unit, Brussels (Belgium).
The education policy of the Greek government rests on the basic assumption that effective education is a social goal and that every citizen has a right to an education. A brief description of the Greek education system and of the adjustments made to give practical meaning to the provisions on education in the Constitution is presented in the…
Experiences in Rural Mental Health II: Organizing a Low Budget Program.
ERIC Educational Resources Information Center
Hollister, William G.; And Others
Based on a North Carolina feasibility study (1967-73) which focused on development of a pattern for providing comprehensive mental health services to rural people, this second program guide deals with organization of a low-income program budget. Presenting the basic assumptions utilized in the development of a low-budget program in Franklin and…
ERIC Educational Resources Information Center
Gunthorpe, Sydney
2006-01-01
From the assumption that matching a student's learning style with the learning method best suited for the student, it follows that developing courses that correlate learning method with learning style would be more successful for students. Albuquerque Technical Vocational Institute (TVI) in New Mexico has attempted to provide students with more…
Reds, Greens, Yellows Ease the Spelling Blues.
ERIC Educational Resources Information Center
Irwin, Virginia
1971-01-01
This document reports on a color-coding innovation designed to improve the spelling ability of high school seniors. This color-coded system is based on two assumptions: that color will appeal to the students and that there are three principal reasons for misspelling. Two groups were chosen for the experiments. A basic list of spelling demons was…
The Politics and Coverage of Terror: From Media Images to Public Consciousness.
ERIC Educational Resources Information Center
Wittebols, James H.
This paper presents a typology of terrorism which is grounded in how media differentially cover each type. The typology challenges some of the basic assumptions, such as that the media "allow" themselves to be exploited by terrorists and "encourage" terrorism, and the conventional wisdom about the net effects of the media's…
The Past as Prologue: Examining the Consequences of Business as Usual. Center Paper 01-93.
ERIC Educational Resources Information Center
Jones, Dennis P.; And Others
This study examined the ability of California to meet increased demand for postsecondary education without significantly altering the basic historical assumptions and policies that have governed relations between the state and its institutions of higher learning. Results of a series of analyses that estimated projected enrollments and costs under…
The Spouse and Familial Incest: An Adlerian Perspective.
ERIC Educational Resources Information Center
Quinn, Kathleen L.
A major component of Adlerian psychology concerns the belief in responsibility to self and others. In both incest perpetrator and spouse the basic underlying assumption of responsibility to self and others is often not present. Activities and behaviors occur in a social context and as such need to be regarded within a social context that may serve…
Effects of Problem Scope and Creativity Instructions on Idea Generation and Selection
ERIC Educational Resources Information Center
Rietzschel, Eric F.; Nijstad, Bernard A.; Stroebe, Wolfgang
2014-01-01
The basic assumption of brainstorming is that increased quantity of ideas results in increased generation as well as selection of creative ideas. Although previous research suggests that idea quantity correlates strongly with the number of good ideas generated, quantity has been found to be unrelated to the quality of selected ideas. This article…
ERIC Educational Resources Information Center
Cunha, George M.
This Records and Archives Management Programme (RAMP) study is intended to assist in the development of basic training programs and courses in document preservation and restoration, and to promote harmonization of such training both within the archival profession and within the broader information field. Based on the assumption that conservation…
The Role of the Social Studies in Public Education.
ERIC Educational Resources Information Center
Byrne, T. C.
This paper was prepared for a social studies curriculum conference in Alberta in June, 1967. It provides a point of view on curriculum building which could be useful in establishing a national service in this field. The basic assumption is that the social studies should in some measure change the behavior of the students (a sharp departure from…
Twisting of thin walled columns perfectly restrained at one end
NASA Technical Reports Server (NTRS)
Lazzarino, Lucio
1938-01-01
Proceeding from the basic assumptions of the Batho-Bredt theory on twisting failure of thin-walled columns, the discrepancies most frequently encountered are analyzed. A generalized approximate method is suggested for the determination of the disturbances in the stress condition of the column, induced by the constrained warping in one of the end sections.
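For reference, the Batho-Bredt results that such analyses start from relate the applied torque to a constant shear flow around the closed thin-walled section; in the usual notation, with A the area enclosed by the wall mid-line, t the wall thickness, and G the shear modulus (standard textbook forms, not taken from the report):

```latex
q = \frac{T}{2A}, \qquad
\tau = \frac{q}{t} = \frac{T}{2At}, \qquad
\frac{d\theta}{dz} = \frac{T}{4A^{2}G}\oint \frac{ds}{t}.
```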
Adolescent Literacy in Europe--An Urgent Call for Action
ERIC Educational Resources Information Center
Sulkunen, Sari
2013-01-01
This article focuses on the literacy of the adolescents who, in most European countries, are about to leave or have recently left basic education with the assumption that they have the command of functional literacy as required in and for further studies, citizenship, work life and a fulfilling life as individuals. First, the overall performance…
Is the European (Active) Citizenship Ideal Fostering Inclusion within the Union? A Critical Review
ERIC Educational Resources Information Center
Milana, Marcella
2008-01-01
This article reviews: (1) the establishment and functioning of EU citizenship: (2) the resulting perception of education for European active citizenship; and (3) the question of its adequacy for enhancing democratic values and practices within the Union. Key policy documents produced by the EU help to unfold the basic assumptions on which…
Improving Child Management Practices of Parents and Teachers. Maxi I Practicum. Final Report.
ERIC Educational Resources Information Center
Adreani, Arnold J.; McCaffrey, Robert
The practicum design reported in this document was based on one basic assumption, that the adult perceptions of children influence adult behavior toward children which in turn influences the child's behavior. Therefore, behavior changes by children could best be effected by changing the adult perception of, and behavior toward, the child.…
The Importance of Woody Twig Ends to Deer in the Southeast
Charles T. Cushwa; Robert L. Downing; Richard F. Harlow; David F. Urbston
1970-01-01
One of the basic assumptions underlying research on wildlife habitat in the five Atlantic states of the Southeast is that white-tailed deer (Odocoileus virginianus) rely heavily on the ends of woody twigs during the winter. Considerable research has been undertaken to determine methods for increasing and measuring the availability of woody twigs to...
Going off the Grid: Re-Examining Technology in the Basic Writing Classroom
ERIC Educational Resources Information Center
Clay-Buck, Holly; Tuberville, Brenda
2015-01-01
The notion that today's students are constantly exposed to information technology has become so pervasive that it seems the academic conversation assumes students are "tech savvy." The proliferation of apps and smart phones aimed at the traditional college-aged population feeds into this assumption, aided in no small part by a growing…
ERIC Educational Resources Information Center
Liskin-Gasparro, Judith E., Ed.
This collection of papers is divided into three parts. Part 1, "Changing Patterns: Curricular Implications," includes "Basic Assumptions Revisited: Today's French and Spanish Students at a Large Metropolitan University" (Gail Guntermann, Suzanne Hendrickson, and Carmen de Urioste) and "Le Francais et Mort, Vive le…
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Theory, modelling and calibration of passive samplers used in water monitoring: Chapter 7
Booij, K.; Vrana, B.; Huckins, James N.; Greenwood, Richard B.; Mills, Graham; Vrana, B.
2007-01-01
This chapter discusses contaminant uptake by a passive sampling device (PSD) that consists of a central sorption phase surrounded by a membrane. A variety of models has been used over the past few years to better understand the kinetics of contaminant transfer to passive samplers. These models are essential for understanding how the amounts of absorbed contaminants relate to ambient concentrations, as well as for the design and evaluation of calibration experiments. Models differ in the number of phases and simplifying assumptions that are taken into consideration, such as the existence of (pseudo-) steady-state conditions, the presence or absence of linear concentration gradients within the membrane phase, the way in which transport within the water boundary layer (WBL) is modeled, and whether or not the aqueous concentration is constant during the sampler exposure. The chapter introduces the basic concepts and models used in the literature on passive samplers for the special case of triolein-containing semipermeable membrane devices (SPMDs). These can easily be extended to samplers with more or with fewer sorption phases. It also discusses the transport of chemicals through the various phases constituting PSDs and the implications of these models for designing and evaluating calibration studies.
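The simplest of these models (a single sorption phase, constant water concentration, and pseudo-steady-state exchange) reduces to first-order uptake toward equilibrium. A small sketch with illustrative parameter values, not taken from the chapter:

```python
import numpy as np

def absorbed_amount(t_days, c_water, k_sw, v_sampler, k_e):
    """First-order uptake model: N(t) = K_sw * V_s * C_w * (1 - exp(-k_e * t)).
    k_sw: sampler-water partition coefficient, v_sampler: sampler volume (L),
    c_water: ambient water concentration (ng/L), k_e: exchange rate constant (1/day)."""
    return k_sw * v_sampler * c_water * (1.0 - np.exp(-k_e * np.asarray(t_days)))

t = np.array([1, 7, 14, 28, 56])
n = absorbed_amount(t, c_water=2.0, k_sw=5e4, v_sampler=0.005, k_e=0.02)
for day, amount in zip(t, n):
    print(f"day {day:2d}: {amount:8.1f} ng absorbed")
# Early in the exposure uptake is nearly linear (the kinetic regime used for
# time-weighted average concentrations); at long times it approaches equilibrium.
```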
The inverse niche model for food webs with parasites
Warren, Christopher P.; Pascual, Mercedes; Lafferty, Kevin D.; Kuris, Armand M.
2010-01-01
Although parasites represent an important component of ecosystems, few field and theoretical studies have addressed the structure of parasites in food webs. We evaluate the structure of parasitic links in an extensive salt marsh food web, with a new model distinguishing parasitic links from non-parasitic links among free-living species. The proposed model is an extension of the niche model for food web structure, motivated by the potential role of size (and related metabolic rates) in structuring food webs. The proposed extension captures several properties observed in the data, including patterns of clustering and nestedness, better than does a random model. By relaxing specific assumptions, we demonstrate that two essential elements of the proposed model are the similarity of a parasite's hosts and the increasing degree of parasite specialization, along a one-dimensional niche axis. Thus, inverting one of the basic rules of the original model, the one determining consumers' generality appears critical. Our results support the role of size as one of the organizing principles underlying niche space and food web topology. They also strengthen the evidence for the non-random structure of parasitic links in food webs and open the door to addressing questions concerning the consequences and origins of this structure.
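For context, a minimal sketch of the original niche model (Williams and Martinez) that the inverse model extends: each species receives a niche value, a feeding range drawn so that the expected connectance is matched, and a range centre, and it consumes every species whose niche value falls inside its range. Parameter values here are illustrative.

```python
import numpy as np

def niche_model(S, C, seed=0):
    """Original niche model of food-web structure.
    S: number of species, C: target connectance (links / S^2)."""
    rng = np.random.default_rng(seed)
    n = np.sort(rng.uniform(0, 1, S))          # niche values
    beta = 1.0 / (2.0 * C) - 1.0
    r = n * rng.beta(1.0, beta, S)             # feeding-range widths
    c = rng.uniform(r / 2, n)                  # feeding-range centres
    web = np.zeros((S, S), dtype=bool)         # web[i, j]: species i eats species j
    for i in range(S):
        web[i] = (n >= c[i] - r[i] / 2) & (n <= c[i] + r[i] / 2)
    return web

web = niche_model(S=30, C=0.12)
print("realised connectance:", web.sum() / 30**2)
```

The parasite extension described above effectively inverts the rule determining consumer generality, making parasites increasingly specialized along the niche axis.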
A reduced-order, single-bubble cavitation model with applications to therapeutic ultrasound
Kreider, Wayne; Crum, Lawrence A.; Bailey, Michael R.; Sapozhnikov, Oleg A.
2011-01-01
Cavitation often occurs in therapeutic applications of medical ultrasound such as shock-wave lithotripsy (SWL) and high-intensity focused ultrasound (HIFU). Because cavitation bubbles can affect an intended treatment, it is important to understand the dynamics of bubbles in this context. The relevant context includes very high acoustic pressures and frequencies as well as elevated temperatures. Relative to much of the prior research on cavitation and bubble dynamics, such conditions are unique. To address the relevant physics, a reduced-order model of a single, spherical bubble is proposed that incorporates phase change at the liquid-gas interface as well as heat and mass transport in both phases. Based on the energy lost during the inertial collapse and rebound of a millimeter-sized bubble, experimental observations were used to tune and test model predictions. In addition, benchmarks from the published literature were used to assess various aspects of model performance. Benchmark comparisons demonstrate that the model captures the basic physics of phase change and diffusive transport, while it is quantitatively sensitive to specific model assumptions and implementation details. Given its performance and numerical stability, the model can be used to explore bubble behaviors across a broad parameter space relevant to therapeutic ultrasound. PMID:22088026
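The reduced-order model described above adds phase change and heat and mass transport; as a baseline, the classical Rayleigh-Plesset equation for a spherical bubble (which such models extend) can be integrated directly. The sketch below uses generic water-like constants and a simple rarefaction pulse, not the conditions or parameters of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water-like constants (illustrative): density, viscosity, surface tension.
rho, mu, sigma = 998.0, 1.0e-3, 0.0725
p_inf, p_v = 101325.0, 2330.0            # ambient and vapour pressure (Pa)
R0 = 50e-6                               # initial bubble radius (m)
kappa = 1.4                              # polytropic exponent for the gas
p_g0 = p_inf - p_v + 2 * sigma / R0      # initial gas pressure (equilibrium)

def drive(t):
    """External forcing: a single 50 kPa rarefaction half-cycle at 20 kHz."""
    return -5.0e4 * np.sin(2 * np.pi * 20e3 * t) if t < 25e-6 else 0.0

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * kappa)
    pressure_term = (p_gas + p_v - p_inf - drive(t)
                     - 2 * sigma / R - 4 * mu * Rdot / R) / rho
    Rddot = (pressure_term - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 100e-6), [R0, 0.0],
                max_step=1e-8, rtol=1e-8)
print("maximum radius (um):", 1e6 * sol.y[0].max())
```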
Rubin, David C.; Berntsen, Dorthe; Johansen, Malene Klindt
2009-01-01
In the mnemonic model of PTSD, the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the DSM. The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder, including the following observations. The diagnosis needs objective information about the trauma and peritraumatic emotions, but uses retrospective memory reports that can have substantial biases. Negative events and emotions that do not satisfy the current diagnostic criteria for a trauma can be followed by symptoms that would otherwise qualify for PTSD. Predisposing factors that affect the current memory have large effects on symptoms. The inability-to-recall-an-important-aspect-of-the-trauma symptom does not correlate with other symptoms. Loss or enhancement of the trauma memory affects PTSD symptoms in predictable ways. Special mechanisms that apply only to traumatic memories are not needed, increasing parsimony and the knowledge that can be applied to understanding PTSD. PMID:18954211
Coexistence trend contingent to Mediterranean oaks with different leaf habits.
Di Paola, Arianna; Paquette, Alain; Trabucco, Antonio; Mereu, Simone; Valentini, Riccardo; Paparella, Francesco
2017-05-01
In a previous work we developed a mathematical model to explain the co-occurrence of evergreen and deciduous oak groups in the Mediterranean region, regarded as one of the distinctive features of Mediterranean biodiversity. The mathematical analysis showed that a stabilizing mechanism resulting from niche difference (i.e. different water use and water stress tolerance) between groups allows their coexistence at intermediate values of suitable soil water content. A simple formal derivation of the model expresses this hypothesis in a testable form linked uniquely to the actual evapotranspiration of the forest community. In the present work we ascertain whether this simplified conclusion possesses some degree of explanatory power by comparing available data on oak distributions and remotely sensed evapotranspiration (MODIS product) in a large-scale survey embracing the western Mediterranean area. Our findings confirmed the basic assumptions of the model at the large scale addressed, but also revealed asymmetric responses to water use and water stress tolerance between evergreen and deciduous oaks that should be taken into account to improve the understanding of species interactions and, ultimately, the modeling capacity to explain co-occurrence.
Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions
Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu
2014-01-01
Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapsed imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics. PMID:24769917
Ferrofluids: Modeling, numerical analysis, and scientific computation
NASA Astrophysics Data System (ADS)
Tomas, Ignacio
This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove (in addition to stability) convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed in this dissertation (the MNSE, the Rosensweig model, and the two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
Further analytical study of hybrid rocket combustion
NASA Technical Reports Server (NTRS)
Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.
1972-01-01
Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in a heterogeneous combustion. Finite rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature dependent Arrhenius type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general, two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation, using two implicit finite-difference schemes, the Lax-Wendroff scheme for the gas phase, and, the Crank-Nicolson scheme for the solid phase.
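As a minimal illustration of a Lax-Wendroff-type update of the kind mentioned for the gas phase, the sketch below solves 1D linear advection on a periodic grid rather than the study's two-dimensional reacting conservation equations; grid size and wave speed are illustrative assumptions.

```python
import numpy as np

# 1D linear advection u_t + a u_x = 0 on a periodic grid, Lax-Wendroff scheme.
nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial Gaussian pulse

for _ in range(int(0.5 / dt)):               # advect until t = 0.5
    up = np.roll(u, -1)                      # u_{i+1}
    um = np.roll(u, 1)                       # u_{i-1}
    u = (u - 0.5 * cfl * (up - um)
           + 0.5 * cfl**2 * (up - 2.0 * u + um))   # second-order update
print("pulse peak after advection:", u.max())
```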
On S.N. Bernstein’s derivation of Mendel’s Law and ‘rediscovery’ of the Hardy-Weinberg distribution
Stark, Alan; Seneta, Eugene
2012-01-01
Around 1923 the soon-to-be famous Soviet mathematician and probabilist Sergei N. Bernstein started to construct an axiomatic foundation of a theory of heredity. He began from the premise of stationarity (constancy of type proportions) from the first generation of offspring. This led him to derive the Mendelian coefficients of heredity. It appears that he had no direct influence on the subsequent development of population genetics. A basic assumption of Bernstein was that parents coupled randomly to produce offspring. This paper shows that a simple model of non-random mating, which nevertheless embodies a feature of the Hardy-Weinberg Law, can produce Mendelian coefficients of heredity while maintaining the population distribution. How W. Johannsen’s monograph influenced Bernstein is discussed. PMID:22888285
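Bernstein's stationarity premise can be checked numerically in a few lines: under random mating, genotype proportions reach the Hardy-Weinberg values p^2, 2pq, q^2 after a single generation and then remain constant. The starting frequencies below are arbitrary.

```python
def next_generation(aa, ab, bb):
    """Genotype frequencies of offspring under random mating."""
    p = aa + ab / 2          # frequency of allele A among parents
    q = 1 - p
    return p * p, 2 * p * q, q * q

gen = (0.5, 0.2, 0.3)        # arbitrary non-equilibrium parental frequencies
for i in range(3):
    gen = next_generation(*gen)
    print(f"generation {i + 1}: AA={gen[0]:.3f} Aa={gen[1]:.3f} aa={gen[2]:.3f}")
# The frequencies are already at Hardy-Weinberg equilibrium after one generation.
```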
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
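As a stripped-down illustration of the hierarchical ("semi-Bayes") idea, and not the specific models discussed in the commentary, conventional exposure-effect estimates can be shrunk toward a second-stage prior mean; the estimates, standard errors, and prior spread below are hypothetical.

```python
import numpy as np

# First-stage (conventional) log-odds-ratio estimates for correlated exposures,
# with their standard errors (hypothetical values).
beta = np.array([0.80, 0.10, 0.55, -0.20])
se = np.array([0.40, 0.15, 0.30, 0.25])

# Second-stage prior: exposure effects exchangeable around mu0 with spread tau.
mu0, tau = 0.0, 0.25

# Posterior (shrunken) estimates under a normal-normal hierarchical model.
w = (1 / se**2) / (1 / se**2 + 1 / tau**2)     # weight given to the data
beta_shrunk = w * beta + (1 - w) * mu0
se_shrunk = np.sqrt(1 / (1 / se**2 + 1 / tau**2))

for b, bs, s in zip(beta, beta_shrunk, se_shrunk):
    print(f"{b:+.2f} -> {bs:+.2f} (posterior sd {s:.2f})")
# Imprecise estimates are pulled strongly toward the prior mean; precise ones move little.
```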
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
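A minimal sketch of a two-piece normal detection function of the kind described above (a single apex with different spreads on either side); the apex location and scale values are illustrative assumptions only.

```python
import numpy as np

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection probability with its single apex at `apex`;
    detection falls off at different rates on either side of the apex."""
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < apex, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - apex) / sigma) ** 2)

distances = np.array([0, 50, 100, 200, 400, 800])   # metres from the transect line
print(two_piece_normal(distances, apex=100.0, sigma_left=60.0, sigma_right=250.0))
# Detection peaks away from the line (typical of aerial surveys), and the apex
# is the single distance at which the two observers are assumed independent.
```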
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
Experimental Basis for IED Particle Model
NASA Astrophysics Data System (ADS)
Zheng-Johansson, J.
2009-05-01
The internally electrodynamic (IED) particle model is built on three experimental facts: a) electric charges are present in all matter particles, b) an accelerated charge generates electromagnetic (EM) waves by Maxwell's equations and the Planck energy equation, and c) source motion gives the Doppler effect. A set of well-known basic particle equations has been predicted based on first-principles solutions for the IED particle (e.g. arxiv:0812.3951, J Phys CS128, 012019, 2008); these equations have long been experimentally validated. A critical review of the key experiments suggests that the IED process underlies these equations not just sufficiently but also necessarily. E.g.: 1) A free IED electron solution is a plane wave ψ = C e^{i(k_d X − φT)}, requisite for producing the diffraction fringe in a Davisson-Germer experiment, and also for all basic point-like attributes, facilitated by a linear momentum k_d and the model structure. It need not further be a wave packet, which would not produce a diffraction fringe. 2) The radial partial EM waves, and hence the total ψ, of an IED electron will, on the basis of both EM theory and experiment - not by assumption - enter two slits at the same time, as is requisite for an electron to interfere with itself as shown in double-slit experiments. 3) On annihilation, an electron converts (from mass m) to a radiation energy φ without an acceleration which is externally observable and yet is requisite by EM theory. So a charge oscillation of frequency φ and its EM waves must regularly be present internal to a normal electron, whence the IED model.
Classical nucleation theory in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas
2018-04-01
A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation takes place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
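The CNT predictions referred to above are the standard expressions for the critical work of forming a nucleus and the steady-state nucleation rate, with γ the solid-liquid interfacial free energy and Δg the bulk free-energy gain per unit volume (textbook forms, not the extended theory argued for in the abstract):

```latex
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,(\Delta g)^{2}}, \qquad
J = J_{0}\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right).
```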
Zhao, Yue; Liu, Zhiyong; Liu, Chenfeng; Hu, Zhipeng
2017-01-01
Microalgae are considered to be a potential major biomass feedstock for biofuel due to their high lipid content. However, no simple correlation equations for lipid accumulation as a function of initial nitrogen concentration have been developed to predict lipid production and optimize the lipid production process. In this study, a lipid accumulation model was developed with simple parameters based on the assumption that protein synthesis shifts to lipid synthesis according to a linear function of the nitrogen quota. The model predictions fitted well the growth, lipid content, and nitrogen consumption of Coelastrum sp. HA-1 under various initial nitrogen concentrations. The model was then applied successfully to Chlorella sorokiniana to predict the lipid content under different light intensities. The quantitative relationship between initial nitrogen concentration and final lipid content, together with a sensitivity analysis of the model, is also discussed. Based on the model results, the conversion efficiency from protein synthesis to lipid synthesis increases steadily in the microalgal metabolic process as nitrogen decreases; however, the carbohydrate content remains basically unchanged in both HA-1 and C. sorokiniana. PMID:28194424
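A loose illustration of the kind of quota-based bookkeeping involved (a generic Droop-style growth model with a lipid fraction that rises linearly as the internal nitrogen quota is drawn down); all parameter values, bounds, and the specific functional forms below are assumptions for illustration, not the model fitted in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Biomass X (g/L), internal nitrogen quota Q (mgN/g), external nitrogen N (mg/L).
mu_max = 1.2                    # maximum specific growth rate (1/day)
Q_min, Q_max = 20.0, 100.0      # subsistence and maximum nitrogen quota (mgN/g)
v_max, K_n = 80.0, 5.0          # uptake rate (mgN/(g*day)) and half-saturation (mg/L)
L_min, L_max = 0.15, 0.45       # assumed lipid-fraction bounds

def rhs(t, y):
    X, Q, N = y
    mu = mu_max * (1.0 - Q_min / max(Q, Q_min))           # Droop growth rate
    uptake = v_max * max(N, 0.0) / (K_n + max(N, 0.0))    # Michaelis-Menten uptake
    return [mu * X, uptake - mu * Q, -uptake * X]

sol = solve_ivp(rhs, (0.0, 12.0), [0.1, Q_max, 40.0], max_step=0.01)
Q_end = sol.y[1, -1]
# Lipid fraction rises linearly as the nitrogen quota is depleted.
lipid = L_min + (L_max - L_min) * (Q_max - Q_end) / (Q_max - Q_min)
print(f"final biomass {sol.y[0, -1]:.2f} g/L, lipid fraction {lipid:.2f}")
```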
Multi-Agent Market Modeling of Foreign Exchange Rates
NASA Astrophysics Data System (ADS)
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of the decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulative excess demand or supply created at this micro level. The behavioral analysis of a small number of agents is well understood through game theory. In the case of a large number of agents, one may use the limiting assumption that an individual agent has no influence on the market, which allows agents to be aggregated by statistical methods. In contrast to this restriction, we can drop the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and can hence be interpreted as the price-formation mechanism of a market.
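A minimal sketch of that idea follows; it is our illustration, not the authors' network architecture. Each agent's buy/sell decision is a single neuron-like squashing of recent returns, and the price moves with the aggregated excess demand. All weights, scales, and the price-impact rule are assumptions chosen only to make the mechanism concrete.

```python
# Toy multi-agent FX market: each agent is one "neuron" whose decision is a
# squashed weighted sum of recent log-returns; the price moves with the net
# excess demand. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, lag = 200, 500, 5
W = rng.normal(0.0, 1.0, size=(n_agents, lag))   # each agent's input weights
b = rng.normal(0.0, 0.1, size=n_agents)          # agent-specific bias
eta = 0.01                                       # price-impact coefficient

price = np.ones(n_steps)
returns = np.zeros(lag)                          # most recent log-returns
for t in range(1, n_steps):
    decisions = np.tanh(W @ returns + b)         # in [-1, 1]: buy (+) or sell (-)
    excess_demand = decisions.mean()             # aggregation over all agents
    price[t] = price[t - 1] * np.exp(eta * excess_demand + 0.001 * rng.normal())
    returns = np.roll(returns, 1)
    returns[0] = np.log(price[t] / price[t - 1])

print("final price:", round(price[-1], 4))
```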
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind
2016-05-01
Computational models have become increasingly used for modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the assumptions about the disease process, and the choice of time advance.
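The following is a minimal illustration, with hypothetical parameters, of the two model classes the study contrasts: a continuous-time, deterministic equation-based SIR model integrated with an ODE solver versus a discrete-time, stochastic agent-based SIR model. Even with identical nominal transmission and recovery rates, the two need not produce the same epidemic curve, which is the kind of divergence the study quantifies.

```python
# Equation-based (ODE) SIR vs. a simple discrete-time agent-based SIR.
# beta, gamma, population size, and time step are illustrative assumptions,
# not parameters taken from the study.
import numpy as np
from scipy.integrate import solve_ivp

N, beta, gamma, days = 1000, 0.3, 0.1, 160

# --- Equation-based model: continuous time, deterministic ---
def sir(t, y):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

ebm = solve_ivp(sir, (0, days), [N - 1, 1, 0], t_eval=np.arange(days))
print("EBM peak infected:", int(ebm.y[1].max()))

# --- Agent-based model: discrete daily time steps, stochastic individuals ---
rng = np.random.default_rng(1)
state = np.zeros(N, dtype=int)   # 0 = susceptible, 1 = infected, 2 = recovered
state[0] = 1
peak = 0
for _ in range(days):
    infected = state == 1
    p_infect = 1 - np.exp(-beta * infected.sum() / N)   # per-susceptible daily risk
    new_inf = (state == 0) & (rng.random(N) < p_infect)
    new_rec = infected & (rng.random(N) < gamma)        # transitions from one snapshot
    state[new_inf] = 1
    state[new_rec] = 2
    peak = max(peak, int((state == 1).sum()))
print("ABM peak infected:", peak)
```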
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of past, present, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate-data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study regions. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.
Status of the Space Station environmental control and life support system design concept
NASA Technical Reports Server (NTRS)
Ray, C. D.; Humphries, W. R.
1986-01-01
The current status of the Space Station (SS) environmental control and life support system (ECLSS) design is outlined. The concept has been defined at the subsystem level, and data supporting these definitions are provided which identify general configurations for all modules. The requirements, guidelines, and assumptions used in generating these configurations are detailed. The basic two-US-module 'core' Space Station is addressed, along with system synergism issues and early man-tended and future growth considerations. In addition to these basic studies, options related to variations in the 'core' module makeup and more austere Station concepts are addressed, as are issues such as commonality, automation, and design-to-cost.
NASA Technical Reports Server (NTRS)
Hoover, D. Q.
1976-01-01
Electric power plant costs and efficiencies are presented for three basic open-cycle MHD systems: (1) a direct coal-fired system, (2) a system with a separately fired air heater, and (3) a system burning low-Btu gas from an integrated gasifier. Power plant designs corresponding to the basic cases, with variations of the major parameters, were developed, and the major system components were sized and costed. Flow diagrams describing each design are presented, and the limitations of each design are discussed within the framework of the assumptions made.
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach that provides "instant experience" in looking at a graphical model validation plot, so it becomes easier to assess whether any of the underlying assumptions are violated.
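A small sketch of the general idea follows; it is our construction under assumed data, not the article's specific procedure. The observed residual-vs-fitted plot is hidden among plots simulated from the fitted model, so the reader gains "instant experience" of what the diagnostic looks like when the assumptions actually hold.

```python
# Lineup-style diagnostic: compare the observed residual plot with residual
# plots simulated under the fitted linear-normal model (leverage ignored for
# simplicity). Data, sample size, and layout are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 80)
y = 1.5 + 0.8 * x + 0.2 * x**2 + rng.normal(0, 1, 80)  # mild curvature violates linearity

# Fit a straight-line model; compute fitted values, residuals, and sigma-hat
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted
sigma = np.sqrt(resid @ resid / (len(y) - 2))

fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
slot = rng.integers(6)                       # hide the real plot in a random panel
for i, ax in enumerate(axes.ravel()):
    if i == slot:
        ax.scatter(fitted, resid, s=10)      # observed diagnostics
    else:
        sim = rng.normal(0, sigma, len(y))   # residuals if the model were true
        ax.scatter(fitted, sim, s=10)
    ax.axhline(0, lw=0.8)
fig.suptitle("Which panel is the real residual plot?")
plt.show()
print("observed data shown in panel", slot + 1)
```

If the real panel stands out from the simulated ones, the reader has graphical evidence that some assumption of the linear normal model is violated; if it blends in, the plot gives no such evidence.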