Sample records for underlying theoretical assumptions

  1. Testing electrostatic equilibrium in the ionosphere by detailed comparison of ground magnetic deflection and incoherent scatter radar.

    NASA Astrophysics Data System (ADS)

    Cosgrove, R. B.; Schultz, A.; Imamura, N.

    2016-12-01

    Although electrostatic equilibrium is always assumed in the ionosphere, there is no good theoretical or experimental justification for the assumption. In fact, recent theoretical investigations suggest that the electrostatic assumption may be grossly in error. If true, many commonly used modeling methods are placed in doubt. For example, the accepted method for calculating ionospheric conductance (field line integration) may be invalid. In this talk we briefly outline the theoretical research that places the electrostatic assumption in doubt, and then describe how comparison of ground magnetic field data with incoherent scatter radar (ISR) data can be used to test the electrostatic assumption in the ionosphere. We describe a recent experiment conducted for this purpose, in which an array of magnetometers was temporarily installed under the Poker Flat AMISR.

  2. Change in Soil Porosity under Load

    NASA Astrophysics Data System (ADS)

    Dyba, V. P.; Skibin, E. G.

    2017-11-01

    The theoretical basis for the process of soil compaction under various loading paths is considered in the article, and the theoretical assumptions are compared with the results of tests of clay soil on a stabilometer. A variant of the critical state model of a compacting plastic-rigid medium is also considered, in which the strength characteristics depend on the porosity coefficient. The loading surface is determined from the results of compression and stabilometric tests. To clarify the results of this task, stabilometric tests must be carried out under conditions of simple loading, i.e. where the vertical pressure is proportional to the compression pressure, σ3 = kσ1. Within the study, attempts were made to confirm the model given at the beginning of the article by laboratory tests. After analysis of the results, the stated theoretical assumptions were confirmed.

  3. A utility-theoretic model for QALYs and willingness to pay.

    PubMed

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALY) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY-model, an additive separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY-weights depend solely on aspects of health-related quality of life, wealth-standardized QALY-weights might vary with the wealth level in the presented extension of the original QALY-model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY-gain as well as on the disease burden, when health influences the marginal utility of wealth. Further research seems indicated to examine this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.

  4. How Mean is the Mean?

    PubMed Central

    Speelman, Craig P.; McGann, Marek

    2013-01-01

    In this paper we voice concerns about the uncritical manner in which the mean is often used as a summary statistic in psychological research. We identify a number of implicit assumptions underlying the use of the mean and argue that the fragility of these assumptions should be more carefully considered. We examine some of the ways in which the potential violation of these assumptions can lead us into significant theoretical and methodological error. Illustrations of alternative models of research already extant within Psychology are used to explore methods that are less mean-dependent, and we suggest that a critical assessment of the assumptions underlying use of the mean should play a more explicit role in the processes of study design and review. PMID:23888147

  5. Language Performance Assessment: Current Trends in Theory and Research

    ERIC Educational Resources Information Center

    El-Koumy, Abdel-Salam Abdel-Khalek

    2004-01-01

    The purpose of this paper is to review the theoretical and empirical literature relevant to language performance assessment. Following a definition of performance assessment, this paper considers: (1) theoretical assumptions underlying performance assessment; (2) purposes of performance assessment; (3) performance assessment procedures; (4) merits…

  6. Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1997-01-01

    Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…

  7. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus, the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
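
    For reference, the standard closed form for the differential entropy of a univariate generalized Gaussian (a textbook result quoted here for orientation; the paper's multivariate expressions are not reproduced) is, with location μ, scale α and shape β:

```latex
% Univariate generalized Gaussian density and its differential entropy (standard forms).
f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
       \exp\!\left[-\left(\frac{|x-\mu|}{\alpha}\right)^{\beta}\right],
\qquad
h(X) = \frac{1}{\beta} - \ln\frac{\beta}{2\alpha\,\Gamma(1/\beta)} .
```

    Setting β = 2 recovers the Gaussian case and β = 1 the Laplacian, matching the range of speech-shaped distributions considered in the abstract.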

  8. Idiographic versus Nomothetic Approaches to Research in Organizations.

    DTIC Science & Technology

    1981-07-01

    alternative methodologic assumption based on intensive examination of one or a few cases under the theoretic assumption of dynamic interactionism is, with...phenomenological studies the researcher may not enter the actual setting but instead examines symbolic meanings as they constitute themselves in...B. Interactionism in personality from a historical perspective. Psychological Bulletin, 1974, 81, 1026-1048. Elashoff, J.D.; & Thoresen, C.E.

  9. Symbolic interactionism as a theoretical perspective for multiple method research.

    PubMed

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  10. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering object to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations how to improve the accuracy of these calculations in the future.
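
    As a rough, hedged illustration of the kind of calculation referred to above (a simplified textbook-style model, not NASA's actual tool), the expected number of casualties is often approximated by summing each surviving fragment's casualty area against the population density under the footprint, with a Poisson assumption giving the probability of one or more casualties:

```python
import math

def casualty_risk(casualty_areas_m2, pop_density_per_km2):
    """Illustrative expected-casualty estimate (assumed simple model).

    casualty_areas_m2   -- casualty areas (m^2) of fragments predicted to survive reentry
    pop_density_per_km2 -- mean population density under the debris footprint (people/km^2)
    """
    rho = pop_density_per_km2 / 1e6                        # people per m^2
    expected_casualties = sum(a * rho for a in casualty_areas_m2)
    p_one_or_more = 1.0 - math.exp(-expected_casualties)   # Poisson assumption
    return expected_casualties, p_one_or_more

# Example: five surviving fragments of 8 m^2 each over a 30 people/km^2 region
print(casualty_risk([8.0] * 5, 30.0))
```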

  11. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
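
    For readers unfamiliar with the MacCallum, Browne, and Sugawara (1996) procedure referred to above, theoretical power is computed from two noncentral chi-square distributions whose noncentrality parameters are fixed by the null and alternative RMSEA values; a minimal sketch (the sample size, degrees of freedom and RMSEA values below are illustrative only):

```python
from scipy.stats import ncx2

def rmsea_power(df, n, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Theoretical power for the RMSEA test of close fit, MacCallum et al. (1996) style.

    Noncentrality parameters follow lambda = (n - 1) * df * rmsea**2.
    """
    lam0 = (n - 1) * df * rmsea0 ** 2      # null hypothesis: close fit (RMSEA = .05)
    lam_a = (n - 1) * df * rmsea_a ** 2    # alternative RMSEA
    crit = ncx2.ppf(1 - alpha, df, lam0)   # critical value under the null
    return 1 - ncx2.cdf(crit, df, lam_a)   # power under the alternative

# Example: df = 22, N = 200
print(rmsea_power(df=22, n=200))
```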

  12. Motion and Stability of Saturated Soil Systems under Dynamic Loading.

    DTIC Science & Technology

    1985-04-04

    ...7.3 Experimental Verification of Theories ... 8. ADDITIONAL COMMENTS AND OTHER WORK, AT THE OHIO...theoretical/computational models. The continuing research effort will extend and refine the theoretical models, allow for compressibility of soil as...motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been

  13. Is Tissue the Issue? A Critique of SOMPA's Models and Tests.

    ERIC Educational Resources Information Center

    Goodman, Joan F.

    1979-01-01

    A critical view of the underlying theoretical rationale of the System of Multicultural Pluralistic Assessment (SOMPA) model for student assessment is presented. The critique is extensive and questions the basic assumptions of the model. (JKS)

  14. On the existence and stability of liquid water on the surface of Mars today.

    PubMed

    Kuznetz, L H; Gan, D C

    2002-01-01

    The recent discovery of high concentrations of hydrogen just below the surface of Mars' polar regions by Mars Odyssey has enlivened the debate about past or present life on Mars. The prevailing assumption prior to the discovery was that the liquid water essential for its existence is absent. That assumption was based largely on the calculation of heat and mass transfer coefficients or theoretical climate models. This research uses an experimental approach to determine the feasibility of liquid water under martian conditions, setting the stage for a more empirical approach to the question of life on Mars. Experiments were conducted in three parts: Liquid water's existence was confirmed by droplets observed under martian conditions in part 1; the evolution of frost melting on the surface of various rocks under martian conditions was observed in part 2; and the evaporation rate of water in Petri dishes under Mars-like conditions was determined and compared with the theoretical predictions of various investigators in part 3. The results led to the conclusion that liquid water can be stable for extended periods of time on the martian surface under present-day conditions.

  15. On the existence and stability of liquid water on the surface of Mars today

    NASA Technical Reports Server (NTRS)

    Kuznetz, L. H.; Gan, D. C.

    2002-01-01

    The recent discovery of high concentrations of hydrogen just below the surface of Mars' polar regions by Mars Odyssey has enlivened the debate about past or present life on Mars. The prevailing assumption prior to the discovery was that the liquid water essential for its existence is absent. That assumption was based largely on the calculation of heat and mass transfer coefficients or theoretical climate models. This research uses an experimental approach to determine the feasibility of liquid water under martian conditions, setting the stage for a more empirical approach to the question of life on Mars. Experiments were conducted in three parts: Liquid water's existence was confirmed by droplets observed under martian conditions in part 1; the evolution of frost melting on the surface of various rocks under martian conditions was observed in part 2; and the evaporation rate of water in Petri dishes under Mars-like conditions was determined and compared with the theoretical predictions of various investigators in part 3. The results led to the conclusion that liquid water can be stable for extended periods of time on the martian surface under present-day conditions.

  16. Hepatitis C bio-behavioural surveys in people who inject drugs-a systematic review of sensitivity to the theoretical assumptions of respondent driven sampling.

    PubMed

    Buchanan, Ryan; Khakoo, Salim I; Coad, Jonathan; Grellier, Leonie; Parkes, Julie

    2017-07-11

    New, more effective and better-tolerated therapies for hepatitis C (HCV) have made the elimination of HCV a feasible objective. However, for this to be achieved, it is necessary to have a detailed understanding of HCV epidemiology in people who inject drugs (PWID). Respondent-driven sampling (RDS) can provide prevalence estimates in hidden populations such as PWID. The aims of this systematic review are to identify published studies that use RDS in PWID to measure the prevalence of HCV, and compare each study against the STROBE-RDS checklist to assess their sensitivity to the theoretical assumptions underlying RDS. Searches were undertaken in accordance with PRISMA systematic review guidelines. Included studies were English language publications in peer-reviewed journals, which reported the use of RDS to recruit PWID to an HCV bio-behavioural survey. Data was extracted under three headings: (1) survey overview, (2) survey outcomes, and (3) reporting against selected STROBE-RDS criteria. Thirty-one studies met the inclusion criteria. They varied in scale (range 1-15 survey sites) and the sample sizes achieved (range 81-1000 per survey site) but were consistent in describing the use of standard RDS methods including: seeds, coupons and recruitment incentives. Twenty-seven studies (87%) either calculated or reported the intention to calculate population prevalence estimates for HCV and two used RDS data to calculate the total population size of PWID. Detailed operational and analytical procedures and reporting against selected criteria from the STROBE-RDS checklist varied between studies. There were widespread indications that sampling did not meet the assumptions underlying RDS, which led to two studies being unable to report an estimated HCV population prevalence in at least one survey location. RDS can be used to estimate a population prevalence of HCV in PWID and estimate the PWID population size. Accordingly, as a single instrument, it is a useful tool for guiding HCV elimination. However, future studies should report the operational conduct of each survey in accordance with the STROBE-RDS checklist to indicate sensitivity to the theoretical assumptions underlying the method. PROSPERO CRD42015019245.

  17. Achievement Goal Orientations and Identity Formation Styles

    ERIC Educational Resources Information Center

    Kaplan, Avi; Flum, Hanoch

    2010-01-01

    The present article points to shared underlying theoretical assumptions and central processes of a prominent academic motivation perspective--achievement goal theory--and recent process perspectives in the identity formation literature, and more specifically, identity formation styles. The review highlights the shared definition of achievement…

  18. Self, College Experiences, and Society: Rethinking the Theoretical Foundations of Student Development Theory

    ERIC Educational Resources Information Center

    Winkle-Wagner, Rachelle

    2012-01-01

    This article examines the psychological theoretical foundations of college student development theory and the theoretical assumptions of this framework. A complimentary, sociological perspective and the theoretical assumptions of this approach are offered. The potential limitations of the overuse of each perspective are considered. The conclusion…

  19. Individual behavioral phenotypes: an integrative meta-theoretical framework. Why "behavioral syndromes" are not analogs of "personality".

    PubMed

    Uher, Jana

    2011-09-01

    Animal researchers are increasingly interested in individual differences in behavior. Their interpretation as meaningful differences in behavioral strategies stable over time and across contexts, adaptive, heritable, and acted upon by natural selection has triggered new theoretical developments. However, the analytical approaches used to explore behavioral data still address population-level phenomena, and statistical methods suitable to analyze individual behavior are rarely applied. I discuss fundamental investigative principles and analytical approaches to explore whether, in what ways, and under which conditions individual behavioral differences are actually meaningful. I elaborate the meta-theoretical ideas underlying common theoretical concepts and integrate them into an overarching meta-theoretical and methodological framework. This unravels commonalities and differences, and shows that assumptions of analogy to concepts of human personality are not always warranted and that some theoretical developments may be based on methodological artifacts. Yet, my results also highlight possible directions for new theoretical developments in animal behavior research. Copyright © 2011 Wiley Periodicals, Inc.

  20. "Owning" Knowledge: Looking beyond Politics to Find the Public Good

    ERIC Educational Resources Information Center

    Bernstein-Sierra, Samantha

    2017-01-01

    This chapter explores the theoretical assumptions underlying both the IP system and its counternarrative, academic openness, to encourage stakeholders to look beyond extremes as depicted in political rhetoric, and find a compromise consistent with the common mission of faculty, universities, and publishers.

  1. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances; the two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
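
    To make the procedure under discussion concrete, here is a minimal sketch of double filtering on a log2 expression matrix (thresholds and variable names are illustrative and not taken from the paper):

```python
import numpy as np
from scipy.stats import ttest_ind

def double_filter(log2_group1, log2_group2, fc_cutoff=1.0, p_cutoff=0.05):
    """Flag genes that pass BOTH a fold-change filter and a per-gene t test.

    log2_group1, log2_group2 -- arrays of shape (genes, samples) on the log2 scale.
    Note the tension discussed in the paper: the fold-change filter implicitly treats
    all genes as equally variable, while the t test uses gene-specific variances.
    """
    log2_fc = log2_group1.mean(axis=1) - log2_group2.mean(axis=1)
    _, p_values = ttest_ind(log2_group1, log2_group2, axis=1)
    return (np.abs(log2_fc) >= fc_cutoff) & (p_values <= p_cutoff)

# Example with simulated null data: 1000 genes, 5 samples per group
rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=(1000, 5)), rng.normal(size=(1000, 5))
print(double_filter(g1, g2).sum(), "genes flagged")
```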

  2. The Nature of the University

    ERIC Educational Resources Information Center

    Lenartowicz, Marta

    2015-01-01

    Higher education research frequently refers to the complex external conditions that give our old-fashioned universities a good reason to change. The underlying theoretical assumption of such framing is that organizations are open systems. This paper presents an alternative view, derived from the theory of social systems autopoiesis. It proposes…

  3. Calculation of Temperature Rise in Calorimetry.

    ERIC Educational Resources Information Center

    Canagaratna, Sebastian G.; Witt, Jerry

    1988-01-01

    Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)

  4. The Theoretical Distribution of Evoked Brainstem Activity in Preterm, High-Risk, and Healthy Infants.

    ERIC Educational Resources Information Center

    Salamy, A.

    1981-01-01

    Determines the frequency distribution of Brainstem Auditory Evoked Potential variables (BAEP) for premature babies at different stages of development--normal newborns, infants, young children, and adults. The author concludes that the assumption of normality underlying most "standard" statistical analyses can be met for many BAEP…

  5. The Metrical Foot in Diyari.

    ERIC Educational Resources Information Center

    Poser, William

    1989-01-01

    Considers the metrical foot in Diyari, a South Australian language, and concludes that, on the basis of stress alone, an argument can be made for the constituency of the metrical stress foot under certain theoretical assumptions. This conclusion is reinforced by the occurrence in Diyari of other, less theory-dependent phenomena. (46 references) (JL)

  6. A Theoretical Examination of Psychosocial Issues for Asian Pacific American Students.

    ERIC Educational Resources Information Center

    Kodama, Corinne Maekawa; McEwen, Marylu K.; Liang, Christopher T. H.; Lee, Sunny

    2001-01-01

    Examines psychosocial issues for Asian Pacific American (APA) students, one of the fastest growing but most understudied college populations. Finds that general groupings of developmental issues align somewhat with traditional psychosocial theory, although the underlying assumptions and specific developmental tasks do not fit the experience of…

  7. Should Debbie Do Shale? A Playful Polemic in Honor of Paul Feyerabend.

    ERIC Educational Resources Information Center

    Steedman, P. H.

    1982-01-01

    Examines the epistemological assumptions underlying the teaching of high school science. The author recommends a science course at the high school level in which science is presented, within a historical context, as an essentially theoretical activity which reflects a culture's political, religious, philosophical, aesthetic, and ideological…

  8. Generating Synergy between Conceptual Change and Knowledge Building

    ERIC Educational Resources Information Center

    Lee, Chwee Beng

    2010-01-01

    This paper is an initial effort to review the reciprocity between the theoretical traditions of "conceptual change" and "knowledge building" by discussing the underlying epistemological assumptions, objectives, conceptions of concepts and ideas, and mechanisms that bring forth the respective goals of these two traditions. The basis for generating…

  9. A Competency Approach to Developing Leaders--Is This Approach Effective?

    ERIC Educational Resources Information Center

    Richards, Patricia

    2008-01-01

    This paper examines the underlying assumptions that competency-based frameworks are based upon in relation to leadership development. It examines the impetus for this framework becoming the prevailing theoretical base for developing leaders and tracks the historical path to this phenomenon. Research suggests that a competency-based framework may…

  10. Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology

    ERIC Educational Resources Information Center

    McDonald, Jason K.; Yanchar, Stephen C.; Osguthorpe, Russell T.

    2005-01-01

    This article reports a theoretical examination of several parallels between contemporary instructional technology (as manifest in one of its most current manifestations, online learning) and one of its direct predecessors, programmed instruction. We place particular focus on the underlying assumptions of the two movements. Our analysis suggests…

  11. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    PubMed

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. On the Worthwhileness of Theoretical Activities

    ERIC Educational Resources Information Center

    Hand, Michael

    2009-01-01

    R.S. Peters' arguments for the worthwhileness of theoretical activities are intended to justify education per se, on the assumption that education is necessarily a matter of initiating people into theoretical activities. If we give up this assumption, we can ask whether Peters' arguments might serve instead to justify the academic curriculum over…

  13. Bayesian Learning and the Psychology of Rule Induction

    ERIC Educational Resources Information Center

    Endress, Ansgar D.

    2013-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to…

  14. "Wrighting" the Self: New Technologies and Textual Subjectivities

    ERIC Educational Resources Information Center

    Sakr, Mona

    2012-01-01

    The expression of the self through multimodal texts is a central theme in education. While it has been suggested that new technologies act as important mediators in the relationship between texts and subjectivity, the mechanisms underlying such mediation have been a neglected topic of research. This paper considers the theoretical assumptions upon…

  15. The Unfinished Stories of Two First Nations Mothers

    ERIC Educational Resources Information Center

    Moayeri, Maryam; Smith, Jane

    2010-01-01

    This study is shaped by an underlying theoretical assumption that literacy is a cultural practice, shaped by and shaping social factors such as culture, gender, politics, and economics. As a result, this article focuses on the literacy practices of two mothers who participated in the study. Because of their Aboriginal ancestry and the historical…

  16. Theoretical Perspectives on the Internationalization of Firms

    ERIC Educational Resources Information Center

    Rask, Morten; Strandskov, Jesper; Hakonsson, Dorthe Dojbak

    2008-01-01

    The purpose of this article is to build a coherent framework of the four main theories relating to the internationalization of firms, in order to facilitate better business teaching and research. Yet, theories of the internationalization of firms are broad and rest on different underlying assumptions. With the purpose of clarifying the potential…

  17. Communities of Inquiry: Politics, Power and Group Dynamics

    ERIC Educational Resources Information Center

    Burgh, Gilbert; Yorshansky, Mor

    2011-01-01

    The notion of a community of inquiry has been treated by many of its proponents as being an exemplar of democracy in action. We argue that the assumptions underlying this view present some practical and theoretical difficulties, particularly in relation to distribution of power among the members of a community of inquiry. We identify two…

  18. Delinquency, Social Skills and the Structure of Peer Relations: Assessing Criminological Theories by Social Network Theory

    ERIC Educational Resources Information Center

    Smangs, Mattias

    2010-01-01

    This article explores the plausibility of the conflicting theoretical assumptions underlying the main criminological perspectives on juvenile delinquents, their peer relations and social skills: the social ability model, represented by Sutherland's theory of differential associations, and the social disability model, represented by Hirschi's…

  19. The Role of Somatosensory Information in Speech Perception: Imitation Improves Recognition of Disordered Speech

    ERIC Educational Resources Information Center

    Borrie, Stephanie A.; Schäfer, Martina C. M.

    2015-01-01

    Purpose: Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This…

  20. The Non-Theoretical View on Educational Theory: Scientific, Epistemological and Methodological Assumptions

    ERIC Educational Resources Information Center

    Penalva, José

    2014-01-01

    This article examines the underlying problems of one particular perspective in educational theory that has recently gained momentum: the Wilfred Carr approach, which puts forward the premise that there is no theory in educational research and, consequently, it is a form of practice. The article highlights the scientific, epistemological and…

  1. A theoretical approach to measuring pilot workload

    NASA Technical Reports Server (NTRS)

    Kantowitz, B. H.

    1984-01-01

    Theoretical assumptions used by researchers in the area of attention were studied, with emphasis upon errors and inconsistent assumptions made by some researchers. Two GAT experiments, two laboratory studies and one field experiment were conducted.

  2. Family learning research in museums: An emerging disciplinary matrix?

    NASA Astrophysics Data System (ADS)

    Ellenbogen, Kirsten M.; Luke, Jessica J.; Dierking, Lynn D.

    2004-07-01

    Thomas Kuhn's notion of a disciplinary matrix provides a useful framework for investigating the growth of research on family learning in and from museums over the last decade. To track the emergence of this disciplinary matrix we consider three issues. First are shifting theoretical perspectives that result in new shared language, beliefs, values, understandings, and assumptions about what counts as family learning. Second are realigning methodologies, driven by underlying disciplinary assumptions about how research in this arena is best conducted, what questions should be addressed, and criteria for valid and reliable evidence. Third is resituating the focus of our research to make the family central to what we study, reflecting a more holistic understanding of the family as an educational institution within larger learning infrastructure. We discuss research that exemplifies these three issues and demonstrates the ways in which shifting theoretical perspectives, realigning methodologies, and resituating research foci signal the existence of a nascent disciplinary matrix.

  3. Is Retrieval-Induced Forgetting behind the Bilingual Disadvantage in Word Production?

    ERIC Educational Resources Information Center

    Runnqvist, Elin; Costa, Albert

    2012-01-01

    Levy, McVeigh, Marful and Anderson (2007) found that naming pictures in L2 impaired subsequent recall of the L1 translation words. This was interpreted as evidence for a domain-general inhibitory mechanism (RIF) underlying first language attrition. Because this result is at odds with some previous findings and theoretical assumptions, we wanted…

  4. The Demand for Higher Education in Michigan: Projections to the Year 2000.

    ERIC Educational Resources Information Center

    Moor, James R., Jr.; And Others

    Using data from the 1960-1977 period, this study provides a range of headcount enrollment projections for the Michigan higher education system to the year 2000 by type of institution and by age and sex of student under alternative sets of projection assumptions. The theoretical framework, methodology, and working model developed in this study are…

  5. On the Basis of the Basic Variety.

    ERIC Educational Resources Information Center

    Schwartz, Bonnie D.

    1997-01-01

    Considers the interplay between source and target language in relation to two points made by Klein and Perdue: (1) the argument that the analysis of the target language should not be used as the model for analyzing interlanguage data; and (2) the theoretical claim that under the technical assumptions of minimalism, the Basic Variety is a "perfect"…

  6. Competency Is Not Guaranteed by the Letters that Follow Your Name: A Response to My Critics

    ERIC Educational Resources Information Center

    Perry, Robin E.

    2006-01-01

    This article is a formal response to those that authored critiques of the author's research. Each author has provided a thoughtful and critical perspective highlighting the perceived merits and demerits of the research questions posed, theoretical assumptions underlying the inquiry, study design and methodology, and interpretations garnered from…

  7. Calculation of Macrosegregation in an Ingot

    NASA Technical Reports Server (NTRS)

    Poirier, D. R.; Maples, A. L.

    1986-01-01

    Report describes both two-dimensional theoretical model of macrosegregation (separating into regions of discrete composition) in solidification of binary alloy in chilled rectangular mold and interactive computer program embodying model. Model evolved from previous ones limited to calculating effects of interdendritic fluid flow on final macrosegregation for given input temperature field under assumption of no fluid in bulk melt.

  8. Comparison of Unidimensional and Multidimensional Approaches to IRT Parameter Estimation. Research Report. ETS RR-04-44

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2004-01-01

    It is common to assume during statistical analysis of a multiscale assessment that the assessment has simple structure or that it is composed of several unidimensional subtests. Under this assumption, both the unidimensional and multidimensional approaches can be used to estimate item parameters. This paper theoretically demonstrates that these…

  9. Developing Orthographic Awareness among Beginning Chinese Language Learners: Investigating the Influence of Beginning Level Textbooks

    ERIC Educational Resources Information Center

    Fan, Hui-Mei

    2010-01-01

    The present study is based on the theoretical assumptions that frequency of characters and their structural components, as well as the frequency types of structural components, are important to enable learners of Chinese as a foreign language (CFL) to discover the underlying structure of Chinese characters. In the CFL context, since reliable…

  10. Does McNemar's test compare the sensitivities and specificities of two diagnostic tests?

    PubMed

    Kim, Soeun; Lee, Woojoo

    2017-02-01

    McNemar's test is often used in practice to compare the sensitivities and specificities of two diagnostic tests. For correct evaluation of accuracy, an intuitive recommendation is to test the diseased and the non-diseased groups separately, so that sensitivities are compared among the diseased and specificities are compared among the healthy group of people. This paper provides a rigorous theoretical framework for this argument and studies the validity of McNemar's test regardless of the conditional independence assumption. We derive McNemar's test statistic under the null hypothesis considering both assumptions of conditional independence and conditional dependence. We then perform power analyses to show how the result is affected by the amount of conditional dependence under the alternative hypothesis.
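
    A minimal sketch of the comparison described above, applying McNemar's test separately within the diseased group (to compare sensitivities) and the non-diseased group (to compare specificities); the paired 2x2 counts are purely illustrative:

```python
from statsmodels.stats.contingency_tables import mcnemar

def paired_test(table, exact=False):
    """McNemar's test on a paired 2x2 table of Test A vs. Test B outcomes.

    table[i][j] -- subjects with Test A result i and Test B result j
    (0 = negative, 1 = positive), counted within a single disease-status group.
    """
    return mcnemar(table, exact=exact, correction=True)

diseased = [[20, 12], [5, 63]]    # within the diseased: compares sensitivities
healthy = [[150, 8], [17, 25]]    # within the healthy: compares specificities
print(paired_test(diseased))
print(paired_test(healthy))
```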

  11. Optimal policy for value-based decision-making.

    PubMed

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.

  12. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
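
    To illustrate the model class discussed in this abstract (a hedged sketch under simplified assumptions, not the authors' derivation), a drift diffusion trial with linearly collapsing decision boundaries can be simulated as follows:

```python
import numpy as np

def simulate_ddm_trial(drift, bound0=1.0, collapse=0.5, noise=1.0, dt=0.001, t_max=5.0, seed=0):
    """Simulate one drift-diffusion trial with boundaries collapsing linearly over time.

    drift    -- mean evidence accumulation rate (e.g. the value difference between options)
    bound0   -- initial half-separation of the decision boundaries
    collapse -- boundary shrinkage rate per second (collapsing bounds, as in optimal policies)
    Returns (choice, reaction_time) with choice in {+1, -1}.
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(bound0 - collapse * t, 0.0)
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (+1 if x >= 0 else -1), t_max

print(simulate_ddm_trial(drift=0.8))
```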

  13. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
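
    For orientation, the simple harvest formula critiqued above is usually written in the Wade-style form sketched below (a reference sketch with illustrative parameter values, not the authors' seabird case study):

```python
def potential_biological_removal(n_min, r_max, recovery_factor=0.5):
    """PBR = N_min * 0.5 * R_max * F_r (standard simple harvest-model form).

    n_min           -- conservative (e.g. 20th percentile) estimate of population size
    r_max           -- maximum intrinsic rate of population growth
    recovery_factor -- F_r in (0, 1]; smaller values are more precautionary
    """
    return n_min * 0.5 * r_max * recovery_factor

# Illustrative seabird-like values: 10,000 birds, R_max = 0.12, F_r = 0.5
print(potential_biological_removal(10_000, 0.12, 0.5))  # -> 300.0 additional mortalities/year
```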

  14. Quantitative force measurements using frequency modulation atomic force microscopy—theoretical foundations

    NASA Astrophysics Data System (ADS)

    Sader, John E.; Uchihashi, Takayuki; Higgins, Michael J.; Farrell, Alan; Nakayama, Yoshikazu; Jarvis, Suzanne P.

    2005-03-01

    Use of the atomic force microscope (AFM) in quantitative force measurements inherently requires a theoretical framework enabling conversion of the observed deflection properties of the cantilever to an interaction force. In this paper, the theoretical foundations of using frequency modulation atomic force microscopy (FM-AFM) in quantitative force measurements are examined and rigorously elucidated, with consideration being given to both 'conservative' and 'dissipative' interactions. This includes a detailed discussion of the underlying assumptions involved in such quantitative force measurements, the presentation of globally valid explicit formulae for evaluation of so-called 'conservative' and 'dissipative' forces, discussion of the origin of these forces, and analysis of the applicability of FM-AFM to quantitative force measurements in liquid.
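
    For context only (the globally valid formulae derived in the paper are not reproduced here), the familiar small-amplitude limit relating the FM-AFM frequency shift to the gradient of the conservative tip-sample force is:

```latex
% Small-amplitude limit of FM-AFM (standard result, not the paper's general formula):
\frac{\Delta f}{f_0} \approx -\frac{1}{2k}\,\frac{\partial F_{\mathrm{ts}}}{\partial z}
% f_0: cantilever resonance frequency, k: spring constant, F_ts(z): conservative tip-sample force.
```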

  15. Theoretical study of strength of elastic-plastic water-saturated interface under constrained shear

    NASA Astrophysics Data System (ADS)

    Dimaki, Andrey V.; Shilko, Evgeny V.; Psakhie, Sergey G.

    2016-11-01

    This paper presents a theoretical study of the shear strength of an elastic-plastic water-filled interface between elastic permeable blocks under compression. The medium is described within the discrete element method. The relationship between the stress-strain state of the solid skeleton and the pore pressure of the liquid is described in the framework of Biot's model of poroelasticity. The simulation demonstrates that the shear strength of an elastic-plastic interface depends strongly and non-linearly on the values of permeability and loading. We have proposed an empirical relation that approximates the obtained results of the numerical simulation under the assumption of an interplay between dilation of the material and mass transfer of the liquid.

  16. Performance management in healthcare: a critical analysis.

    PubMed

    Hewko, Sarah J; Cummings, Greta G

    2016-01-01

    Purpose - The purpose of this paper is to explore the underlying theoretical assumptions and implications of current micro-level performance management and evaluation (PME) practices, specifically within health-care organizations. PME encompasses all activities that are designed and conducted to align employee outputs with organizational goals. Design/methodology/approach - PME, in the context of healthcare, is analyzed through the lens of critical theory. Specifically, Habermas' theory of communicative action is used to highlight some of the questions that arise in looking critically at PME. To provide a richer definition of key theoretical concepts, the authors conducted a preliminary, exploratory hermeneutic semantic analysis of the key words "performance" and "management" and of the term "performance management". Findings - Analysis reveals that existing micro-level PME systems in health-care organizations have the potential to create a workforce that is compliant, dependent, technically oriented and passive, and to support health-care systems in which inequalities and power imbalances are perpetually reinforced. Practical implications - At a time when the health-care system is under increasing pressure to provide high-quality, affordable services with fewer resources, it may be wise to investigate new sector-specific ways of evaluating and managing performance. Originality/value - In this paper, written for health-care leaders and health human resource specialists, the theoretical assumptions and implications of current PME practices within health-care organizations are explored. It is hoped that readers will be inspired to support innovative PME practices within their organizations that encourage peak performance among health-care professionals.

  17. Constructor theory of probability

    PubMed Central

    2016-01-01

    Unitary quantum theory, having no Born Rule, is non-probabilistic. Hence the notorious problem of reconciling it with the unpredictability and appearance of stochasticity in quantum measurements. Generalizing and improving upon the so-called ‘decision-theoretic approach’, I shall recast that problem in the recently proposed constructor theory of information—where quantum theory is represented as one of a class of superinformation theories, which are local, non-probabilistic theories conforming to certain constructor-theoretic conditions. I prove that the unpredictability of measurement outcomes (to which constructor theory gives an exact meaning) necessarily arises in superinformation theories. Then I explain how the appearance of stochasticity in (finitely many) repeated measurements can arise under superinformation theories. And I establish sufficient conditions for a superinformation theory to inform decisions (made under it) as if it were probabilistic, via a Deutsch–Wallace-type argument—thus defining a class of decision-supporting superinformation theories. This broadens the domain of applicability of that argument to cover constructor-theory compliant theories. In addition, in this version some of the argument's assumptions, previously construed as merely decision-theoretic, follow from physical properties expressed by constructor-theoretic principles. PMID:27616914

  18. The Influence of Theoretical Tools on Teachers' Orientation to Notice and Classroom Practice: A Case Study

    ERIC Educational Resources Information Center

    Mellone, Maria

    2011-01-01

    Assumptions about the construction and the transmission of knowledge and about the nature of mathematics always underlie any teaching practice, even if often unconsciously. I examine the conjecture that theoretical tools suitably chosen can help the teacher to make such assumptions explicit and to support the teacher's reflection on his/her…

  19. Planning Under Continuous Time and Resource Uncertainty: A Challenge for AI

    NASA Technical Reports Server (NTRS)

    Bresina, John; Dearden, Richard; Meuleau, Nicolas; Smith, David; Washington, Rich; Clancy, Daniel (Technical Monitor)

    2002-01-01

    There has been considerable work in AI on decision-theoretic planning and planning under uncertainty. Unfortunately, all of this work suffers from one or more of the following limitations: 1) it relies on very simple models of actions and time, 2) it assumes that uncertainty is manifested in discrete action outcomes, and 3) it is only practical for very small problems. For many real world problems, these assumptions fail to hold. A case in point is planning the activities for a Mars rover. For this domain none of the above assumptions are valid: 1) actions can be concurrent and have differing durations, 2) there is uncertainty concerning action durations and consumption of continuous resources like power, and 3) typical daily plans involve on the order of a hundred actions. We describe the rover problem, discuss previous work on planning under uncertainty, and present a detailed, but very small, example illustrating some of the difficulties of finding good plans.

  20. A unified approach for determining the ultimate strength of RC members subjected to combined axial force, bending, shear and torsion

    PubMed Central

    Huang, Zhen

    2017-01-01

    This paper uses experimental investigation and theoretical derivation to study the unified failure mechanism and ultimate capacity model of reinforced concrete (RC) members under combined axial, bending, shear and torsion loading. Fifteen RC members are tested under different combinations of compressive axial force, bending, shear and torsion using experimental equipment designed by the authors. The failure mechanism and ultimate strength data for the four groups of tested RC members under different combined loading conditions are investigated and discussed in detail. The experimental research seeks to determine how the ultimate strength of RC members changes with changing combined loads. According to the experimental research, a unified theoretical model is established by determining the shape of the warped failure surface, assuming an appropriate stress distribution on the failure surface, and considering the equilibrium conditions. This unified failure model can be reasonably and systematically changed into well-known failure theories of concrete members under single or combined loading. The unified calculation model could be easily used in design applications with some assumptions and simplifications. Finally, the accuracy of this theoretical unified model is verified by comparisons with experimental results. PMID:28414777

  1. The theories underpinning rational emotive behaviour therapy: where's the supportive evidence?

    PubMed

    MacInnes, Douglas

    2004-08-01

    This paper examines the underlying theoretical philosophy of one of the most widely used cognitive behaviour therapies, rational emotive behaviour therapy. It examines whether two central theoretical principles are supported by research evidence: firstly, that irrational beliefs lead to dysfunctional emotions and inferences and that rational beliefs lead to functional emotions and inferences and, secondly, that demand beliefs are the primary core irrational belief. The established criteria for evaluating the efficacy of the theories are detailed and used to evaluate the strength of evidence supporting these two assumptions. The findings indicate there is limited evidence to support these theories. Copyright 2004 Elsevier Ltd.

  2. Mathematical modelling of clostridial acetone-butanol-ethanol fermentation.

    PubMed

    Millat, Thomas; Winzer, Klaus

    2017-03-01

    Clostridial acetone-butanol-ethanol (ABE) fermentation features a remarkable shift in the cellular metabolic activity from acid formation, acidogenesis, to the production of industrial-relevant solvents, solventogenesis. In recent decades, mathematical models have been employed to elucidate the complex interlinked regulation and conditions that determine these two distinct metabolic states and govern the transition between them. In this review, we discuss these models with a focus on the mechanisms controlling intra- and extracellular changes between acidogenesis and solventogenesis. In particular, we critically evaluate underlying model assumptions and predictions in the light of current experimental knowledge. Towards this end, we briefly introduce key ideas and assumptions applied in the discussed modelling approaches, but forgo a comprehensive mathematical presentation. We distinguish between structural and dynamical models, which will be discussed in their chronological order to illustrate how new biological information facilitates the 'evolution' of mathematical models. Mathematical models and their analysis have significantly contributed to our knowledge of ABE fermentation and the underlying regulatory network, which spans all levels of biological organization. However, the ties between the different levels of cellular regulation are not well understood. Furthermore, contradictory experimental and theoretical results challenge our current notion of ABE metabolic network structure. Thus, clostridial ABE fermentation still poses theoretical as well as experimental challenges, which are best approached in close collaboration between modellers and experimentalists.

  3. Assessment of dietary exposure in the French population to 13 selected food colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners.

    PubMed

    Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc

    2008-01-01

    The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).
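
    The arithmetic behind such intake estimates is straightforward; a hedged sketch of the per-subject calculation under the second assumption (foods always contain the additive) is shown below, with food items, additive levels, body weight and ADI chosen purely for illustration:

```python
def intake_as_pct_of_adi(consumption_g_per_day, level_mg_per_kg_food, body_weight_kg, adi_mg_per_kg_bw):
    """Daily additive intake for one subject, expressed as a percentage of the ADI.

    consumption_g_per_day -- dict: food item -> mean daily consumption (g/day)
    level_mg_per_kg_food  -- dict: food item -> reported additive use level (mg/kg food)
    """
    intake_mg = sum(grams / 1000.0 * level_mg_per_kg_food.get(food, 0.0)
                    for food, grams in consumption_g_per_day.items())
    return 100.0 * (intake_mg / body_weight_kg) / adi_mg_per_kg_bw

# Illustrative only: 50 g/day of cured meat at 100 mg/kg, 60 kg adult, ADI 0.07 mg/kg bw/day
print(intake_as_pct_of_adi({"cured meat": 50.0}, {"cured meat": 100.0}, 60.0, 0.07))
```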

  4. Teaching Critical Thinking by Examining Assumptions

    ERIC Educational Resources Information Center

    Yanchar, Stephen C.; Slife, Brent D.

    2004-01-01

    We describe how instructors can integrate the critical thinking skill of examining theoretical assumptions (e.g., determinism and materialism) and implications into psychology courses. In this instructional approach, students formulate questions that help them identify assumptions and implications, use those questions to identify and examine the…

  5. Stability analysis for virus spreading in complex networks with quarantine and non-homogeneous transition rates

    NASA Astrophysics Data System (ADS)

    Alarcon-Ramos, L. A.; Schaum, A.; Rodríguez Lucatero, C.; Bernal Jaquez, R.

    2014-03-01

    The propagation of viruses in complex networks has been studied in the framework of discrete-time Markov process dynamical systems. These studies have been carried out under the assumption of homogeneous transition rates, yielding conditions for virus extinction in terms of the transition probabilities and the largest eigenvalue of the connectivity matrix. Nevertheless, the assumption of homogeneous rates is rather restrictive. In the present study we consider non-homogeneous transition rates, assigned according to a uniform distribution, with susceptible, infected and quarantine states, thus generalizing the previous studies. A remarkable result of this analysis is that extinction depends on the weakest element in the network. Simulation results are presented for large scale-free networks that corroborate our theoretical findings.
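
    As a rough numerical illustration of the kind of spectral condition involved (a sketch only; the exact non-homogeneous extinction condition derived in the paper may differ), one can draw per-node infection and recovery rates from a uniform distribution and compare a rate-weighted spectral radius against the homogeneous-rate bound:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200

        # Random undirected connectivity matrix (Erdos-Renyi graph, illustrative only).
        A = (rng.random((N, N)) < 0.05).astype(float)
        A = np.triu(A, 1)
        A = A + A.T

        # Non-homogeneous per-node infection and recovery rates, uniformly distributed.
        beta = rng.uniform(0.01, 0.10, size=N)   # infection rates
        delta = rng.uniform(0.20, 0.80, size=N)  # recovery/quarantine rates

        # Homogeneous-rate condition: (beta/delta) * lambda_max(A) < 1 implies extinction.
        lam_A = np.max(np.real(np.linalg.eigvals(A)))
        print("homogeneous bound:", beta.mean() / delta.mean() * lam_A)

        # One common heterogeneous generalisation: spectral radius of diag(beta/delta) A.
        lam_M = np.max(np.real(np.linalg.eigvals(np.diag(beta / delta) @ A)))
        print("heterogeneous bound:", lam_M, "(extinction expected if < 1)")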

  6. Tip of the Tongue States Increase Under Evaluative Observation.

    PubMed

    James, Lori E; Schmank, Christopher J; Castro, Nichol; Buchanan, Tony W

    2018-02-01

    We tested the frequent assumption that the difficulty of word retrieval increases when a speaker is being observed and evaluated. We modified the Trier Social Stress Test (TSST) so that participants believed that its evaluative observation components continued throughout the duration of a subsequent word retrieval task, and measured participants' reported tip of the tongue (TOT) states. Participants in this TSST condition experienced more TOTs than participants in a comparable, placebo TSST condition in which there was no suggestion of evaluative observation. This experiment provides initial evidence confirming the assumption that evaluative observation by a third party can be disruptive to word retrieval. We interpret our findings by proposing an extension to a well-supported theoretical model of TOTs.

  7. A generative inference framework for analysing patterns of cultural change in sparse population data with evidence for fashion trends in LBK culture.

    PubMed

    Kandler, Anne; Shennan, Stephen

    2015-12-06

    Cultural change can be quantified by temporal changes in the frequency of different cultural artefacts, and a central question is to identify which underlying cultural transmission processes could have caused the observed frequency changes. Observed changes, however, often describe the dynamics in samples of the population of artefacts, whereas transmission processes act on the whole population. Here we develop a modelling framework aimed at addressing this inference problem. To do so, we first generate population structures from which the observed sample could have been drawn randomly, and then determine theoretical samples at a later time t2 produced under the assumption that changes in frequencies are caused by a specific transmission process. Thereby we also account for the potential effect of time-averaging processes in the generation of the observed sample. Subsequent statistical comparisons (e.g. using Bayesian inference) of the theoretical and observed samples at t2 can establish which processes could have produced the observed frequency data. In this way, we infer underlying transmission processes directly from available data without any equilibrium assumption. We apply this framework to a dataset describing pottery from settlements of some of the first farmers in Europe (the LBK culture) and conclude that the observed frequency dynamics of different types of decorated pottery are consistent with age-dependent selection, a preference for 'young' pottery types which is potentially indicative of fashion trends. © 2015 The Author(s).
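
    The generative step can be sketched in a stripped-down form (a simplified illustration under an unbiased-copying assumption, not the authors' implementation): reconstruct candidate populations consistent with the observed counts at t1, propagate them forward under the candidate transmission process, and draw theoretical samples at t2 for comparison with the data.

        import numpy as np

        rng = np.random.default_rng(1)

        def theoretical_sample(obs_counts_t1, pop_size, generations, sample_size_t2):
            """One theoretical sample at t2 under unbiased copying (neutral drift)."""
            freq = np.asarray(obs_counts_t1, dtype=float)
            # 1. Generate a population from which the t1 sample could have been drawn.
            pop = rng.multinomial(pop_size, freq / freq.sum())
            # 2. Propagate the whole population forward under the transmission process.
            for _ in range(generations):
                pop = rng.multinomial(pop_size, pop / pop.sum())
            # 3. Draw a sample at t2 for comparison with the observed t2 frequencies.
            return rng.multinomial(sample_size_t2, pop / pop.sum())

        # Hypothetical observed counts of four decoration types at t1.
        obs_t1 = [40, 25, 20, 15]
        sims = np.array([theoretical_sample(obs_t1, 5000, 20, 100) for _ in range(1000)])
        print("mean simulated t2 counts:", sims.mean(axis=0))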

  8. Bayesian learning and the psychology of rule induction

    PubMed Central

    Endress, Ansgar D.

    2014-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791

  9. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  10. A theoretical and practical test of geographical profiling with serial vehicle theft in a U.K. context.

    PubMed

    Tonkin, Matthew; Woodhams, Jessica; Bond, John W; Loe, Trudy

    2010-01-01

    Geographical profiling is an investigative methodology sometimes employed by the police to predict the residence of an unknown offender from the locations of his/her crimes. The validity of geographical profiling, however, has not been fully explored for certain crime types. This study, therefore, presents a preliminary test of the potential for geographical profiling with a sample of 145 serial vehicle thieves from the U.K. The behavioural assumptions underlying geographical profiling (distance decay and domocentricity) are tested and a simple practical test of profiling using the spatial mean is presented. There is evidence for distance decay but not domocentricity among the spatial behaviour of car thieves from the U.K. A degree of success was achieved when applying the spatial mean on a case-by-case basis. The level of success varied, however, and neither series length in days nor number of crimes could account for the variation. The findings question previously held assumptions regarding geographical profiling and have potential theoretical and practical implications for the study and investigation of vehicle theft in the U.K. 2009 John Wiley & Sons, Ltd.
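
    The practical test based on the spatial mean reduces to predicting the offender's residence as the centroid of the offence locations and measuring the error as the distance to the actual home; a minimal sketch (coordinates below are hypothetical, and the study's accuracy metric may differ):

        import math

        def spatial_mean(points):
            """Centroid of the offence locations, given as (x, y) map coordinates."""
            xs, ys = zip(*points)
            return sum(xs) / len(xs), sum(ys) / len(ys)

        # Hypothetical series of vehicle-theft locations and the offender's home.
        crimes = [(2.0, 3.5), (4.1, 1.0), (3.2, 2.8), (5.0, 3.9)]
        home = (3.0, 3.0)

        pred = spatial_mean(crimes)
        print(f"predicted residence: {pred}, error: {math.dist(pred, home):.2f} map units")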

  11. New directions in evidence-based policy research: a critical analysis of the literature

    PubMed Central

    2014-01-01

    Despite 40 years of research into evidence-based policy (EBP) and a continued drive from both policymakers and researchers to increase research uptake in policy, barriers to the use of evidence are persistently identified in the literature. However, it is not clear what explains this persistence – whether these barriers represent real factors or whether they are artefacts of the approaches used to study EBP. Based on an updated review, this paper analyses this literature to explain persistent barriers and facilitators. We critically describe the literature in terms of its theoretical underpinnings, definitions of ‘evidence’, methods, and underlying assumptions of research in the field, and aim to illuminate the EBP discourse by comparison with approaches from other fields. Much of the research in this area is theoretically naive, focusing primarily on the uptake of research evidence as opposed to evidence defined more broadly, and privileging academics’ research priorities over those of policymakers. Little empirical data analysing the processes or impact of evidence use in policy is available to inform researchers or decision-makers. EBP research often assumes that policymakers do not use evidence and that greater use of evidence – meaning research evidence – would benefit policymakers and populations. We argue that these assumptions are unsupported, biasing much of EBP research. The agenda of ‘getting evidence into policy’ has side-lined the empirical description and analysis of how research and policy actually interact in vivo. Rather than asking how research evidence can be made more influential, academics should aim to understand what influences and constitutes policy, and produce more critically and theoretically informed studies of decision-making. We question the main assumptions made by EBP researchers, explore the implications of doing so, and propose new directions for EBP research and health policy. PMID:25023520

  12. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes (supervised, unsupervised, and combined supervised-unsupervised) are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that, under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
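
    The combined estimator can be illustrated with a small EM sketch for a two-component, one-dimensional normal mixture (a simplified illustration of the idea, not the estimator analysed in the paper): labelled samples keep fixed class memberships, while unlabeled samples contribute through their posterior responsibilities.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic data: two normal classes, few labelled samples, many unlabeled ones.
        x_lab = np.concatenate([rng.normal(0, 1, 10), rng.normal(3, 1, 10)])
        y_lab = np.array([0] * 10 + [1] * 10)
        x_unl = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
        x_all = np.concatenate([x_lab, x_unl])

        def normal_pdf(x, mu, var):
            return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

        # Initialise from the labelled samples alone (the purely supervised estimate).
        mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
        var = np.array([x_lab[y_lab == k].var() + 1e-3 for k in (0, 1)])
        pi = np.array([0.5, 0.5])

        for _ in range(50):
            # E-step: posterior responsibilities for the unlabeled data only.
            p = np.stack([pi[k] * normal_pdf(x_unl, mu[k], var[k]) for k in (0, 1)])
            r_unl = p / p.sum(axis=0)
            # M-step: labelled samples enter with fixed (0/1) memberships.
            for k in (0, 1):
                w = np.concatenate([(y_lab == k).astype(float), r_unl[k]])
                mu[k] = np.average(x_all, weights=w)
                var[k] = np.average((x_all - mu[k]) ** 2, weights=w) + 1e-6
                pi[k] = w.sum() / len(x_all)

        print("means:", mu, "variances:", var, "mixing proportions:", pi)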

  13. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
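
    A greedy sketch of design selection by integrated mean square prediction error, using a best linear unbiased (simple kriging) predictor with an assumed Gaussian covariance over a descriptor space (a simplified illustration; the paper's scheme for direct iterative minimization is more general):

        import numpy as np

        rng = np.random.default_rng(3)
        candidates = rng.random((200, 2))    # candidate compounds as 2-D descriptors
        quad_points = rng.random((500, 2))   # Monte Carlo points approximating the integral

        def cov(a, b, length_scale=0.3):
            """Assumed Gaussian (squared-exponential) covariance of the response surface."""
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / length_scale ** 2)

        def integrated_mse(design):
            """Mean simple-kriging prediction variance over the quadrature points."""
            X = candidates[design]
            K_inv = np.linalg.inv(cov(X, X) + 1e-8 * np.eye(len(design)))
            k = cov(quad_points, X)
            return float(np.mean(1.0 - np.einsum('ij,jk,ik->i', k, K_inv, k)))

        # Greedy selection: add the candidate that most reduces the integrated MSE.
        design = []
        for _ in range(10):
            best = min((i for i in range(len(candidates)) if i not in design),
                       key=lambda i: integrated_mse(design + [i]))
            design.append(best)

        print("selected subset indices:", design)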

  14. The Attitudes of Navy Corrections Staff Members: What they Think About Confinees and their Jobs

    DTIC Science & Technology

    1994-02-01

    and socially involved citizens (Irwin, 1974). The attitudes and behaviors of the custodial and program staff are thought to be essential factors for...developed. Initially, we have proceeded with the underlying theoretical assumption (supported by previous research) that both the attitudes and behaviors of...suggested that one of the contextual variables related to attitudes about a group was the views of persons with reward power. It may be that negative

  15. Helicopter rotor loads using a matched asymptotic expansion technique

    NASA Technical Reports Server (NTRS)

    Pierce, G. A.; Vaidyanathan, A. R.

    1981-01-01

    The theoretical basis and computational feasibility of the Van Holten method were examined, along with its performance and range of validity, by comparison with experiment and with other approximate methods. It is found that, within the restrictions of incompressible, potential flow and the assumption of small disturbances, the method does lead to a valid description of the flow. However, the method begins to break down under conditions favoring nonlinear effects such as wake distortion and blade/rotor interaction.

  16. Basic principles of respiratory function monitoring in ventilated newborns: A review.

    PubMed

    Schmalisch, Gerd

    2016-09-01

    Respiratory monitoring during mechanical ventilation provides a real-time picture of patient-ventilator interaction and is a prerequisite for lung-protective ventilation. Nowadays, measurements of airflow, tidal volume and applied pressures are standard in neonatal ventilators. The measurement of lung volume during mechanical ventilation by tracer gas washout techniques is still under development. The clinical use of capnography, although well established in adults, has not been embraced by neonatologists because of technical and methodological problems in very small infants. While the ventilatory parameters are well defined, the calculation of other physiological parameters is based upon specific assumptions which are difficult to verify. Incomplete knowledge of the theoretical background of these calculations and their limitations can lead to incorrect interpretations with clinical consequences. Therefore, the aim of this review was to describe the basic principles and the underlying assumptions of currently used methods for respiratory function monitoring in ventilated newborns and to highlight methodological limitations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Theory of periodic swarming of bacteria: Application to Proteus mirabilis

    NASA Astrophysics Data System (ADS)

    Czirók, A.; Matsushita, M.; Vicsek, T.

    2001-03-01

    The periodic swarming of bacteria is one of the simplest examples of pattern formation produced by the self-organized collective behavior of a large number of organisms. In the spectacular colonies of Proteus mirabilis (the most common species exhibiting this type of growth), a series of concentric rings develops as the bacteria multiply and swarm, following a scenario that periodically repeats itself. We have developed a theoretical description for this process in order to obtain a deeper insight into some of the typical processes governing the phenomena in systems of many interacting living units. Our approach is based on simple assumptions directly related to the latest experimental observations on colony formation under various conditions. The corresponding one-dimensional model consists of two coupled differential equations investigated here both by numerical integration and by analyzing the various expressions obtained from these equations using a few natural assumptions about the parameters of the model. We determine the phase diagram corresponding to systems exhibiting periodic swarming, and discuss in detail how the various stages of colony development can be interpreted in our framework. We point out that all of our theoretical results are in excellent agreement with the complete set of available observations. Thus the present study represents one of the few examples where self-organized biological pattern formation is understood within a relatively simple theoretical approach, leading to results and predictions fully compatible with experiments.

  18. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    NASA Astrophysics Data System (ADS)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and the framework provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium to develop the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.

  19. Potential of wind power projects under the Clean Development Mechanism in India

    PubMed Central

    Purohit, Pallav; Michaelowa, Axel

    2007-01-01

    Background So far, the cumulative installed capacity of wind power projects in India is far below its gross potential (≤ 15%), despite a very high level of policy support, tax benefits and long-term financing schemes for more than 10 years. One of the major barriers is the high cost of investment in these systems. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at lowest cost that also promotes sustainable development in the host country. Wind power projects could be of interest under the CDM because they directly displace greenhouse gas emissions while contributing to sustainable rural development, if developed correctly. Results Our estimates indicate that there is a vast theoretical potential of CO2 mitigation by the use of wind energy in India. The annual potential Certified Emission Reductions (CERs) of wind power projects in India could theoretically reach 86 million. Under more realistic assumptions about the diffusion of wind power projects based on past experience with the government-run programmes, annual CER volumes could reach 41 to 67 million by 2012 and 78 to 83 million by 2020. Conclusion The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. CDM could help to achieve the maximum utilization potential more rapidly than the current diffusion trend if supportive policies are introduced. PMID:17663772
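
    The order of magnitude of such CER volumes follows from a standard baseline calculation: annual emission reductions are roughly installed capacity times capacity utilisation factor times hours per year times the baseline grid emission factor. A back-of-the-envelope sketch (all parameter values below are illustrative assumptions, not figures from the paper):

        # Rough annual CER estimate for wind capacity displacing grid electricity.
        capacity_mw = 45000.0        # assumed installed wind capacity, MW
        capacity_factor = 0.22       # assumed average capacity utilisation factor
        hours_per_year = 8760.0
        grid_emission_factor = 0.9   # assumed baseline grid emission factor, tCO2/MWh

        generation_mwh = capacity_mw * capacity_factor * hours_per_year
        annual_cers = generation_mwh * grid_emission_factor   # 1 CER = 1 tCO2e avoided
        print(f"generation: {generation_mwh:.3e} MWh/year, CERs: {annual_cers:.3e}/year")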

  20. Early rationality in action perception and production? A theoretical exposition.

    PubMed

    Paulus, Markus; Király, Ildikó

    2013-10-01

    Within recent years, the question of early rationality in action perception and production has become a topic of great interest in developmental psychology. On the one hand, studies have provided evidence for rational action perception and action imitation even in very young infants. On the other hand, scholars have recently questioned these interpretations and proposed that the ability to rationally evaluate actions is not yet in place in infancy. Others have examined the development of the ability to make rational action choices and have indicated limitations of young children's ability to act rationally. This editorial to the special issue on Early Rationality in Action Perception and Production? introduces the reader to the current debate. It elucidates the underlying theoretical assumptions that drive the debate on whether or not young children's action perception and production is rational. Finally, it summarizes the papers and their contributions to the theoretical debate. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. What do men want? Re-examining whether men benefit from higher fertility than is optimal for women

    PubMed Central

    Sear, Rebecca

    2016-01-01

    Several empirical observations suggest that when women have more autonomy over their reproductive decisions, fertility is lower. Some evolutionary theorists have interpreted this as evidence for sexual conflicts of interest, arguing that higher fertility is more adaptive for men than women. We suggest the assumptions underlying these arguments are problematic: assuming that women suffer higher costs of reproduction than men neglects the (different) costs of reproduction for men; the assumption that men can repartner is often false. We use simple models to illustrate that (i) men or women can prefer longer interbirth intervals (IBIs), (ii) if men can only partner with wives sequentially they may favour shorter IBIs than women, but such a strategy would only be optimal for a few men who can repartner. This suggests that an evolved universal male preference for higher fertility than women prefer is implausible and is unlikely to fully account for the empirical data. This further implies that if women have more reproductive autonomy, populations should grow, not decline. More precise theoretical explanations with clearly stated assumptions, and data that better address both ultimate fitness consequences and proximate psychological motivations, are needed to understand under which conditions sexual conflict over reproductive timing should arise. PMID:27022076

  2. Stress regularity in quasi-static perfect plasticity with a pressure dependent yield criterion

    NASA Astrophysics Data System (ADS)

    Babadjian, Jean-François; Mora, Maria Giovanna

    2018-04-01

    This work is devoted to establishing a regularity result for the stress tensor in quasi-static, planar, isotropic, linearly elastic-perfectly plastic materials obeying a Drucker-Prager or Mohr-Coulomb yield criterion. Under suitable assumptions on the data, it is proved that the stress tensor has a spatial gradient that is locally square-integrable. As a corollary, the usual measure-theoretical flow rule is expressed in a strong form using the quasi-continuous representative of the stress.

  3. Phase retrieval from local measurements in two dimensions

    NASA Astrophysics Data System (ADS)

    Iwen, Mark; Preskitt, Brian; Saab, Rayan; Viswanathan, Aditya

    2017-08-01

    The phase retrieval problem has appeared in a multitude of applications for decades. While ad hoc solutions have existed since the early 1970s, recent developments have provided algorithms that offer promising theoretical guarantees under increasingly realistic assumptions. Motivated by ptychographic imaging, we generalize a recent result on phase retrieval of a one-dimensional objective vector x ∈ ℂ^d to recover a two-dimensional sample Q ∈ ℂ^(d×d) from phaseless measurements, using a tensor product formulation to extend the previous work.

  4. Status of wing flutter

    NASA Technical Reports Server (NTRS)

    Kussner, H G

    1936-01-01

    This report presents a survey of previous theoretical and experimental investigations on wing flutter covering thirteen cases of flutter observed on airplanes. The direct cause of flutter is, in the majority of cases, attributable to (mass-) unbalanced ailerons. Under the conservative assumption that the flutter with the phase angle most favorable for excitation occurs only in two degrees of freedom, the lowest critical speed can be estimated from the data obtained on the oscillation bench. Corrective measures for increasing the critical speed and for definite avoidance of wing flutter, are discussed.

  5. Using the theoretical domains framework to identify barriers and enablers to pediatric asthma management in primary care settings.

    PubMed

    Yamada, Janet; Potestio, Melissa L; Cave, Andrew J; Sharpe, Heather; Johnson, David W; Patey, Andrea M; Presseau, Justin; Grimshaw, Jeremy M

    2017-12-20

    This study aimed to apply a theory-based approach to identify barriers and enablers to implementing the Alberta Primary Care Asthma Pediatric Pathway (PCAPP) into clinical practice. Phase 1 included an assessment of assumptions underlying the intervention from the perspectives of the developers. Phase 2 determined the perceived barriers and enablers for: 1) primary care physicians' prescribing practices, 2) allied health care professionals' provision of asthma education to parents, and 3) children and parents' adherence to their treatment plans. Interviews were conducted with 35 individuals who reside in Alberta, Canada. Phase 1 included three developers. Phase 2 included 11 primary care physicians, 10 allied health care professionals, and 11 parents of children with asthma. Phase 2 interviews were based on the 14 domains of the Theoretical Domains Framework (TDF). Transcribed interviews were analyzed using a directed content analysis. Key assumptions by the developers about the intervention, and beliefs by others about the barriers and enablers of the targeted behaviors were identified. Eight TDF domains mapped onto the assumptions of the pathway as described by the intervention developers. Interviews with health care professionals and parents identified nine TDF domains that influenced the targeted behaviors: knowledge, skills, beliefs about capabilities, social/professional role and identity, beliefs about consequences, environmental context and resources, behavioral regulation, social influences, and emotions. Barriers and enablers perceived by health care professionals and parents that influenced asthma management will inform the optimization of the PCAPP prior to its evaluation.

  6. Collapse of Experimental Colloidal Aging using Record Dynamics

    NASA Astrophysics Data System (ADS)

    Robe, Dominic; Boettcher, Stefan; Sibani, Paolo; Yunker, Peter

    The theoretical framework of record dynamics (RD) posits that aging behavior in jammed systems is controlled by short, rare events involving activation of only a few degrees of freedom. RD predicts that dynamics in an aging system progress with the logarithm of t/t_w. This prediction has been verified through new analysis of experimental data on an aging 2D colloidal system. MSD and persistence curves spanning three orders of magnitude in waiting time are collapsed. These predictions have also been found consistent with a number of experiments and simulations, but verification of the specific assumptions that RD makes about the underlying statistics of these rare events has been elusive. Here the observation of individual particles allows for the first time the direct verification of the assumptions about event rates and sizes. This work is supported by NSF Grant DMR-1207431.

  7. Quantum State Tomography via Reduced Density Matrices.

    PubMed

    Xin, Tao; Lu, Dawei; Klassen, Joel; Yu, Nengkun; Ji, Zhengfeng; Chen, Jianxin; Ma, Xian; Long, Guilu; Zeng, Bei; Laflamme, Raymond

    2017-01-13

    Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However, it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work, we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally, we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results demonstrate the advantages and possible pitfalls of quantum state tomography with local measurements.

  8. The genetical theory of social behaviour

    PubMed Central

    Lehmann, Laurent; Rousset, François

    2014-01-01

    We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1–16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments. PMID:24686929

  10. Non-Normality and Testing that a Correlation Equals Zero

    ERIC Educational Resources Information Center

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)

  11. Power, Revisited

    ERIC Educational Resources Information Center

    Roscigno, Vincent J.

    2011-01-01

    Power is a core theoretical construct in the field with amazing utility across substantive areas, levels of analysis and methodologies. Yet, its use along with associated assumptions--assumptions surrounding constraint vs. action and specifically organizational structure and rationality--remain problematic. In this article, and following an…

  12. Pore Formation During Solidification of Aluminum: Reconciliation of Experimental Observations, Modeling Assumptions, and Classical Nucleation Theory

    NASA Astrophysics Data System (ADS)

    Yousefian, Pedram; Tiryakioğlu, Murat

    2018-02-01

    An in-depth discussion of pore formation is presented in this paper by first reinterpreting in situ observations reported in the literature as well as assumptions commonly made to model pore formation in aluminum castings. The physics of pore formation is reviewed through theoretical fracture pressure calculations based on classical nucleation theory for homogeneous and heterogeneous nucleation, with and without dissolved gas, i.e., hydrogen. Based on the fracture pressure for aluminum, critical pore size and the corresponding probability of vacancies clustering to form that size have been calculated using thermodynamic data reported in the literature. Calculations show that it is impossible for a pore to nucleate either homogeneously or heterogeneously in aluminum, even with dissolved hydrogen. The formation of pores in aluminum castings can only be explained by inflation of entrained surface oxide films (bifilms) under reduced pressure and/or with dissolved gas, which involves only growth, avoiding any nucleation problem. This mechanism is consistent with the reinterpretations of in situ observations as well as the assumptions made in the literature to model pore formation.

  13. Measuring the Sensitivity of Single-locus “Neutrality Tests” Using a Direct Perturbation Approach

    PubMed Central

    Garrigan, Daniel; Lewontin, Richard; Wakeley, John

    2010-01-01

    A large number of statistical tests have been proposed to detect natural selection based on a sample of variation at a single genetic locus. These tests measure the deviation of the allelic frequency distribution observed within populations from the distribution expected under a set of assumptions that includes both neutral evolution and equilibrium population demography. The present study considers a new way to assess the statistical properties of these tests of selection, by their behavior in response to direct perturbations of the steady-state allelic frequency distribution, unconstrained by any particular nonequilibrium demographic scenario. Results from Monte Carlo computer simulations indicate that most tests of selection are more sensitive to perturbations of the allele frequency distribution that increase the variance in allele frequencies than to perturbations that decrease the variance. Simulations also demonstrate that it requires, on average, 4N generations (N is the diploid effective population size) for tests of selection to relax to their theoretical, steady-state distributions following different perturbations of the allele frequency distribution to its extremes. This relatively long relaxation time highlights the fact that these tests are not robust to violations of the other assumptions of the null model besides neutrality. Lastly, genetic variation arising under an example of a regularly cycling demographic scenario is simulated. Tests of selection performed on this last set of simulated data confirm the confounding nature of these tests for the inference of natural selection, under a demographic scenario that likely holds for many species. The utility of using empirical, genomic distributions of test statistics, instead of the theoretical steady-state distribution, is discussed as an alternative for improving the statistical inference of natural selection. PMID:19744997

  14. Sampling Assumptions in Inductive Generalization

    ERIC Educational Resources Information Center

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  15. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  16. A Study of Poisson's Ratio in the Yield Region

    NASA Technical Reports Server (NTRS)

    Gerard, George; Wildhorn, Sorrel

    1952-01-01

    In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.

  17. Physical implications of the eclipsing binary pulsar

    NASA Technical Reports Server (NTRS)

    Wasserman, Ira; Cordes, James M.

    1988-01-01

    The observed characteristics of the millisecond pulsar PSR 1957+20, discovered in an eclipsing binary by Fruchter et al. (1988), are considered theoretically. Model equations for the stellar wind and optical emission are derived and used to estimate the effective temperature and optical luminosity associated with wind excitation; the energy levels required to generate such winds are then investigated. The color temperature of the pulsar-heated stellar surface, calculated under the assumption of adiabatic expansion, is 1000-10,000 K, in good agreement with the observational estimate of 5500 K.

  18. Thermal desorption of metals from tungsten single crystal surfaces

    NASA Technical Reports Server (NTRS)

    Bauer, E.; Bonczek, F.; Poppa, H.; Todd, G.

    1975-01-01

    After a short review of the experimental methods used to determine desorption energies and frequencies, the assumptions underlying the theoretical analysis of experimental data are discussed. Recent experimental results on the flash desorption of Cu, Ag, and Au from clean, well-characterized W (110) and (100) surfaces are presented and analyzed in detail with respect to the coverage dependence. The results obtained clearly reveal the limitations of previous analytical methods and of the experimental technique per se (such as structure and phase changes below and in the temperature region in which desorption occurs).

  19. Group consensus control for networked multi-agent systems with communication delays.

    PubMed

    An, Bao-Ran; Liu, Guo-Ping; Tan, Chong

    2018-05-01

    This paper investigates group consensus problems in networked multi-agent systems (NMAS) with communication delays. Based on a state prediction scheme, the group consensus control protocol is designed to actively compensate for the communication delay. In light of algebraic graph theory and matrix theory, necessary and/or sufficient conditions of group consensus with respect to a given admissible control set are obtained for NMAS with communication delays under mild assumptions. Finally, simulations are performed to demonstrate the effectiveness of the theoretical results. Copyright © 2018 ISA. All rights reserved.

  20. Fifty Years of Mountain Passes: A Perspective on Dan Janzen's Classic Article.

    PubMed

    Sheldon, Kimberly S; Huey, Raymond B; Kaspari, Michael; Sanders, Nathan J

    2018-05-01

    In 1967, Dan Janzen published "Why Mountain Passes Are Higher in the Tropics" in The American Naturalist. Janzen's seminal article has captured the attention of generations of biologists and continues to inspire theoretical and empirical work. The underlying assumptions and derived predictions are broadly synthetic and widely applicable. Consequently, Janzen's "seasonality hypothesis" has proven relevant to physiology, climate change, ecology, and evolution. To celebrate the fiftieth anniversary of this highly influential article, we highlight the past, present, and future of this work and include a unique historical perspective from Janzen himself.

  1. Asteroid differentiation - Pyroclastic volcanism to magma oceans

    NASA Technical Reports Server (NTRS)

    Taylor, G. J.; Keil, Klaus; Mccoy, Timothy; Haack, Henning; Scott, Edward R. D.

    1993-01-01

    A summary is presented of theoretical and speculative research on the physics of igneous processes involved in asteroid differentiation. Partial melting processes, melt migration, and their products are discussed and explosive volcanism is described. Evidence for the existence of asteroidal magma oceans is considered and processes which may have occurred in these oceans are examined. Synthesis and inferences of asteroid heat sources are discussed under the assumption that asteroids are heated mainly by internal processes and that the role of impact heating is small. Inferences of these results for earth-forming planetesimals are suggested.

  2. Enactments in Psychoanalysis: Therapeutic Benefits.

    PubMed

    Stern, Stanley

    The therapeutic benefits of enactments are addressed. Relevant literature reveals disparate conceptions about the nature and use of enactments. Clarification of the term is discussed. This analyst's theoretical and technical evolution is addressed; it is inextricably related to using enactments. How can it not be? A taxonomy of enactments is presented. The article considers that enactments may be fundamental in the evolution from orthodox to contemporary analytic technique. Assumptions underlying enactments are explored, as are guidelines for using enactments. Finally, the article posits that enactments have widened the scope of analysis and contributed to its vitality.

  3. An Information Theoretic Investigation Of Complex Adaptive Supply Networks With Organizational Topologies

    DTIC Science & Technology

    2016-12-22

    assumptions of behavior. This research proposes an information theoretic methodology to discover such complex network structures and dynamics while overcoming...the difficulties historically associated with their study. Indeed, this was the first application of an information theoretic methodology as a tool...

  4. Behavioural social choice: a status report.

    PubMed

    Regenwetter, Michel; Grofman, Bernard; Popova, Anna; Messner, William; Davis-Stober, Clintin P; Cavagnaro, Daniel R

    2009-03-27

    Behavioural social choice has been proposed as a social choice parallel to seminal developments in other decision sciences, such as behavioural decision theory, behavioural economics, behavioural finance and behavioural game theory. Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We highlight that important theoretical predictions in social choice theory change dramatically under even minute violations of standard assumptions. Empirical data violate those critical assumptions. We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also emphasize important insights for research on decision making by individuals. When researchers aggregate individual choice behaviour in laboratory experiments to report summary statistics, they are implicitly applying social choice rules. Thus, they should be aware of the potential for aggregation paradoxes. We hypothesize that such problems may substantially mar the conclusions of a number of (sometimes seminal) papers in behavioural decision research.

  5. Zipf's word frequency law in natural language: a critical review and future directions.

    PubMed

    Piantadosi, Steven T

    2014-10-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
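
    Zipf's law states that word frequency falls off approximately as an inverse power of frequency rank, f(r) ∝ r^(-a) with a ≈ 1. A minimal sketch of how the exponent is commonly estimated from a corpus (a plain log-log least-squares fit; the article discusses the pitfalls of such naive estimation and visualization):

        import collections
        import numpy as np

        def zipf_exponent(text):
            """Estimate the Zipf exponent by least squares on log rank vs log frequency."""
            counts = collections.Counter(text.lower().split())
            freqs = np.sort(np.array(list(counts.values()), dtype=float))[::-1]
            ranks = np.arange(1, len(freqs) + 1)
            slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
            return -slope   # frequency ~ rank ** (-exponent)

        sample = "the cat sat on the mat and the dog sat on the cat " * 50
        print("estimated exponent:", round(zipf_exponent(sample), 2))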

  6. Efficient estimation of the maximum metabolic productivity of batch systems.

    PubMed

    St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2017-01-01

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. The proposed method follows traditional assumptions of dynamic flux balance analysis: first, that internal metabolite fluxes are governed by a pseudo-steady state, and second, that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to calculate the complete Pareto surface of productivity as a function of yield. We apply this method to succinate production in two engineered microbial hosts, Escherichia coli and Actinobacillus succinogenes, and demonstrate that maximum productivities can be more than doubled under dynamic control regimes. The maximum theoretical yield is a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. We present a robust, efficient method to calculate the maximum theoretical productivity: a metric that will similarly help guide and evaluate the development of dynamic microbial bioconversions. Our results demonstrate that nearly optimal yields and productivities can be achieved with only two discrete flux stages, indicating that near-theoretical productivities might be achievable in practice.
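
    The role of discrete flux stages can be illustrated with a toy two-stage batch model (a deliberately simplified sketch, not the paper's dynamic flux balance / collocation formulation; all rates and yields below are arbitrary): the culture first routes substrate to biomass, then switches to product formation, and the switch time is scanned to maximise volumetric productivity.

        import numpy as np

        def batch_productivity(t_switch, t_end=24.0, dt=0.01,
                               mu=0.4, qp=0.6, y_xs=0.5, y_ps=0.9,
                               x0=0.05, s0=20.0):
            """Toy two-stage batch: growth until t_switch, then product formation."""
            x, s, p = x0, s0, 0.0
            for t in np.arange(0.0, t_end, dt):
                if s <= 0.0:
                    break                            # substrate exhausted
                if t < t_switch:                     # stage 1: all flux to biomass
                    dx, dp, ds = mu * x, 0.0, -mu * x / y_xs
                else:                                # stage 2: all flux to product
                    dx, dp, ds = 0.0, qp * x, -qp * x / y_ps
                x, s, p = x + dx * dt, s + ds * dt, p + dp * dt
            return p / t_end                         # volumetric productivity, g/L/h

        switch_times = np.linspace(0.0, 24.0, 49)
        best = max(switch_times, key=batch_productivity)
        print(f"best switch: {best:.1f} h, productivity: {batch_productivity(best):.3f} g/L/h")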

  7. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that a reentering object poses to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris fragments might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and on how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.
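
    Under that assumption (decay location uniformly distributed in argument of latitude on a circular final orbit), the implied distribution of impact latitude is strongly non-uniform, peaking near the orbital inclination. A Monte Carlo sketch of the implied latitude density, assuming simple spherical geometry, which could then be weighted by population along the ground track:

        import numpy as np

        rng = np.random.default_rng(4)
        incl = np.radians(51.6)                    # assumed inclination of the final orbit

        # Uniformly random argument of latitude u, as the assumption states.
        u = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
        lat = np.arcsin(np.sin(incl) * np.sin(u))  # geocentric impact latitude

        hist, edges = np.histogram(lat, bins=60, density=True)
        centres = 0.5 * (edges[:-1] + edges[1:])

        # Analytic density implied by the uniform-argument-of-latitude assumption.
        analytic = np.cos(centres) / (np.pi * np.sqrt(np.sin(incl) ** 2
                                                      - np.sin(centres) ** 2))
        print("max relative deviation from analytic density:",
              np.max(np.abs(hist - analytic) / analytic))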

  8. Towards improving the NASA standard soil moisture retrieval algorithm and product

    NASA Astrophysics Data System (ADS)

    Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.

    2013-12-01

    Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was made by the launch of the Advanced Microwave Scanning Radiometer on NASA's Earth Observing System Aqua satellite (AMSR-E). Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of the AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes that determine the reduced sensitivity of the NASA AMSR-E product and outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons that cause the above-mentioned features of the NASA AMSR-E product and the differences between the alternative algorithms requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parametrization approaches adopted by each algorithm to reduce the dimensionality of the unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement. Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of, and inter-comparisons between, several well-established global retrieval techniques. A detailed discussion focused on the theoretical basis of each approach and on the algorithms' sensitivity to assumptions and parametrization choices will be presented. USDA is an equal opportunity provider and employer.
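
    Most passive soil moisture retrieval algorithms of this kind are built on the zeroth-order 'tau-omega' radiative transfer forward model and differ mainly in how its parameters are fixed or retrieved. A minimal sketch of that shared forward model (parameter values here are arbitrary illustrations, not algorithm settings):

        import numpy as np

        def tau_omega_tb(soil_temp_k, veg_temp_k, soil_reflectivity, tau, omega, inc_deg):
            """Zeroth-order tau-omega brightness temperature over a vegetated soil surface."""
            gamma = np.exp(-tau / np.cos(np.radians(inc_deg)))   # canopy transmissivity
            tb_soil = (1.0 - soil_reflectivity) * soil_temp_k * gamma
            tb_veg = ((1.0 - omega) * (1.0 - gamma) * veg_temp_k
                      * (1.0 + soil_reflectivity * gamma))
            return tb_soil + tb_veg

        # Illustration: wetter soil -> higher reflectivity -> lower brightness temperature.
        for refl in (0.1, 0.2, 0.3):
            print(refl, round(float(tau_omega_tb(295.0, 295.0, refl,
                                                 tau=0.12, omega=0.05, inc_deg=55.0)), 1))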

  9. Analysis of Drop Oscillations Excited by an Electrical Point Force in AC EWOD

    NASA Astrophysics Data System (ADS)

    Oh, Jung Min; Ko, Sung Hee; Kang, Kwan Hyoung

    2008-03-01

    Recently, a few researchers have reported the oscillation of a sessile drop in AC EWOD (electrowetting on dielectrics) and some of its consequences. The drop oscillation problem in AC EWOD is associated with various applications based on electrowetting, such as lab-on-a-chip (LOC) devices, liquid lenses, and electronic displays. However, no theoretical analysis of the problem has been attempted yet. In the present paper, we propose a theoretical model to analyze the oscillation by applying the conventional method of drop oscillation analysis. The domain perturbation method is used to derive the shape mode equations under the assumptions of weak viscous flow and small deformation. The Maxwell stress is exerted on the three-phase contact line of the droplet like a point force. The force is regarded as a delta function and is decomposed into the driving forces of each shape mode. The theoretical results on the shape and frequency responses are compared with experiments, which show qualitative agreement.

  10. Doppler broadening of neutron-induced resonances using ab initio phonon spectrum

    NASA Astrophysics Data System (ADS)

    Noguere, G.; Maldonado, P.; De Saint Jean, C.

    2018-05-01

    Neutron resonances observed in neutron cross-section data can only be compared with their theoretical analogues after a correct broadening of the resonance widths. This broadening is usually carried out by two different theoretical models, namely the Free Gas Model and the Crystal Lattice Model, which, however, are only applicable under certain assumptions. Here, we use neutron transmission experiments on UO2 samples at T = 23.7 K and T = 293.7 K to investigate the limitations of these models when an ab initio phonon spectrum is introduced in the calculations. Comparisons of the experimental and theoretical transmissions highlight the underestimation of the energy transferred at low temperature and its impact on the accurate determination of the radiation widths Γγ of the 238U resonances λ. The observed deficiency of the model represents experimental evidence that the Debye-Waller factor is not correctly calculated at low temperature near the Néel temperature (T_N = 30.8 K).

  11. Decision-theoretic saliency: computational principles, biological plausibility, and implications for neurophysiology and psychophysics.

    PubMed

    Gao, Dashan; Vasconcelos, Nuno

    2009-01-01

    A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum-probability-of-error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at that location and a null hypothesis, which, for bottom-up saliency, is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that, under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
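
    The following toy sketch (binning, priors, and names are ours, not the authors') illustrates the information-theoretic notion of center-surround discrimination underlying this formulation: saliency is scored as the mutual information between a feature response and the center/surround label.

```python
import numpy as np

def discriminant_saliency(center, surround, bins=32):
    """Mutual information (bits) between a 1-D feature response and the
    binary center/surround label, estimated from histograms."""
    lo = min(center.min(), surround.min())
    hi = max(center.max(), surround.max())
    edges = np.linspace(lo, hi, bins + 1)

    # Class-conditional feature densities p(x|center) and p(x|surround)
    p_c, _ = np.histogram(center, bins=edges)
    p_s, _ = np.histogram(surround, bins=edges)
    p_c = (p_c + 1e-12) / (p_c + 1e-12).sum()
    p_s = (p_s + 1e-12) / (p_s + 1e-12).sum()

    # Class priors proportional to the number of samples in each window
    pi_c = len(center) / (len(center) + len(surround))
    pi_s = 1.0 - pi_c
    p_x = pi_c * p_c + pi_s * p_s                 # marginal p(x)

    # I(X; C) = sum over classes of prior * KL(p(x|class) || p(x))
    kl_c = np.sum(p_c * np.log2(p_c / p_x))
    kl_s = np.sum(p_s * np.log2(p_s / p_x))
    return pi_c * kl_c + pi_s * kl_s

# Toy usage: a feature that responds more strongly in the center than the surround
rng = np.random.default_rng(0)
print(discriminant_saliency(rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 5000)))
```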

  12. School, Cultural Diversity, Multiculturalism, and Contact

    ERIC Educational Resources Information Center

    Pagani, Camilla; Robustelli, Francesco; Martinelli, Cristina

    2011-01-01

    The basic assumption of this paper is that school's potential to improve cross-cultural relations, as well as interpersonal relations in general, is enormous. This assumption is supported by a number of theoretical considerations and by the analysis of data we obtained from a study we conducted on the attitudes toward diversity and…

  13. Problematising Mathematics Education

    ERIC Educational Resources Information Center

    Begg, Andy

    2015-01-01

    We assume many things when considering our practice, but our assumptions limit what we do. In this theoretical/philosophical paper I consider some assumptions that relate to our work. My purpose is to stimulate a debate, a search for alternatives, and to help us improve mathematics education by influencing our future curriculum documents and…

  14. Testing a pollen-parent fecundity distribution model on seed-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...

  15. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    ERIC Educational Resources Information Center

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  16. Nuclear Reactions in Micro/Nano-Scale Metal Particles

    NASA Astrophysics Data System (ADS)

    Kim, Y. E.

    2013-03-01

    Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory rests on a single basic assumption capable of explaining the observed LENR phenomena: deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative, predictive physical theory. Experimental tests of the basic assumption and of the theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. A generalized theory of BECNF is used to carry out theoretical analyses of recently reported experimental results for the hydrogen-nickel system.

  17. Seasonal carbon dioxide exchange between the regolith and atmosphere of Mars - Experimental and theoretical studies

    NASA Technical Reports Server (NTRS)

    Fanale, F. P.; Salvail, J. R.; Banerdt, W. B.; Saunders, R. S.; Johansen, L. A.

    1982-01-01

    CO2 penetration rate measurements have been made through basalt-clay soils under conditions simulating the penetration of the cap-induced seasonal CO2 pressure wave through the topmost regolith of Mars, and the results suggest that existing theoretical models for the diffusion of a gas through a porous and highly adsorbing medium may be used to assess the importance of the Martian seasonal regolith-atmosphere CO2 exchange. An estimate of the maximum effect of thermally driven exchange between the topmost seasonally (thermally) affected regolith and the atmosphere shows that, while this exchange may be of greater importance than the isothermal exchange, it would be recognizable only if the pressure wave from CO2 exchanged at high latitudes did not propagate through the atmosphere faster than the rate at which the exchange itself occurred, which is an unreasonable assumption.

  18. Validation of the underlying assumptions of the quality-adjusted life-years outcome: results from the ECHOUTCOME European project.

    PubMed

    Beresniak, Ariel; Medina-Lara, Antonieta; Auray, Jean Paul; De Wever, Alain; Praet, Jean-Claude; Tarricone, Rosanna; Torbica, Aleksandra; Dupont, Danielle; Lamure, Michel; Duru, Gerard

    2015-01-01

    Quality-adjusted life-years (QALYs) have been used since the 1980s as a standard health outcome measure for conducting cost-utility analyses, which are often inadequately labeled as 'cost-effectiveness analyses'. This synthetic outcome, which combines the quantity of life lived with its quality expressed as a preference score, is currently recommended as a reference case by some health technology assessment (HTA) agencies. While critics of the QALY approach have expressed concerns about equity and ethical issues, surprisingly few have tested the basic methodological assumptions supporting the QALY equation so as to establish its scientific validity. The main objective of the ECHOUTCOME European project was to test the validity of the underlying assumptions of the QALY outcome and its relevance in health decision making. An experiment was conducted with 1,361 subjects from Belgium, France, Italy, and the UK. The subjects were asked to express their preferences regarding various hypothetical health states derived from combining different health states with time durations, in order to compare the observed utility values of the (health state, time) pairs with the utility values calculated using the QALY formula. Observed and calculated utility values of the (health state, time) pairs were significantly different, confirming that the preferences expressed by the respondents were not consistent with the QALY theoretical assumptions. This European study contributes to establishing that the QALY multiplicative model is an invalid measure. This explains why cost/QALY estimates may vary greatly, leading to inconsistent recommendations about access to innovative medicines and health technologies. HTA agencies should consider other, more robust methodological approaches to guide reimbursement decisions.
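
    For reference, the multiplicative model being tested is usually written as follows (generic notation, not the project's exact formulation):

    \[
    U(h, t)\;=\;u(h)\times t,
    \]

    where u(h) is a duration-independent preference weight for health state h and t the time spent in it. The experiment compares utilities elicited directly for (h, t) pairs against the values this product predicts; a systematic mismatch indicates that the linearity-in-time and duration-independence assumptions do not hold.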

  19. Signal Detection with Criterion Noise: Applications to Recognition Memory

    ERIC Educational Resources Information Center

    Benjamin, Aaron S.; Diaz, Michael; Wee, Serena

    2009-01-01

    A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of…

  20. Instrumental variables as bias amplifiers with general outcome and confounding.

    PubMed

    Ding, P; VanderWeele, T J; Robins, J M

    2017-06-01

    Drawing causal inference with observational studies is the central pillar of many disciplines. One sufficient condition for identifying the causal effect is that the treatment-outcome relationship is unconfounded conditional on the observed covariates. It is often believed that the more covariates we condition on, the more plausible this unconfoundedness assumption is. This belief has had a huge impact on practical causal inference, suggesting that we should adjust for all pretreatment covariates. However, when there is unmeasured confounding between the treatment and outcome, estimators adjusting for some pretreatment covariate might have greater bias than estimators without adjusting for this covariate. This kind of covariate is called a bias amplifier, and includes instrumental variables that are independent of the confounder, and affect the outcome only through the treatment. Previously, theoretical results for this phenomenon have been established only for linear models. We fill in this gap in the literature by providing a general theory, showing that this phenomenon happens under a wide class of models satisfying certain monotonicity assumptions. We further show that when the treatment follows an additive or multiplicative model conditional on the instrumental variable and the confounder, these monotonicity assumptions can be interpreted as the signs of the arrows of the causal diagrams.
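
    A minimal linear-model simulation (all coefficients are hypothetical and chosen only for illustration) of the phenomenon described above: with an unmeasured confounder U present, adjusting for an instrument Z increases rather than decreases the bias of the estimated treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 200_000, 1.0                 # sample size and true causal effect of T on Y

Z = rng.normal(size=n)                # instrument: affects Y only through T
U = rng.normal(size=n)                # unmeasured confounder of T and Y
T = 2.0 * Z + U + rng.normal(size=n)  # treatment
Y = tau * T + U + rng.normal(size=n)  # outcome

def ols(y, *covariates):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack((np.ones(len(y)),) + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0]

bias_unadjusted = ols(Y, T)[1] - tau       # coefficient on T without adjustment
bias_adjusted = ols(Y, T, Z)[1] - tau      # coefficient on T after adjusting for Z

print(f"bias without adjusting for Z: {bias_unadjusted:+.3f}")   # ~ +0.17
print(f"bias after adjusting for Z:   {bias_adjusted:+.3f}")     # ~ +0.50
# Conditioning on Z removes the unconfounded part of T's variance,
# so the residual confounding by U is amplified.
```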

  1. A theoretical approach to artificial intelligence systems in medicine.

    PubMed

    Spyropoulos, B; Papagounos, G

    1995-10-01

    The various theoretical models of disease, the nosology accepted by the medical community, and the prevalent logic of diagnosis determine both the medical approach and the development of the relevant technology, including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, implicitly entail theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models (causal, probabilistic, case-based, etc.) are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism, and the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision-supporting "instruments" rather than decision-making "devices". This supporting role and, especially, the important function which A.I. systems should have in the structure, methods, and content of medical education underscore the need for further research into the theoretical aspects and the actual development of such systems.

  2. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints on the success of an aircraft mission such as the landing of the proposed space shuttle. A method is therefore required to determine the orientation, or azimuth, of the proposed runway that will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method uses wind statistics computed after the bivariate normal elliptical distribution is fitted to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed, which seems reasonable. Studies are currently in progress to test wind components for bivariate normality at various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
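
    A sketch of the theoretical procedure in code (the wind statistics, correlation, and crosswind limit below are illustrative values, not those of the report): if the wind components are bivariate normal, the crosswind for a runway at azimuth ψ is univariate normal, so its exceedance probability has a closed form that can be minimized over ψ.

```python
import numpy as np
from scipy.stats import norm

# Illustrative bivariate-normal statistics for the east (u) and north (v) wind components, m/s
mu_u, mu_v = 2.0, -1.0
sd_u, sd_v = 4.0, 5.0
rho = 0.3
c_crit = 7.7                         # illustrative critical crosswind (about 15 kt)

def p_crosswind_exceeded(psi_deg):
    """P(|crosswind| > c_crit) for a runway with azimuth psi, in degrees from north."""
    psi = np.radians(psi_deg)
    # Crosswind = component of (u, v) perpendicular to the runway axis
    mu_c = mu_u * np.cos(psi) - mu_v * np.sin(psi)
    var_c = (sd_u * np.cos(psi)) ** 2 + (sd_v * np.sin(psi)) ** 2 \
        - 2.0 * rho * sd_u * sd_v * np.sin(psi) * np.cos(psi)
    sd_c = np.sqrt(var_c)
    return 1.0 - (norm.cdf((c_crit - mu_c) / sd_c) - norm.cdf((-c_crit - mu_c) / sd_c))

# A runway serves both psi and psi + 180 deg, so sweeping 0-180 deg is sufficient
azimuths = np.arange(0.0, 180.0, 1.0)
probs = [p_crosswind_exceeded(a) for a in azimuths]
best = azimuths[int(np.argmin(probs))]
print(f"optimum runway azimuth ~ {best:.0f} deg, exceedance probability {min(probs):.4f}")
```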

  3. ROLE OF PRESSURE IN SMECTITE DEHYDRATION - EFFECTS ON GEOPRESSURE AND SMECTITE-TO-ILLITE TRANSFORMATION.

    USGS Publications Warehouse

    Colten-Bradley, Virginia

    1987-01-01

    Evaluation of the effects of pressure on the temperature of interlayer water loss (dehydration) by smectites under diagenetic conditions indicates that smectites are stable as hydrated phases in the deep subsurface. Hydraulic and differential pressure conditions affect dehydration differently. The temperature of dehydration increases with pore fluid pressure and interlayer water density, whereas the temperature of dehydration under differential-pressure conditions is inversely related to pressure and interlayer water density. The model presented assumes the effects of pore fluid composition and 2:1 layer reactivity to be negligible. Agreement between theoretical and experimental results validates this assumption. Additional aspects of the subject are discussed.

  4. Teachers' Perspectives on Principal Mistreatment

    ERIC Educational Resources Information Center

    Blase, Joseph; Blase, Jo

    2006-01-01

    Although there is some important scholarly work on the problem of workplace mistreatment/abuse, theoretical or empirical work on abusive school principals is nonexistent. Symbolic interactionism was the theoretical structure for the present study. This perspective on social research is founded on three primary assumptions: (1) individuals act…

  5. Variation is the universal: making cultural evolution work in developmental psychology.

    PubMed

    Kline, Michelle Ann; Shamsudheen, Rubeena; Broesch, Tanya

    2018-04-05

    Culture is a human universal, yet it is a source of variation in human psychology, behaviour and development. Developmental researchers are now expanding the geographical scope of research to include populations beyond relatively wealthy Western communities. However, culture and context still far too often play a secondary role in the theoretical grounding of developmental psychology research. In this paper, we highlight four false assumptions that are common in psychology and that detract from the quality of both standard and cross-cultural research in development. These assumptions are: (i) the universality assumption, that empirical uniformity is evidence for universality, while any variation is evidence for culturally derived variation; (ii) the Western centrality assumption, that Western populations represent a normal and/or healthy standard against which development in all societies can be compared; (iii) the deficit assumption, that population-level differences in developmental timing or outcomes are necessarily due to something lacking among non-Western populations; and (iv) the equivalency assumption, that using identical research methods will necessarily produce equivalent and externally valid data across disparate cultural contexts. For each assumption, we draw on cultural evolutionary theory to critique and replace the assumption with a theoretically grounded approach to culture in development. We support these suggestions with positive examples drawn from research in development. Finally, we conclude with a call for researchers to take reasonable steps towards more fully incorporating culture and context into studies of development by expanding their participant pools in strategic ways. This will lead to a more inclusive and therefore more accurate description of human development. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).

  6. Riddles of masculinity: gender, bisexuality, and thirdness.

    PubMed

    Fogel, Gerald I

    2006-01-01

    Clinical examples are used to illuminate several riddles of masculinity (ambiguities, enigmas, and paradoxes in relation to gender, bisexuality, and thirdness) frequently seen in male patients. Basic psychoanalytic assumptions about male psychology are examined in the light of advances in female psychology, using ideas from feminist and gender studies as well as important and now widely accepted trends in contemporary psychoanalytic theory. By reexamining basic assumptions about heterosexual men, as has been done with ideas concerning women and homosexual men, complexity and nuance come to the fore to aid the clinician in treating the complex characterological pictures seen in men today. In a context of rapid historical and theoretical change, the use of persistent gender stereotypes and unnecessarily limiting theoretical formulations, though often unintended, may mask subtle countertransference and theoretical blind spots, and limit optimal clinical effectiveness.

  7. Assessing the validity of discourse analysis: transdisciplinary convergence

    NASA Astrophysics Data System (ADS)

    Jaipal-Jamani, Kamini

    2014-12-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.

  8. On the Number of Neurons and Time Scale of Integration Underlying the Formation of Percepts in the Brain

    PubMed Central

    Wohrer, Adrien; Machens, Christian K.

    2015-01-01

    All of our perceptual experiences arise from the activity of neural populations. Here we study the formation of such percepts under the assumption that they emerge from a linear readout, i.e., a weighted sum of the neurons’ firing rates. We show that this assumption constrains the trial-to-trial covariance structure of neural activities and animal behavior. The predicted covariance structure depends on the readout parameters, and in particular on the temporal integration window w and typical number of neurons K used in the formation of the percept. Using these predictions, we show how to infer the readout parameters from joint measurements of a subject’s behavior and neural activities. We consider three such scenarios: (1) recordings from the complete neural population, (2) recordings of neuronal sub-ensembles whose size exceeds K, and (3) recordings of neuronal sub-ensembles that are smaller than K. Using theoretical arguments and artificially generated data, we show that the first two scenarios allow us to recover the typical spatial and temporal scales of the readout. In the third scenario, we show that the readout parameters can only be recovered by making additional assumptions about the structure of the full population activity. Our work provides the first thorough interpretation of (feed-forward) percept formation from a population of sensory neurons. We discuss applications to experimental recordings in classic sensory decision-making tasks, which will hopefully provide new insights into the nature of perceptual integration. PMID:25793393
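
    A toy simulation of the linear-readout premise (population size, readout size, and noise levels are arbitrary illustrative choices, not the authors' values): neurons that actually enter the readout show an elevated trial-to-trial covariance with the percept, which is the kind of constraint the inference procedure described above exploits.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials, K = 200, 5000, 20          # K = neurons actually read out

# Trial-to-trial rate fluctuations: weak shared variability plus private noise
shared = rng.normal(size=(n_trials, 1))
rates = 0.3 * shared + rng.normal(size=(n_trials, n_neurons))

# Percept = weighted sum of the first K neurons' rates (equal weights here)
weights = np.zeros(n_neurons)
weights[:K] = 1.0 / K
percept = rates @ weights

# Trial-to-trial covariance of each neuron with the percept
cov = np.array([np.cov(rates[:, i], percept)[0, 1] for i in range(n_neurons)])
print("mean covariance, readout neurons:    ", round(cov[:K].mean(), 3))
print("mean covariance, non-readout neurons:", round(cov[K:].mean(), 3))
```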

  9. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than for the IAC model.
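
    One way to write the extra relation that closes the FAC model, sketched here in generic notation and possibly differing in detail from the program's exact formulation, is a momentum balance between the injector face (where the velocity is taken as negligible) and the end of the cylindrical chamber:

    \[
    p_{\mathrm{inj}}\;=\;p_{c}\;+\;\rho_{c}\,u_{c}^{2},
    \]

    with mass and total enthalpy conserved along the chamber; the flow is then adiabatic but not isentropic, which produces the stagnation pressure drop and the slightly lower performance noted above.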

  10. Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks

    PubMed Central

    Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi

    2009-01-01

    In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) which use carrier sense multiple access (CSMA) with collision avoidance (CA) as the medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol as a finite state machine and takes into account the network adjacency matrix, which depends on the transmit power distribution and determines the network connectivity. It is then shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy that guarantees minimal delay requires the maximization of network connectivity, which can equivalently be interpreted as the maximization of the number of non-zero entries of the adjacency matrix. The obtained theoretical results are confirmed by simulations for unslotted Zigbee WSNs. PMID:22346705
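
    As a small illustration of how a transmit-power assignment induces the adjacency matrix mentioned above (the propagation model, receiver sensitivity, and other numbers are ours, purely for illustration, not the paper's):

```python
import numpy as np

def adjacency_from_powers(positions, tx_power_dbm, rx_sensitivity_dbm=-90.0,
                          path_loss_exponent=3.0, pl_ref_db=40.0):
    """Directed adjacency matrix induced by per-node transmit powers under a
    simple log-distance path-loss model: A[i, j] = 1 if node j can hear node i."""
    n = len(positions)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = max(np.linalg.norm(positions[i] - positions[j]), 1.0)
            path_loss_db = pl_ref_db + 10.0 * path_loss_exponent * np.log10(d)
            A[i, j] = int(tx_power_dbm[i] - path_loss_db >= rx_sensitivity_dbm)
    return A

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 100.0, size=(10, 2))                # node positions in a 100 m square
A = adjacency_from_powers(pos, tx_power_dbm=np.zeros(10))  # 0 dBm at every node
print("non-zero entries (directed links):", A.sum())
```

    Under the result quoted above, a power allocation that spends a fixed budget so as to maximize this count of non-zero entries is the one that minimizes delay at low traffic load.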

  11. Implementation of Improved Transverse Shear Calculations and Higher Order Laminate Theory Into Strain Rate Dependent Analyses of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Zhu, Lin-Fa; Kim, Soo; Chattopadhyay, Aditi; Goldberg, Robert K.

    2004-01-01

    A numerical procedure has been developed to investigate the nonlinear and strain rate dependent deformation response of polymer matrix composite laminated plates under high strain rate impact loadings. A recently developed strength-of-materials-based micromechanics model, incorporating a set of nonlinear, strain rate dependent constitutive equations for the polymer matrix, is extended to account for transverse shear effects during impact. Four different assumptions of transverse shear deformation are investigated in order to improve the developed strain rate dependent micromechanics model. The validity of these assumptions is investigated using numerical and theoretical approaches. A method to determine the through-the-thickness strain and transverse Poisson's ratio of the composite is developed. The revised micromechanics model is then implemented into a higher order laminated plate theory, which is modified to include the effects of inelastic strains. Parametric studies are conducted to investigate the mechanical response of composite plates under high strain rate loadings. Results show that the transverse shear stresses cannot be neglected in the impact problem. A significant level of strain rate dependency and material nonlinearity is found in the deformation response of representative composite specimens.

  12. Effects of erectable glossal hairs on a honeybee's nectar-drinking strategy

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Wu, Jianing; Yan, Shaoze

    2014-06-01

    With the use of a scanning electron microscope, we observe specific microstructures of the mouthparts of the Italian bee (Apis mellifera ligustica), especially the distribution and dimensions of hairs on its glossa. Taking into account the erection of glossal hairs for trapping nectar, we modify the viscous dipping model used to analyze the drinking strategy of a honeybee. Theoretical estimates of volume intake rates for sucrose solutions of different concentrations agree with experimental data, which indicates that erectable hairs can significantly increase the ability of a bee to acquire nectar efficiently. The comparison with experimental results also indicates that a honeybee may continuously augment its pumping power, rather than keep it constant, to drink nectar with sharply increasing viscosity. Under the modified assumption of increasing working power, we introduce the rate at which working power increases with viscosity and discuss the nature-preferred nectar concentration of 35% by theoretically calculating the optimal concentration that maximizes the energetic intake rate under varying increase rates. Finally, the ability of the mouthparts of the honeybee to transfer viscous nectar may inspire concepts for microfluidic transport across a wide range of viscosities.

  13. Program Evaluation Theory and Practice: A Comprehensive Guide

    ERIC Educational Resources Information Center

    Mertens, Donna M.; Wilson, Amy T.

    2012-01-01

    This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for…

  14. Radiologic technology educators and andragogy.

    PubMed

    Galbraith, M W; Simon-Galbraith, J A

    1984-01-01

    Radiologic technology educators are in constant contact with adult learners. However, the theoretical framework that radiologic educators use to guide their instruction may not be appropriate for adults. This article examines the assumptions of standard instructional theory and of the most modern approach to adult education, andragogy. It also shows how these assumptions affect the adult learner in a radiologic education setting.

  15. A Computational Framework for Analyzing Stochasticity in Gene Expression

    PubMed Central

    Sherman, Marc S.; Cohen, Barak A.

    2014-01-01

    Stochastic fluctuations in gene expression give rise to distributions of protein levels across cell populations. Despite a mounting number of theoretical models explaining stochasticity in protein expression, we lack a robust, efficient, assumption-free approach for inferring the molecular mechanisms that underlie the shape of protein distributions. Here we propose a method for inferring sets of biochemical rate constants that govern chromatin modification, transcription, translation, and RNA and protein degradation from stochasticity in protein expression. We asked whether the rates of these underlying processes can be estimated accurately from protein expression distributions, in the absence of any limiting assumptions. To do this, we (1) derived analytical solutions for the first four moments of the protein distribution, (2) found that these four moments completely capture the shape of protein distributions, and (3) developed an efficient algorithm for inferring gene expression rate constants from the moments of protein distributions. Using this algorithm we find that most protein distributions are consistent with a large number of different biochemical rate constant sets. Despite this degeneracy, the solution space of rate constants almost always informs on underlying mechanism. For example, we distinguish between regimes where transcriptional bursting occurs from regimes reflecting constitutive transcript production. Our method agrees with the current standard approach, and in the restrictive regime where the standard method operates, also identifies rate constants not previously obtainable. Even without making any assumptions we obtain estimates of individual biochemical rate constants, or meaningful ratios of rate constants, in 91% of tested cases. In some cases our method identified all of the underlying rate constants. The framework developed here will be a powerful tool for deducing the contributions of particular molecular mechanisms to specific patterns of gene expression. PMID:24811315
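
    The inference pipeline described above starts from the first four moments of the measured protein distribution; a minimal sketch of that first step only (not of the authors' full moment-matching algorithm):

```python
import numpy as np
from scipy import stats

def protein_distribution_moments(levels):
    """Mean, variance, skewness, and excess kurtosis of single-cell protein levels."""
    levels = np.asarray(levels, dtype=float)
    return {
        "mean": levels.mean(),
        "variance": levels.var(ddof=1),
        "skewness": stats.skew(levels, bias=False),
        "kurtosis": stats.kurtosis(levels, bias=False),   # excess kurtosis
    }

# Toy data: a long-tailed (gamma-like) protein-level distribution across cells
rng = np.random.default_rng(4)
print(protein_distribution_moments(rng.gamma(shape=2.0, scale=50.0, size=10_000)))
```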

  16. A theoretical perspective on road safety communication campaigns.

    PubMed

    Elvik, Rune

    2016-12-01

    This paper proposes a theoretical perspective on road safety communication campaigns, which may help in identifying the conditions under which such campaigns can be effective. The paper proposes that, from a theoretical point of view, it is reasonable to assume that road user behaviour is, by and large, subjectively rational. This means that road users are assumed to behave the way they think is best. If this assumption is accepted, the best theoretical prediction is that road safety campaigns consisting of persuasive messages only will have no effect on road user behaviour and accordingly no effect on accidents. This theoretical prediction is not supported by meta-analyses of studies that have evaluated the effects of road safety communication campaigns. These analyses conclude that, on the average, such campaigns are associated with an accident reduction. The paper discusses whether this finding can be explained theoretically. The discussion relies on the distinction made by many modern theorists between bounded and perfect rationality. Road user behaviour is characterised by bounded rationality. Hence, if road users can gain insight into the bounds of their rationality, so that they see advantages to themselves of changing behaviour, they are likely to do so. It is, however, largely unknown whether such a mechanism explains why some road safety communication campaigns have been found to be more effective than others. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Single Cell Proteomics in Biomedicine: High-dimensional Data Acquisition, Visualization and Analysis

    PubMed Central

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-01-01

    New insights into cellular heterogeneity in the last decade have spurred the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools, with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretic approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. PMID:28128880

  18. Theory and interpretation in qualitative studies from general practice: Why and how?

    PubMed

    Malterud, Kirsti

    2016-03-01

    In this article, I want to promote theoretical awareness and commitment among qualitative researchers in general practice and suggest adequate and feasible theoretical approaches. I discuss different theoretical aspects of qualitative research and present the basic foundations of the interpretative paradigm. Associations between paradigms, philosophies, methodologies and methods are examined and different strategies for theoretical commitment presented. Finally, I discuss the impact of theory for interpretation and the development of general practice knowledge. A scientific theory is a consistent and soundly based set of assumptions about a specific aspect of the world, predicting or explaining a phenomenon. Qualitative research is situated in an interpretative paradigm where notions about particular human experiences in context are recognized from different subject positions. Basic theoretical features from the philosophy of science explain why and how this is different from positivism. Reflexivity, including theoretical awareness and consistency, demonstrates interpretative assumptions, accounting for situated knowledge. Different types of theoretical commitment in qualitative analysis are presented, emphasizing substantive theories to sharpen the interpretative focus. Such approaches are clearly within reach for a general practice researcher contributing to clinical practice by doing more than summarizing what the participants talked about, without trying to become a philosopher. Qualitative studies from general practice deserve stronger theoretical awareness and commitment than what is currently established. Persistent attention to and respect for the distinctive domain of knowledge and practice where the research deliveries are targeted is necessary to choose adequate theoretical endeavours. © 2015 the Nordic Societies of Public Health.

  19. What proportion of prescription items dispensed in community pharmacies are eligible for the New Medicine Service?

    PubMed

    Wells, Katharine M; Boyd, Matthew J; Thornley, Tracey; Boardman, Helen F

    2014-03-07

    The payment structure for the New Medicine Service (NMS) in England is based on the assumption that 0.5% of prescription items dispensed in community pharmacies are eligible for the service. This assumption is based on a theoretical calculation. This study aimed to determine the actual proportion of prescription items eligible for the NMS dispensed in community pharmacies in order to compare it with the theoretical assumption. The study also aimed to investigate whether the proportion of prescription items eligible for the NMS is affected by pharmacies' proximity to GP practices. The study collected data from eight pharmacies in Nottingham belonging to the same large chain of pharmacies. Pharmacies were grouped by distance from the nearest GP practice and sampled to reflect the distribution by distance of all pharmacies in Nottingham. Data on one thousand consecutive prescription items were collected from each pharmacy and the number of NMS-eligible items recorded. All NHS prescriptions were included in the sample. Data were analysed and proportions calculated, with 95% confidence intervals used to compare the study results against the theoretical figure of 0.5% of prescription items being eligible for the NMS. A total of 8005 prescription items were collected (a minimum of 1000 items per pharmacy), of which 17 items were eligible to receive the service. The study found that 0.25% (95% confidence interval: 0.14% to 0.36%) of prescription items were eligible for the NMS, which differs significantly from the theoretical assumption of 0.5%. The opportunity rate for the service was lower, 0.21% (95% confidence interval: 0.10% to 0.32%) of items, as some items eligible for the NMS did not translate into opportunities to offer the service. Of all the prescription items collected in the pharmacies, 28% were collected by patient representatives. The results of this study show that the proportion of items eligible for the NMS dispensed in community pharmacies is lower than the Department of Health assumption of 0.5%. This study did not find a significant difference in the rate of NMS opportunities between pharmacies located close to GP practices and those further away.
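
    The comparison against the 0.5% assumption rests on a proportion and its 95% confidence interval; a minimal sketch, assuming a normal-approximation (Wald) interval, since the paper does not state which interval formula was used:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion p observed over n items."""
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p - half_width, p + half_width

lo, hi = wald_ci(p=0.0025, n=8005)   # reported eligibility proportion and sample size
print(f"0.25% eligible, 95% CI {lo:.2%} to {hi:.2%}; assumption under test: 0.50%")
# The interval excludes 0.5%, which is why the difference is reported as significant.
```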

  20. Analytical Implications of Using Practice Theory in Workplace Information Literacy Research

    ERIC Educational Resources Information Center

    Moring, Camilla; Lloyd, Annemaree

    2013-01-01

    Introduction: This paper considers practice theory and the analytical implications of using this theoretical approach in information literacy research. More precisely the aim of the paper is to discuss the translation of practice theoretical assumptions into strategies that frame the analytical focus and interest when researching workplace…

  1. Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research

    ERIC Educational Resources Information Center

    Ramlo, Sue

    2016-01-01

    This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…

  2. NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.

    ERIC Educational Resources Information Center

    Zhou, Lina; Zhang, Dongsong

    2003-01-01

    Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

  3. [Assumption of medical risks and the problem of medical liability in ancient Roman law].

    PubMed

    Váradi, Agnes

    2008-11-02

    The claim of an individual to assure his health and life, and to assume and be compensated for the damage from diseases and accidents, had already appeared in the system of ancient Roman law in the form of many singular legal institutions. In the absence of a unified archetype of regulation, we have to analyse the damages caused to the health or bodily integrity of different groups of persons: we have to mention the legal treatment of the diseases or injuries suffered by slaves, by people under manus or patria potestas, and by free Roman citizens. The fragments from the Digest of Justinian do not only demonstrate concrete legal problems; they can also serve as a starting point for further theoretical analyses. For example: if death is the consequence of a medical failure, does the doctor have any kind of liability? Was after-care part of the healing process according to Roman law? In examining these questions, we must also consider the complex liability system of Roman law, the compensation of damages caused in a contractual or delictual context, and the lex Aquilia. Although these conclusions have no direct relation to the present legal regulation of risk assumption, analysing the examples of Roman law can be useful for developing our view of certain theoretical problems, such as the modern liability concept in medicine.

  4. Resource-driven encounters among consumers and implications for the spread of infectious disease

    PubMed Central

    Flynn, Jason M.

    2017-01-01

    Animals share a variety of common resources, which can be a major driver of conspecific encounter rates. In this work, we implement a spatially explicit mathematical model for resource visitation behaviour in order to examine how changes in resource availability can influence the rate of encounters among consumers. Using simulations and asymptotic analysis, we demonstrate that, under a reasonable set of assumptions, the relationship between resource availability and consumer conspecific encounters is not monotonic. We characterize how the maximum encounter rate and associated critical resource density depend on system parameters like consumer density and the maximum distance from which consumers can detect and respond to resources. The assumptions underlying our theoretical model and analysis are motivated by observations of large aggregations of black-backed jackals at carcasses generated by seasonal outbreaks of anthrax among herbivores in Etosha National Park, Namibia. As non-obligate scavengers, black-backed jackals use carcasses as a supplemental food resource when they are available. While jackals do not appear to acquire disease from ingesting anthrax carcasses, changes in their movement patterns in response to changes in carcass abundance do alter jackals' conspecific encounter rate in ways that may affect the transmission dynamics of other diseases, such as rabies. Our theoretical results provide a method to quantify and analyse the hypothesis that the outbreak of a fatal disease among herbivores can potentially facilitate outbreaks of an entirely different disease among jackals. By analysing carcass visitation data, we find support for our model's prediction that the number of conspecific encounters at resource sites decreases with additional increases in resource availability. Whether or not this site-dependent effect translates to an overall decrease in encounters depends, unexpectedly, on the relationship between the maximum distance of detection and the resource density. PMID:29021163

  5. Study on low intensity aeration oxygenation model and optimization for shallow water

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Ding, Zhibin; Ding, Jian; Wang, Yi

    2018-02-01

    Aeration/oxygenation is an effective measure for improving the self-purification capacity in shallow water treatment, but high energy consumption, noise, and expensive management hinder the development and application of this process. Based on two-film theory, a theoretical model in the form of a three-dimensional partial differential equation for aeration in shallow water is established. In order to simplify the equation, the basic assumptions of gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction are proposed based on engineering practice and are tested against simulated gas holdup obtained from gas-liquid two-phase flow simulations of an aeration tank under low-intensity conditions. Based on these assumptions and the theory of shallow permeability, the three-dimensional partial differential equation model is simplified and a calculation model for low-intensity aeration oxygenation is obtained. The model is verified by comparison with aeration experiments. The conclusions are as follows: (1) the calculation model with gas-liquid mass transfer in the vertical direction and concentration diffusion in the horizontal direction reflects the aeration process well; (2) under low-intensity conditions, long-term aeration and oxygenation is theoretically feasible for enhancing the self-purification capacity of water bodies; (3) for the same total aeration intensity, multipoint distributed aeration has a clear effect on the diffusion of oxygen concentration in the horizontal direction; (4) in shallow water treatment, reducing the size of aeration equipment through miniaturization, arraying, low intensity, and mobility, so as to overcome problems of high energy consumption, large size, and noise, provides a useful reference.
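
    For orientation, the two-film description of gas-liquid oxygen transfer that such models build on is commonly written as follows (schematic form; the paper's exact coefficients and boundary conditions are not reproduced here):

    \[
    \frac{\partial C}{\partial t}\;=\;K_L a\,\bigl(C_s - C\bigr)\;+\;D_h\,\nabla_h^{2}C,
    \]

    where C is the dissolved-oxygen concentration, C_s its saturation value, K_L a the volumetric mass-transfer coefficient governing the vertical gas-liquid transfer, and the last term the horizontal concentration diffusion retained in the simplified model.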

  6. Extended physics as a theoretical framework for systems biology?

    PubMed

    Miquel, Paul-Antoine

    2011-08-01

    In this essay we examine whether a theoretical and conceptual framework for systems biology could be built from the Bailly and Longo (2008, 2009) proposal. These authors aim to understand life as a coherent critical structure, and propose to develop an extended physical approach of evolution, as a diffusion of biomass in a space of complexity. Their attempt leads to a simple mathematical reconstruction of Gould's assumption (1989) concerning the bacterial world as a "left wall of least complexity" that we will examine. Extended physical systems are characterized by their constructive properties. Time is acting and new properties emerge by their history that can open the list of their initial properties. This conceptual and theoretical framework is nothing more than a philosophical assumption, but as such it provides a new and exciting approach concerning the evolution of life, and the transition between physics and biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Zipf’s word frequency law in natural language: A critical review and future directions

    PubMed Central

    2014-01-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
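
    For reference, the law under discussion, together with the Zipf-Mandelbrot generalization that fits word-frequency data more closely, is usually stated as:

    \[
    f(r)\;\propto\;r^{-\alpha}\quad(\alpha\approx 1),
    \qquad\text{or}\qquad
    f(r)\;\propto\;(r+\beta)^{-\alpha},
    \]

    where f(r) is the frequency of the word of rank r. The review's point is that a fit of this form alone cannot discriminate between the proposed mechanisms; the mechanisms' own assumptions must be tested on independent data.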

  8. Comparison of statistical algorithms for detecting homogeneous river reaches along a longitudinal continuum

    NASA Astrophysics Data System (ADS)

    Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.

    2012-02-01

    Seven methods designed to delineate homogeneous river segments, belonging to four families, namely — tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models — are compared, firstly on their principles, then on a case study, and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained in order to achieve meaningful comparisons is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and Hidden Markov Models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic Hidden Markov Models are, moreover, liable to exhibit transition areas.

  9. Failure of Local Thermal Equilibrium in Quantum Friction

    NASA Astrophysics Data System (ADS)

    Intravaia, F.; Behunin, R. O.; Henkel, C.; Busch, K.; Dalvit, D. A. R.

    2016-09-01

    Recent progress in manipulating atomic and condensed matter systems has instigated a surge of interest in nonequilibrium physics, including many-body dynamics of trapped ultracold atoms and ions, near-field radiative heat transfer, and quantum friction. Under most circumstances the complexity of such nonequilibrium systems requires a number of approximations to make theoretical descriptions tractable. In particular, it is often assumed that spatially separated components of a system thermalize with their immediate surroundings, although the global state of the system is out of equilibrium. This powerful assumption reduces the complexity of nonequilibrium systems to the local application of well-founded equilibrium concepts. While this technique appears to be consistent for the description of some phenomena, we show that it fails for quantum friction by underestimating by approximately 80% the magnitude of the drag force. Our results show that the correlations among the components of driven, but steady-state, quantum systems invalidate the assumption of local thermal equilibrium, calling for a critical reexamination of this approach for describing the physics of nonequilibrium systems.

  10. Dynamic behaviour of thin composite plates for different boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprintu, Iuliana, E-mail: sprintui@yahoo.com; Rotaru, Constantin, E-mail: rotaruconstantin@yahoo.com

    2014-12-10

    In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great interest and of theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to the actual load. Thus, from the theory of thin plates, new analytical solutions are proposed corresponding to orthotropic rectangular plates with different boundary conditions. The proposed analytical solutions are considered both for solving the governing equations of orthotropic rectangular plates and for modal analysis.

  11. Theoretical and methodological issues with testing the SCCT and RIASEC models: Comment on Lent, Sheu, and Brown (2010) and Lubinski (2010).

    PubMed

    Armstrong, Patrick Ian; Vogel, David L

    2010-04-01

    The current article replies to comments made by Lent, Sheu, and Brown (2010) and Lubinski (2010) regarding the study "Interpreting the Interest-Efficacy Association From a RIASEC Perspective" (Armstrong & Vogel, 2009). The comments made by Lent et al. and Lubinski highlight a number of important theoretical and methodological issues, including the process of defining and differentiating between constructs, the assumptions underlying Holland's (1959, 1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional types) model and interrelations among constructs specified in social cognitive career theory (SCCT), the importance of incremental validity for evaluating constructs, and methodological considerations when quantifying interest-efficacy correlations and for comparing models using multivariate statistical methods. On the basis of these comments and previous research on the SCCT and Holland models, we highlight the importance of considering multiple theoretical perspectives in vocational research and practice. Alternative structural models are outlined for examining the role of interests, self-efficacy, learning experiences, outcome expectations, personality, and cognitive abilities in the career choice and development process. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  12. The theoretical limit to plant productivity.

    PubMed

    DeLucia, Evan H; Gomez-Casanovas, Nuria; Greenberg, Jonathan A; Hudiburg, Tara W; Kantola, Ilsa B; Long, Stephen P; Miller, Adam D; Ort, Donald R; Parton, William J

    2014-08-19

    Human population and economic growth are accelerating the demand for plant biomass to provide food, fuel, and fiber. The annual increment of biomass to meet these needs is quantified as net primary production (NPP). Here we show that an underlying assumption in some current models may lead to underestimates of the potential production from managed landscapes, particularly of bioenergy crops that have low nitrogen requirements. Using a simple light-use efficiency model and the theoretical maximum efficiency with which plant canopies convert solar radiation to biomass, we provide an upper-envelope NPP unconstrained by resource limitations. This theoretical maximum NPP approached 200 tC ha(-1) yr(-1) at point locations, roughly 2 orders of magnitude higher than most current managed or natural ecosystems. Recalculating the upper envelope estimate of NPP limited by available water reduced it by half or more in 91% of the land area globally. While the high conversion efficiencies observed in some extant plants indicate great potential to increase crop yields without changes to the basic mechanism of photosynthesis, particularly for crops with low nitrogen requirements, realizing such high yields will require improvements in water use efficiency.
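
    The upper-envelope calculation described above is, in essence, a light-use efficiency product; in generic notation (not the paper's exact symbols):

    \[
    \mathrm{NPP}_{\max}\;=\;\varepsilon_{\max}\,\frac{I_{\mathrm{annual}}}{E_{\mathrm{biomass}}}\,f_{C},
    \]

    where ε_max is the theoretical maximum efficiency of converting solar radiation to biomass energy, I_annual the annual incident solar radiation, E_biomass the energy content per unit dry biomass, and f_C the carbon fraction of biomass; imposing the water balance on this product is what reduces the estimate by half or more over most of the land surface.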

  13. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that the meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these strategies generate trustworthy evidence that is needed to develop the validity argument in WBA, allowing for in-depth and meaningful information about professional competence. © 2013 John Wiley & Sons Ltd.

  14. Facilitative Dimensions in Interpersonal Relations: Verifying the Theoretical Assumptions of Carl Rogers in School, Family Education, Client-Centered Therapy, and Encounter Groups

    ERIC Educational Resources Information Center

    Tausch, Reinhard

    1978-01-01

    Summarized numerous different projects which investigated the assumptions made by Carl Rogers about the necessary and sufficient conditions for significant positive change in person-to-person contact. Findings agree with Rogers about the importance of empathy, genuineness, and respect. Presented at the Thirtieth Congress of the Deutsche Gesellschaft für…

  15. Individual Change and the Timing and Onset of Important Life Events: Methods, Models, and Assumptions

    ERIC Educational Resources Information Center

    Grimm, Kevin; Marcoulides, Katerina

    2016-01-01

    Researchers are often interested in studying how the timing of a specific event affects concurrent and future development. When faced with such research questions there are multiple statistical models to consider and those models are the focus of this paper as well as their theoretical underpinnings and assumptions regarding the nature of the…

  16. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate a similar, unknown optimum. Discrepancies might result from an incorrect distributional assumption about the observed quantity. To address this theoretical question, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients; the log-likelihood estimator is slightly more efficient. A real-world case study of surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
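
    The comparison described above can be reproduced in miniature with a toy Gaussian non-homogeneous regression; the predictors, coefficient values, and optimizer in the sketch below are assumptions for illustration, not the study's configuration. The mean_crps function uses the closed-form CRPS of a normal predictive distribution.

# Minimal sketch comparing maximum likelihood and minimum CRPS estimation for a
# Gaussian non-homogeneous regression, mu = a + b*m, log(sigma) = c + d*log(s).
# Synthetic data only; the toy setup is an assumption for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
m = rng.normal(0.0, 1.0, n)          # toy ensemble-mean predictor
s = rng.uniform(0.5, 1.5, n)         # toy ensemble-spread predictor
true = np.array([0.5, 0.9, -0.2, 0.4])
y = rng.normal(true[0] + true[1] * m, np.exp(true[2] + true[3] * np.log(s)))

def unpack(p):
    return p[0] + p[1] * m, np.exp(p[2] + p[3] * np.log(s))

def neg_log_lik(p):
    mu, sigma = unpack(p)
    return -np.sum(norm.logpdf(y, mu, sigma))

def mean_crps(p):
    # Closed-form CRPS of N(mu, sigma^2) evaluated at y.
    mu, sigma = unpack(p)
    z = (y - mu) / sigma
    crps = sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
    return np.mean(crps)

p0 = np.array([0.0, 1.0, 0.0, 0.0])
fit_ml = minimize(neg_log_lik, p0, method="Nelder-Mead")
fit_crps = minimize(mean_crps, p0, method="Nelder-Mead")
print("ML coefficients:  ", np.round(fit_ml.x, 3))
print("CRPS coefficients:", np.round(fit_crps.x, 3))

    With a correctly specified Gaussian model, both fits recover coefficients close to the true values, mirroring the synthetic result described above.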

  17. Mediating objects: scientific and public functions of models in nineteenth-century biology.

    PubMed

    Ludwig, David

    2013-01-01

    The aim of this article is to examine the scientific and public functions of two- and three-dimensional models in the context of three episodes from nineteenth-century biology. I argue that these models incorporate both data and theory by presenting theoretical assumptions in the light of concrete data or organizing data through theoretical assumptions. Despite their diverse roles in scientific practice, they all can be characterized as mediators between data and theory. Furthermore, I argue that these different mediating functions often reflect their different audiences that included specialized scientists, students, and the general public. In this sense, models in nineteenth-century biology can be understood as mediators between theory, data, and their diverse audiences.

  18. Disease Extinction Versus Persistence in Discrete-Time Epidemic Models.

    PubMed

    van den Driessche, P; Yakubu, Abdul-Aziz

    2018-04-12

    We focus on discrete-time infectious disease models in populations that are governed by constant, geometric, Beverton-Holt or Ricker demographic equations, and give a method for computing the basic reproduction number, R0. When R0 < 1 and the demographic population dynamics are asymptotically constant or under geometric growth (non-oscillatory), we prove global asymptotic stability of the disease-free equilibrium of the disease models. Under the same demographic assumption, when R0 > 1, we prove uniform persistence of the disease. We apply our theoretical results to specific discrete-time epidemic models that are formulated for SEIR infections, cholera in humans and anthrax in animals. Our simulations show that a unique endemic equilibrium of each of the three specific disease models is asymptotically stable whenever R0 > 1.
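
    For discrete-time models of this type, R0 is commonly obtained as the spectral radius of a next-generation matrix built from the linearization of the infection subsystem near the disease-free equilibrium. The sketch below illustrates that computation for an assumed two-compartment (E, I) toy model; the splitting into F and T and the parameter values are illustrative assumptions, not the paper's SEIR, cholera, or anthrax formulations.

# Minimal next-generation-matrix computation of R0 for a discrete-time model,
# splitting the linearized infection subsystem x_{t+1} = (F + T) x_t into new
# infections (F) and transitions (T). Numerical entries are assumptions.
import numpy as np

def basic_reproduction_number(F, T):
    """R0 = spectral radius of F (I - T)^{-1}."""
    n = F.shape[0]
    K = F @ np.linalg.inv(np.eye(n) - T)   # next-generation matrix
    return max(abs(np.linalg.eigvals(K)))

# Toy SEI-type example: per-step infection rate beta, progression probability
# sigma, recovery probability gamma (all assumed).
beta, sigma, gamma = 0.4, 0.3, 0.2
F = np.array([[0.0, beta],
              [0.0, 0.0]])                 # new infections enter E via contact with I
T = np.array([[1.0 - sigma, 0.0],
              [sigma, 1.0 - gamma]])       # survival/transition within E and I
print("R0 =", basic_reproduction_number(F, T))

    For this toy splitting the computation reduces to R0 = beta/gamma, the familiar ratio of the per-step infection rate to the recovery probability.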

  19. Reducing the time-lag between onset of chest pain and seeking professional medical help: a theory-based review

    PubMed Central

    2013-01-01

    Background Research suggests that there are a number of factors which can be associated with delay in a patient seeking professional help following chest pain, including demographic and social factors. These factors may have an adverse impact on the efficacy of interventions which to date have had limited success in improving patient action times. Theory-based methods of review are becoming increasingly recognised as important additions to conventional systematic review methods. They can be useful to gain additional insights into the characteristics of effective interventions by uncovering complex underlying mechanisms. Methods This paper describes the further analysis of research papers identified in a conventional systematic review of published evidence. The aim of this work was to investigate the theoretical frameworks underpinning studies exploring the issue of why people having a heart attack delay seeking professional medical help. The study used standard review methods to identify papers meeting the inclusion criterion, and carried out a synthesis of data relating to theoretical underpinnings. Results Thirty six papers from the 53 in the original systematic review referred to a particular theoretical perspective, or contained data which related to theoretical assumptions. The most frequently mentioned theory was the self-regulatory model of illness behaviour. Papers reported the potential significance of aspects of this model including different coping mechanisms, strategies of denial and varying models of treatment seeking. Studies also drew attention to the potential role of belief systems, applied elements of attachment theory, and referred to models of maintaining integrity, ways of knowing, and the influence of gender. Conclusions The review highlights the need to examine an individual’s subjective experience of and response to health threats, and confirms the gap between knowledge and changed behaviour. Interventions face key challenges if they are to influence patient perceptions regarding seriousness of symptoms; varying processes of coping; and obstacles created by patient perceptions of their role and responsibilities. A theoretical approach to review of these papers provides additional insight into the assumptions underpinning interventions, and illuminates factors which may impact on their efficacy. The method thus offers a useful supplement to conventional systematic review methods. PMID:23388093

  20. Dendritic solidification. I - Analysis of current theories and models. II - A model for dendritic growth under an imposed thermal gradient

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1985-01-01

    A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized, and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on a modification of an analysis due to Burden and Hunt (1974) and correctly predicts, in all respects, the transition from a dendritic to a planar interface at both very low and very high growth rates.

  1. Density functional computational studies on the glucose and glycine Maillard reaction: Formation of the Amadori rearrangement products

    NASA Astrophysics Data System (ADS)

    Jalbout, Abraham F.; Roy, Amlan K.; Shipar, Abul Haider; Ahmed, M. Samsuddin

    Theoretical energy changes of various intermediates leading to the formation of the Amadori rearrangement products (ARPs) under different mechanistic assumptions have been calculated, using open-chain glucose (O-Glu) or closed-chain glucose (A-Glu and B-Glu) with glycine (Gly) as a model for the Maillard reaction. Density functional theory (DFT) computations have been applied to the proposed mechanisms under different pH conditions. Thus, the possibility of the formation of different compounds and the electronic energy changes for the different steps in the proposed mechanisms have been evaluated. B-Glu has been found to be more efficient than A-Glu, and A-Glu more efficient than O-Glu, in the reaction. The reaction under basic conditions is the most favorable for the formation of ARPs. Other reaction pathways have also been computed and discussed in this work.

  2. Theory and simulation of the dynamics, deformation, and breakup of a chain of superparamagnetic beads under a rotating magnetic field

    NASA Astrophysics Data System (ADS)

    Vázquez-Quesada, A.; Franke, T.; Ellero, M.

    2017-03-01

    In this work, an analytical model for the behavior of superparamagnetic chains under the effect of a rotating magnetic field is presented. It is postulated that the relevant mechanisms for describing the shape and breakup of the chains into smaller fragments are the induced dipole-dipole magnetic force on the external beads, their translational and rotational drag forces, and the tangential lubrication between particles. Under this assumption, the characteristic S-shape of the chain can be qualitatively understood. Furthermore, based on a straight chain approximation, a novel analytical expression for the critical frequency for the chain breakup is obtained. In order to validate the model, the analytical expressions are compared with full three-dimensional smoothed particle hydrodynamics simulations of magnetic beads showing excellent agreement. Comparison with previous theoretical results and experimental data is also reported.

  3. Adolescent Egocentrism and Formal Operations: Tests of a Theoretical Assumption.

    ERIC Educational Resources Information Center

    Lapsley, David K.; And Others

    1986-01-01

    Describes two studies of the theoretical relation between adolescent egocentrism and formal operations. Study 1 used the Adolescent Egocentrism Scale (AES) and Lunzer's battery of formal reasoning tasks to assess 183 adolescents. Study 2 administered the AES, the Imaginary Audience Scale (IAS), and the Test of Logical Thinking to 138 adolescents.…

  4. The future of future-oriented cognition in non-humans: theory and the empirical case of the great apes.

    PubMed

    Osvath, Mathias; Martin-Ordas, Gema

    2014-11-05

    One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not 'real' future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes 'truly' future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  5. The future of future-oriented cognition in non-humans: theory and the empirical case of the great apes

    PubMed Central

    Osvath, Mathias; Martin-Ordas, Gema

    2014-01-01

    One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not ‘real’ future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes ‘truly’ future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. PMID:25267827

  6. Modeling the thickness dependence of the magnetic phase transition temperature in thin FeRh films

    NASA Astrophysics Data System (ADS)

    Ostler, Thomas Andrew; Barton, Craig; Thomson, Thomas; Hrkac, Gino

    2017-02-01

    FeRh and its first-order phase transition can open new routes for magnetic hybrid materials and devices under the assumption that it can be exploited in ultra-thin-film structures. Motivated by experimental measurements showing an unexpected increase in the phase transition temperature with decreasing thickness of FeRh on top of MgO, we develop a computational model to investigate strain effects of FeRh in such magnetic structures. Our theoretical results show that the presence of the MgO interface results in a strain that changes the magnetic configuration which drives the anomalous behavior.

  7. An interpretation of the narrow positron annihilation feature from X-ray nova Muscae 1991

    NASA Technical Reports Server (NTRS)

    Chen, Wan; Gehrels, Neil; Cheng, F. H.

    1993-01-01

    The physical mechanism responsible for the narrow redshifted positron annihilation gamma-ray line from the X-ray nova Muscae 1991 is studied. The orbital inclination angle of the system is estimated and its black hole mass is constrained under the assumptions that the annihilation line centroid redshift is purely gravitational and that the line width is due to the combined effect of temperature broadening and disk rotation. The large black hole mass lower limit of 8 solar masses and the high binary mass ratio it implies pose a serious challenge to theoretical models of the formation and evolution of massive binaries.

  8. The span as a fundamental factor in airplane design

    NASA Technical Reports Server (NTRS)

    Lachmann, G

    1928-01-01

    Previous theoretical investigations of steady curvilinear flight did not afford a suitable criterion of "maneuverability," which is very important for judging combat, sport and stunt-flying airplanes. The idea of rolling ability, i.e., of the speed of rotation of the airplane about its X axis in rectilinear flight at constant speed and for a constant, suddenly produced deflection of the ailerons, is introduced and tested under simplified assumptions for the air-force distribution over the span. This leads to the following conclusions: the effect of the moment of inertia about the X axis is negligibly small, since the speed of rotation very quickly reaches a uniform value.

  9. Spatially Resolved Spectroscopy of the PMS Quadruple GG Tau: Evidence for a Substellar Companion

    NASA Astrophysics Data System (ADS)

    White, R. J.; Ghez, A. M.; Schultz, G.; Reid, I. N.

    1998-05-01

    We present spatially resolved optical spectra from HST (FOS) and the Keck Telescope (HIRES & LRIS) of the components of the quadruple PMS system GG Tau. According to the latest PMS evolutionary models, the coldest component of this system, GG Tau/c B, appears to be substellar with a preliminary mass of only 50 M_J. This putative brown dwarf is especially intriguing as it shows clear signatures of accretion. The components of this quadruple, which span a wide range in mass, are used to test theoretical low mass PMS evolutionary models under the assumption that the components should be coeval.

  10. Amplified total internal reflection: theory, analysis, and demonstration of existence via FDTD.

    PubMed

    Willis, Keely J; Schneider, John B; Hagness, Susan C

    2008-02-04

    The explanation of wave behavior upon total internal reflection from a gainy medium has defied consensus for 40 years. We examine this question using both the finite-difference time-domain (FDTD) method and theoretical analyses. FDTD simulations of a localized wave impinging on a gainy half space are based directly on Maxwell's equations and make no underlying assumptions. They reveal that amplification occurs upon total internal reflection from a gainy medium; conversely, amplification does not occur for incidence below the critical angle. Excellent agreement is obtained between the FDTD results and an analytical formulation that employs a new branch cut in the complex "propagation-constant" plane.

  11. Information theoretic quantification of diagnostic uncertainty.

    PubMed

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
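
    The two ingredients discussed above, the Bayesian post-test probability and an information-theoretic measure of diagnostic uncertainty, can be made concrete in a few lines; the pre-test probability, sensitivity, and specificity below are assumed values chosen only for illustration.

# Sketch of Bayes' rule for a dichotomous test result plus the Shannon
# (binary) entropy of the disease probability before and after the test.
# Numerical values are illustrative assumptions.
import math

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' rule for a positive or negative dichotomous test result."""
    if positive:
        p_result_disease = sensitivity
        p_result_healthy = 1.0 - specificity
    else:
        p_result_disease = 1.0 - sensitivity
        p_result_healthy = specificity
    numerator = p_result_disease * pretest
    return numerator / (numerator + p_result_healthy * (1.0 - pretest))

def binary_entropy(p):
    """Diagnostic uncertainty in bits for a disease probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

pretest, sens, spec = 0.30, 0.90, 0.85   # assumed values
post = post_test_probability(pretest, sens, spec, positive=True)
print(f"post-test P(disease) = {post:.3f}")
print(f"uncertainty before: {binary_entropy(pretest):.3f} bits, "
      f"after: {binary_entropy(post):.3f} bits")

    In this example a positive result raises the disease probability from 0.30 to about 0.72, while the Shannon uncertainty falls only slightly, illustrating how a test can shift the probability substantially yet leave considerable residual uncertainty.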

  12. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
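
    The flavor of the numerical evaluation described above can be conveyed with a brute-force Monte Carlo sketch: for each candidate sample size, simulate the free sampling phase, apply a simple choose-the-higher-sample-mean rule, and score the expected payoff of the consequential choice net of an assumed per-draw cost. The two toy payoff distributions, the decision rule, and the cost are assumptions for illustration and do not reproduce the paper's probabilistic inference treatment.

# Monte Carlo sketch of an "optimal" sample size in the sampling paradigm.
import numpy as np

rng = np.random.default_rng(1)

def draw_option_a(size):            # risky option: 4 with prob 0.8, else 0 (assumed)
    return np.where(rng.random(size) < 0.8, 4.0, 0.0)

def draw_option_b(size):            # safe option: sure payoff of 3 (assumed)
    return np.full(size, 3.0)

TRUE_MEANS = {"A": 3.2, "B": 3.0}   # expected payoffs implied by the toy distributions

def expected_payoff(n, cost_per_draw=0.0, reps=10000):
    """Expected payoff of the final choice after n free draws from each option,
    minus an assumed cost for the 2*n exploratory draws."""
    total = 0.0
    for _ in range(reps):
        choice = "A" if draw_option_a(n).mean() >= draw_option_b(n).mean() else "B"
        total += TRUE_MEANS[choice]
    return total / reps - cost_per_draw * 2 * n

for n in (1, 2, 5, 10, 20, 40):
    print(n, round(expected_payoff(n, cost_per_draw=0.005), 3))

    The sample size that maximizes this net expected payoff plays the role of the normative benchmark discussed above; under the assumed cost, very large samples stop paying for themselves.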

  13. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
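
    A generic numerical illustration of an ensemble of leverage ratios adjusting toward a common, time-varying target can be written as a simple Euler-Maruyama simulation; the drift and volatility specification and all parameter values below are assumptions for illustration and are not the coupled Ito equations or the Fokker-Planck treatment used in the paper.

# Euler-Maruyama sketch of leverage ratios coupled through their ensemble mean.
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_steps, dt = 500, 250, 1.0 / 250.0
kappa, sigma = 1.5, 0.15                  # adjustment pace and volatility (assumed)
L = rng.uniform(0.2, 0.8, n_firms)        # initial leverage ratios (assumed)

ensemble_mean_path = np.empty(n_steps)
for t in range(n_steps):
    target = L.mean()                     # coupling: firms adjust toward the ensemble mean
    dW = rng.normal(0.0, np.sqrt(dt), n_firms)
    L = L + kappa * (target - L) * dt + sigma * L * dW
    L = np.clip(L, 0.0, 1.0)              # a leverage ratio stays in [0, 1]
    ensemble_mean_path[t] = L.mean()

print("time-varying target (first 5 steps):", np.round(ensemble_mean_path[:5], 4))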

  14. On neutral metacommunity patterns of river basins at different scales of aggregation

    NASA Astrophysics Data System (ADS)

    Convertino, Matteo; Muneepeerakul, Rachata; Azaele, Sandro; Bertuzzo, Enrico; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2009-08-01

    Neutral metacommunity models for spatial biodiversity patterns are implemented on river networks acting as ecological corridors at different resolution. Coarse-graining elevation fields (under the constraint of preserving the basin mean elevation) produce a set of reconfigured drainage networks. The hydrologic assumption made implies uniform runoff production such that each link has the same habitat capacity. Despite the universal scaling properties shown by river basins regardless of size, climate, vegetation, or exposed lithology, we find that species richness at local and regional scales exhibits resolution-dependent behavior. In addition, we investigate species-area relationships and rank-abundance patterns. The slopes of the species-area relationships, which are consistent over coarse-graining resolutions, match those found in real landscapes in the case of long-distance dispersal. The rank-abundance patterns are independent of the resolution over a broad range of dispersal length. Our results confirm that strong interactions occur between network structure and the dispersal of species and that under the assumption of neutral dynamics, these interactions produce resolution-dependent biodiversity patterns that diverge from expectations following from universal geomorphic scaling laws. Both in theoretical and in applied ecology studying how patterns change in resolution is relevant for understanding how ecological dynamics work in fragmented landscape and for sampling and biodiversity management campaigns, especially in consideration of climate change.

  15. Why you cannot transform your way out of trouble for small counts.

    PubMed

    Warton, David I

    2018-03-01

    While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation. This result has clear implications for the analysis of counts as often implemented in the applied sciences, but particularly for multivariate analysis in ecology. Multivariate discrete data are often collected in ecology, typically with a large proportion of zeros, and it is currently widespread to use methods of analysis that do not account for differences in variance across observations nor across responses. Simulations demonstrate that failure to account for the mean-variance relationship can have particularly severe consequences in this context, and also in the univariate context if the sampling design is unbalanced. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
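
    The claim is easy to check by simulation: for Poisson counts with small means, the variance of log(y + 1) stays roughly proportional to the mean (the variance-to-mean ratio approaches a nonzero constant as the mean shrinks), whereas for large means the transformation does stabilize the variance and the ratio falls toward zero. The means and the transformation below are illustrative choices.

# Simulation sketch: variance of log(y + 1) for Poisson counts of varying mean.
import numpy as np

rng = np.random.default_rng(3)
means = [0.1, 0.5, 1.0, 5.0, 20.0]
n = 200000

print(" mean   var[log(y+1)]   var/mean")
for mu in means:
    y = rng.poisson(mu, n)
    v = np.log1p(y).var()
    print(f"{mu:5.1f}   {v:12.4f}   {v / mu:8.4f}")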

  16. Why Are There Social Gradients in Preventative Health Behavior? A Perspective from Behavioral Ecology

    PubMed Central

    Nettle, Daniel

    2010-01-01

    Background Within affluent populations, there are marked socioeconomic gradients in health behavior, with people of lower socioeconomic position smoking more, exercising less, having poorer diets, complying less well with therapy, using medical services less, ignoring health and safety advice more, and being less health-conscious overall, than their more affluent peers. Whilst the proximate mechanisms underlying these behavioral differences have been investigated, the ultimate causes have not. Methodology/Principal Findings This paper presents a theoretical model of why socioeconomic gradients in health behavior might be found. I conjecture that lower socioeconomic position is associated with greater exposure to extrinsic mortality risks (that is, risks that cannot be mitigated through behavior), and that health behavior competes for people's time and energy against other activities which contribute to their fitness. Under these two assumptions, the model shows that the optimal amount of health behavior to perform is indeed less for people of lower socioeconomic position. Conclusions/Significance The model predicts an exacerbatory dynamic of poverty, whereby the greater exposure of poor people to unavoidable harms engenders a disinvestment in health behavior, resulting in a final inequality in health outcomes which is greater than the initial inequality in material conditions. I discuss the assumptions of the model, and its implications for strategies for the reduction of health inequalities. PMID:20967214

  17. Sense of Community as Construct and Theory: Authors' Response to McMillan

    ERIC Educational Resources Information Center

    Nowell, Branda; Boyd, Neil

    2011-01-01

    In this article, we respond to criticisms posed by McMillan (2011) of our recent paper, "Viewing Community as Responsibility as well as a Resource: Deconstructing the Theoretical Roots of Psychological Sense of Community." We clarify that the focus of our article was to explore the macro theoretical frameworks and second-order assumptions that…

  18. Argumentation and Participation in the Primary Mathematics Classroom: Two Episodes and Related Theoretical Abductions

    ERIC Educational Resources Information Center

    Krummheuer, Gotz

    2007-01-01

    The main assumption of this article is that learning mathematics depends on the student's participation in processes of collective argumentation. On the empirical level, such processes will be analyzed with Toulmin's theory of argumentation and Goffman's idea of decomposition of the speaker's role. On the theoretical level, different statuses of…

  19. Social Representations of the Development of Intelligence, Parental Values and Parenting Styles: A Theoretical Model for Analysis

    ERIC Educational Resources Information Center

    Miguel, Isabel; Valentim, Joaquim Pires; Carugati, Felice

    2013-01-01

    Within the theoretical framework of social representations theory, a substantial body of literature has advocated and shown that, as interpretative systems and forms of knowledge concurring in the construction of a social reality, social representations are guides for action, influencing behaviours and social relations. Based on this assumption,…

  20. Energy transport in weakly nonlinear wave systems with narrow frequency band excitation.

    PubMed

    Kartashova, Elena

    2012-10-01

    A novel discrete model (D model) is presented describing nonlinear wave interactions in systems with small and moderate nonlinearity under narrow frequency band excitation. It integrates in a single theoretical frame two mechanisms of energy transport between modes, namely, intermittency and energy cascade, and gives the conditions under which each regime will take place. Conditions for the formation of a cascade, cascade direction, conditions for cascade termination, etc., are given and depend strongly on the choice of excitation parameters. The energy spectra of a cascade may be computed, yielding discrete and continuous energy spectra. The model does not require statistical assumptions, as all effects are derived from the interaction of distinct modes. In the example given-surface water waves with dispersion function ω(2)=gk and small nonlinearity-the D model predicts asymmetrical growth of side-bands for Benjamin-Feir instability, while the transition from discrete to continuous energy spectrum, excitation parameters properly chosen, yields the saturated Phillips' power spectrum ~g(2)ω(-5). The D model can be applied to the experimental and theoretical study of numerous wave systems appearing in hydrodynamics, nonlinear optics, electrodynamics, plasma, convection theory, etc.

  1. A Test of Major Assumptions about Behavior Change: A Comprehensive Look at the Effects of Passive and Active HIV-Prevention Interventions Since the Beginning of the Epidemic

    ERIC Educational Resources Information Center

    Albarracin, Dolores; Gillette, Jeffrey C.; Earl, Allison N.; Glasman, Laura R.; Durantini, Marta R.; Ho, Moon-Ho

    2005-01-01

    This meta-analysis tested the major theoretical assumptions about behavior change by examining the outcomes and mediating mechanisms of different preventive strategies in a sample of 354 HIV-prevention interventions and 99 control groups, spanning the past 17 years. There were 2 main conclusions from this extensive review. First, the most…

  2. Theoretical models and simulation codes to investigate bystander effects and cellular communication at low doses

    NASA Astrophysics Data System (ADS)

    Ballarini, F.; Alloni, D.; Facoetti, A.; Mairani, A.; Nano, R.; Ottolenghi, A.

    Astronauts in space are continuously exposed to low doses of ionizing radiation from Galactic Cosmic Rays During the last ten years the effects of low radiation doses have been widely re-discussed following a large number of observations on the so-called non targeted effects in particular bystander effects The latter consist of induction of cytogenetic damage in cells not directly traversed by radiation most likely as a response to molecular messengers released by directly irradiated cells Bystander effects which are observed both for lethal endpoints e g clonogenic inactivation and apoptosis and for non-lethal ones e g mutations and neoplastic transformation tend to show non-linear dose responses This might have significant consequences in terms of low-dose risk which is generally calculated on the basis of the Linear No Threshold hypothesis Although the mechanisms underlying bystander effects are still largely unknown it is now clear that two types of cellular communication i e via gap junctions and or release of molecular messengers into the extracellular environment play a fundamental role Theoretical models and simulation codes can be of help in elucidating such mechanisms In the present paper we will review different available modelling approaches including one that is being developed at the University of Pavia The focus will be on the different assumptions adopted by the various authors and on the implications of such assumptions in terms of non-targeted radiobiological damage and more generally low-dose

  3. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
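
    A minimal sketch of the likelihood-ratio decision rule, and of the Mirror Effect it implies when memory strength varies, is given below under the usual equal-variance Gaussian assumptions; the strength values are arbitrary.

# Likelihood-ratio rule in an equal-variance Gaussian signal detection model:
# old-item evidence ~ N(d, 1), new-item evidence ~ N(0, 1). Responding "old"
# when LR > c is equivalent to a criterion at x = d/2 + ln(c)/d.
# The strength values below are assumptions for illustration.
import math
from scipy.stats import norm

def hit_and_false_alarm(d, criterion_lr=1.0):
    x_c = d / 2.0 + math.log(criterion_lr) / d
    hit = 1.0 - norm.cdf(x_c, loc=d, scale=1.0)
    false_alarm = 1.0 - norm.cdf(x_c, loc=0.0, scale=1.0)
    return hit, false_alarm

weak, strong = 1.0, 2.0                  # assumed d' for weak and strong conditions
hw, fw = hit_and_false_alarm(weak)
hs, fs = hit_and_false_alarm(strong)
print(f"weak:   HR={hw:.3f}  FA={fw:.3f}")
print(f"strong: HR={hs:.3f}  FA={fs:.3f}")
# Mirror Effect ordering with an unbiased criterion:
# FA(strong) < FA(weak) < HR(weak) < HR(strong)

    Setting criterion_lr away from 1 introduces bias; for sufficiently extreme values the ordering of the false-alarm (or hit) rates across conditions can reverse, which is one way bias can obscure the Mirror Effect, as discussed above.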

  4. Implementing Geographical Key Concepts: Design of a Symbiotic Teacher Training Course Based on Empirical and Theoretical Evidence

    ERIC Educational Resources Information Center

    Fögele, Janis; Mehren, Rainer

    2015-01-01

    A central desideratum for the professionalization of qualified teachers is an improved practice of further teacher education. The present work constitutes a course of in-service training, which is built upon both a review of empirical findings concerning the efficacy of in-service training courses for teachers and theoretical assumptions about the…

  5. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data has been generated and compared with literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  6. Critical frontier of the triangular Ising antiferromagnet in a field

    NASA Astrophysics Data System (ADS)

    Qian, Xiaofeng; Wegewijs, Maarten; Blöte, Henk W.

    2004-03-01

    We study the critical line of the triangular Ising antiferromagnet in an external magnetic field by means of a finite-size analysis of results obtained by transfer-matrix and Monte Carlo techniques. We compare the shape of the critical line with predictions of two different theoretical scenarios. Both scenarios, while plausible, involve assumptions. The first scenario is based on the generalization of the model to a vertex model, and the assumption that the exact analytic form of the critical manifold of this vertex model is determined by the zeroes of an O(2) gauge-invariant polynomial in the vertex weights. However, it is not possible to fit the coefficients of such polynomials, up to order 10, so as to reproduce the numerical data for the critical points. The second theoretical prediction is based on the assumption that a renormalization mapping of the Ising model onto the Coulomb gas exists, and on an analysis of the resulting renormalization equations. It leads to a shape of the critical line that is inconsistent with the first prediction, but consistent with the numerical data.

  7. Additive Genetic Variability and the Bayesian Alphabet

    PubMed Central

    Gianola, Daniel; de los Campos, Gustavo; Hill, William G.; Manfredi, Eduardo; Fernando, Rohan

    2009-01-01

    The use of all available molecular markers in statistical models for prediction of quantitative traits has led to what could be termed a genomic-assisted selection paradigm in animal and plant breeding. This article provides a critical review of some theoretical and statistical concepts in the context of genomic-assisted genetic evaluation of animals and crops. First, relationships between the (Bayesian) variance of marker effects in some regression models and additive genetic variance are examined under standard assumptions. Second, the connection between marker genotypes and resemblance between relatives is explored, and linkages between a marker-based model and the infinitesimal model are reviewed. Third, issues associated with the use of Bayesian models for marker-assisted selection, with a focus on the role of the priors, are examined from a theoretical angle. The sensitivity of a Bayesian specification that has been proposed (called “Bayes A”) with respect to priors is illustrated with a simulation. Methods that can solve potential shortcomings of some of these Bayesian regression procedures are discussed briefly. PMID:19620397

  8. Ontological addiction theory: Attachment to me, mine, and I.

    PubMed

    Van Gordon, William; Shonin, Edo; Diouri, Sofiane; Garcia-Campayo, Javier; Kotera, Yasuhiro; Griffiths, Mark D

    2018-06-07

    Background Ontological addiction theory (OAT) is a novel metaphysical model of psychopathology and posits that human beings are prone to forming implausible beliefs concerning the way they think they exist, and that these beliefs can become addictive leading to functional impairments and mental illness. The theoretical underpinnings of OAT derive from the Buddhist philosophical perspective that all phenomena, including the self, do not manifest inherently or independently. Aims and methods This paper outlines the theoretical foundations of OAT along with indicative supportive empirical evidence from studies evaluating meditation awareness training as well as studies investigating non-attachment, emptiness, compassion, and loving-kindness. Results OAT provides a novel perspective on addiction, the factors that underlie mental illness, and how beliefs concerning selfhood are shaped and reified. Conclusion In addition to continuing to test the underlying assumptions of OAT, future empirical research needs to determine how ontological addiction fits with extant theories of self, reality, and suffering, as well with more established models of addiction.

  9. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    PubMed

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity over the last decade have spurred the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the biological questions they are designed to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Recent theoretical developments and experimental studies pertinent to vortex flow aerodynamics - With a view towards design

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Luckring, J. M.

    1978-01-01

    A review is presented of recent progress in a research program directed towards the development of an improved vortex-flow technology base. It is pointed out that separation induced vortex-flows from the leading and side edges play an important role in the high angle-of-attack aerodynamic characteristics of a wide range of modern aircraft. In the analysis and design of high-speed aircraft, a detailed knowledge of this type of separation is required, particularly with regard to critical wind loads and the stability and performance at various off-design conditions. A description of analytical methods is presented. The theoretical methods employed are divided into two classes which are dependent upon the underlying aerodynamic assumptions. One conical flow method is considered along with three different nonconical flow methods. Comparisons are conducted between the described methods and available aerodynamic data. Attention is also given to a vortex flow drag study and a vortex flow wing design using suction analogy.

  11. Conceptualizing structural change in health promotion: why we still need to know more about theory.

    PubMed

    Gelius, Peter; Rütten, Alfred

    2017-02-28

    As recently discussed in the public health literature, many questions concerning 'structural' approaches in health promotion seem to remain unanswered. We argue that, before attempting to provide answers, it is essential to clarify the underlying theoretical assumptions in order to arrive at the right questions one should ask. To this end, we introduce into the current debate an existing theoretical framework that helps conceptualize structural and individual aspects of health promotion interventions at different levels of action. Using an example from the field of physical activity promotion, we illustrate how an integrated framework can help researchers and health promoters rethink important issues and design better interventions. In particular, such an approach may help overcome perceived distinctions between different types of approaches, re-conceptualize ideas about the effectiveness of interventions, and appropriately address issues of health disparities. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    PubMed

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
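
    The one-dimensional two-channel case mentioned above is often handled with the cross-relation identity h2 * y1 = h1 * y2 (both equal h1 * h2 * x), which turns blind identification of the filters into a null-space problem. The sketch below is a generic noise-free illustration of that idea, with an assumed known filter length, and is not the authors' two-dimensional subspace or likelihood algorithms.

# Cross-relation blind identification of two FIR channels from their outputs.
import numpy as np

def convolution_matrix(y, L):
    """(len(y)+L-1) x L matrix C such that C @ h == np.convolve(y, h)."""
    N = len(y)
    C = np.zeros((N + L - 1, L))
    for j in range(L):
        C[j:j + N, j] = y
    return C

rng = np.random.default_rng(4)
L = 4                                   # assumed (known) FIR filter length
x = rng.normal(size=200)                # unknown source signal
h1 = rng.normal(size=L)
h2 = rng.normal(size=L)
y1 = np.convolve(x, h1)                 # observed channel outputs (noise-free)
y2 = np.convolve(x, h2)

# Cross relation: conv(y2, h1) - conv(y1, h2) = 0, i.e. A @ [h1, h2] = 0.
A = np.hstack([convolution_matrix(y2, L), -convolution_matrix(y1, L)])
_, _, Vt = np.linalg.svd(A, full_matrices=False)
h_est = Vt[-1]                          # null vector = [h1, h2] up to a common scale
h1_est, h2_est = h_est[:L], h_est[L:]

scale = h1[0] / h1_est[0]               # resolve the scale ambiguity for display only
print("h1 true:", np.round(h1, 3), " est:", np.round(h1_est * scale, 3))
print("h2 true:", np.round(h2, 3), " est:", np.round(h2_est * scale, 3))

    Once the filters are identified (up to a common scale), the signal or image can be recovered by a standard multichannel, non-blind deconvolution step, as described above.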

  13. Epidemiology as discourse: the politics of development institutions in the Epidemiological Profile of El Salvador

    PubMed Central

    Aviles, L

    2001-01-01

    STUDY OBJECTIVE—To determine the ways in which institutions devoted to international development influence epidemiological studies.
DESIGN—This article takes a descriptive epidemiological study of El Salvador, Epidemiological Profile, conducted in 1994 by the US Agency for International Development, as a case study. The methods include discourse analysis in order to uncover the ideological basis of the report and its characteristics as a discourse of development.
SETTING—El Salvador.
RESULTS—The Epidemiological Profile theoretical basis, the epidemiological transition theory, embodies the ethnocentrism of a "colonizer's model of the world." This report follows the logic of a discourse of development by depoliticising development, creating abnormalities, and relying on the development consulting industry. The epidemiological transition theory serves as an ideology that legitimises and dissimulates the international order.
CONCLUSIONS—Even descriptive epidemiological assessments or epidemiological profiles are imbued with theoretical assumptions shaped by the institutional setting under which epidemiological investigations are conducted.


Keywords: El Salvador; politics. PMID:11160170

  14. The brainstem reticular formation is a small-world, not scale-free, network

    PubMed Central

    Humphries, M.D; Gurney, K; Prescott, T.J

    2005-01-01

    Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
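
    The kind of quantitative check proposed above can be sketched with standard graph metrics: a small-world network shows much higher clustering than a comparable random graph while keeping a similar characteristic path length. The Watts-Strogatz graph below merely stands in for real connectivity data, and the simple sigma index is one common, though not the only, way to summarize the comparison.

# Small-world check: clustering and path length versus a matched random graph.
import networkx as nx

def small_world_summary(G, seed=0):
    n, m = G.number_of_nodes(), G.number_of_edges()
    p = 2.0 * m / (n * (n - 1))                 # edge density of G
    R = nx.gnp_random_graph(n, p, seed=seed)    # matched Erdos-Renyi reference
    # Use the largest connected component in case the random graph is disconnected.
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
    L, L_rand = (nx.average_shortest_path_length(G),
                 nx.average_shortest_path_length(R))
    sigma = (C / C_rand) / (L / L_rand)         # sigma > 1 suggests small-world structure
    return C, C_rand, L, L_rand, sigma

G = nx.connected_watts_strogatz_graph(200, 10, 0.1, seed=1)   # stand-in network
C, C_rand, L, L_rand, sigma = small_world_summary(G)
print(f"C={C:.3f} (random {C_rand:.3f}), L={L:.2f} (random {L_rand:.2f}), sigma={sigma:.2f}")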

  15. Time evolution of predictability of epidemics on networks.

    PubMed

    Holme, Petter; Takaguchi, Taro

    2015-04-01

    Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information-i.e., knowing the state of each individual with respect to the disease-the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.

  16. Swiss and Dutch "consumer-driven health care": ideal model or reality?

    PubMed

    Okma, Kieke G H; Crivelli, Luca

    2013-02-01

    This article addresses three topics. First, it reports on the international interest in the health care reforms of Switzerland and The Netherlands in the 1990s and early 2000s that operate under the label "managed competition" or "consumer-driven health care." Second, the article reviews the behavioral assumptions that make the case for the model of "managed competition" plausible. Third, it analyzes the actual reform experience of Switzerland and The Netherlands to assess to what extent it confirms the validity of those assumptions. The article concludes that there is a triple gap in the understanding of those topics: first, a gap between the theoretical model of managed competition and the reforms as implemented in both Switzerland and The Netherlands; second, a gap between the expectations of policy-makers and the results of the reforms; and third, a gap between reform outcomes and the observations of external commentators who have embraced the reforms as the ultimate success of "consumer-driven health care." The article concludes with a discussion of the implications of this "triple gap". Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Time evolution of predictability of epidemics on networks

    NASA Astrophysics Data System (ADS)

    Holme, Petter; Takaguchi, Taro

    2015-04-01

    Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information—i.e., knowing the state of each individual with respect to the disease—the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.

  18. Authors' response: the primacy of conscious decision making.

    PubMed

    Shanks, David R; Newell, Ben R

    2014-02-01

    The target article sought to question the common belief that our decisions are often biased by unconscious influences. While many commentators offer additional support for this perspective, others question our theoretical assumptions, empirical evaluations, and methodological criteria. We rebut in particular the starting assumption that all decision making is unconscious, and that the onus should be on researchers to prove conscious influences. Further evidence is evaluated in relation to the core topics we reviewed (multiple-cue judgment, deliberation without attention, and decisions under uncertainty), as well as priming effects. We reiterate a key conclusion from the target article, namely, that it now seems to be generally accepted that awareness should be operationally defined as reportable knowledge, and that such knowledge can only be evaluated by careful and thorough probing. We call for future research to pay heed to the different ways in which awareness can intervene in decision making (as identified in our lens model analysis) and to employ suitable methodology in the assessment of awareness, including the requirements that awareness assessment must be reliable, relevant, immediate, and sensitive.

  19. Failure of local thermal equilibrium in quantum friction

    DOE PAGES

    Intravaia, Francesco; Behunin, Ryan; Henkel, Carsten; ...

    2016-09-01

    Recent progress in manipulating atomic and condensed matter systems has instigated a surge of interest in nonequilibrium physics, including many-body dynamics of trapped ultracold atoms and ions, near-field radiative heat transfer, and quantum friction. Under most circumstances the complexity of such nonequilibrium systems requires a number of approximations to make theoretical descriptions tractable. In particular, it is often assumed that spatially separated components of a system thermalize with their immediate surroundings, although the global state of the system is out of equilibrium. This powerful assumption reduces the complexity of nonequilibrium systems to the local application of well-founded equilibrium concepts. While this technique appears to be consistent for the description of some phenomena, we show that it fails for quantum friction by underestimating by approximately 80% the magnitude of the drag force. Here, our results show that the correlations among the components of driven, but steady-state, quantum systems invalidate the assumption of local thermal equilibrium, calling for a critical reexamination of this approach for describing the physics of nonequilibrium systems.

  20. Understanding the relationship between repetition priming and mere exposure.

    PubMed

    Butler, Laurie T; Berry, Dianne C

    2004-11-01

    Over the last two decades interest in implicit memory, most notably repetition priming, has grown considerably. During the same period, research has also focused on the mere exposure effect. Although the two areas have developed relatively independently, a number of studies has described the mere exposure effect as an example of implicit memory. Tacit in their comparisons is the assumption that the effect is more specifically a demonstration of repetition priming. Having noted that this assumption has attracted relatively little attention, this paper reviews current evidence and shows that it is by no means conclusive. Although some evidence is suggestive of a common underlying mechanism, even a modified repetition priming (perceptual fluency/attribution) framework cannot accommodate all of the differences between the two phenomena. Notwithstanding this, it seems likely that a version of this theoretical framework still offers the best hope of a comprehensive explanation for the mere exposure effect and its relationship to repetition priming. As such, the paper finishes by offering some initial guidance as to ways in which the perceptual fluency/attribution framework might be extended, as well as outlining important areas for future research.

  1. Irrelevance of phase size in purification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, A.H.

    1988-11-03

    Recently, Reis has suggested that it might be possible to remove a solute species completely from a small (or finely dispersed) phase by a reduction to some low but finite value of the chemical potential of that species in the medium surrounding the phase. Sciamanna and Prausnitz, while expressing some doubts about the rigor of the theoretical approach, used similar arguments to examine the possibility of obtaining ultrapurity in a small dispersed phase by equilibrium purification operations such as distillation and extraction. Here they demonstrate that Reis' original suggestion is incorrect. Furthermore, they show that, under well-defined and reasonable assumptions, the size of a phase has no influence on its purity.

  2. Far-infrared rotational emission by carbon monoxide

    NASA Technical Reports Server (NTRS)

    Mckee, C. F.; Storey, J. W. V.; Watson, D. M.; Green, S.

    1982-01-01

    Accurate theoretical collisional excitation rates are used to determine the emissivities of CO rotational lines for H2 densities of at least 10,000 per cu cm, temperatures in the range 100-3000 K, and rotational levels J of no more than 60, under the assumption that the lines are optically thin. An approximate analytic expression for the emissivities which is valid in this region is obtained. Population inversions in the lower rotational levels occur for molecular H2 densities of around 1000-100,000/cu cm and temperatures T of no more than about 50 K, provided photon trapping is unimportant. Interstellar shocks observed edge-on are a potential source of weak millimeter-wave CO maser emission.

  3. Inductive reasoning 2.0.

    PubMed

    Hayes, Brett K; Heit, Evan

    2018-05-01

    Inductive reasoning entails using existing knowledge to make predictions about novel cases. The first part of this review summarizes key inductive phenomena and critically evaluates theories of induction. We highlight recent theoretical advances, with a special emphasis on the structured statistical approach, the importance of sampling assumptions in Bayesian models, and connectionist modeling. A number of new research directions in this field are identified including comparisons of inductive and deductive reasoning, the identification of common core processes in induction and memory tasks, and induction involving category uncertainty. The implications of induction research for areas as diverse as complex decision-making and fear generalization are discussed. This article is categorized under: Psychology > Reasoning and Decision Making; Psychology > Learning. © 2017 Wiley Periodicals, Inc.

  4. Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions

    NASA Astrophysics Data System (ADS)

    Ufimtcev, E. M.

    2017-11-01

    The first part of the article presents the main provisions of the analytical approach, the time analysis method (TAM), developed for calculating the elastic dynamic response of rod structures treated as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in constructing the mathematical model of structural oscillations are given, as well as the features of calculating and recording seismic forces from earthquake accelerogram data. A system of resolving equations is given for determining the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.

  5. Losses from effluent taxes and quotas under uncertainty

    USGS Publications Warehouse

    Watson, W.D.; Ridker, R.G.

    1984-01-01

    Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.

  6. Practical quantum digital signature

    NASA Astrophysics Data System (ADS)

    Yin, Hua-Lei; Fu, Yao; Chen, Zeng-Bing

    2016-03-01

    Guaranteeing nonrepudiation, unforgeability as well as transferability of a signature is one of the most vital safeguards in today's e-commerce era. Based on fundamental laws of quantum physics, quantum digital signature (QDS) aims to provide information-theoretic security for this cryptographic task. However, to date, the previously proposed QDS protocols have been impractical due to various challenging problems and, most importantly, the requirement of authenticated (secure) quantum channels between participants. Here, we present the first quantum digital signature protocol that removes the assumption of authenticated quantum channels while remaining secure against collective attacks. Moreover, our QDS protocol can be practically implemented over more than 100 km under current mature technology as used in quantum key distribution.

  7. Understanding interprofessional education as an intergroup encounter: The use of contact theory in programme planning.

    PubMed

    Carpenter, John; Dickinson, Claire

    2016-01-01

    A key underlying assumption of interprofessional education (IPE) is that if the professions are brought together they have the opportunity to learn about each other and dispel the negative stereotypes which are presumed to hamper interprofessional collaboration in practice. This article explores the application of contact theory in IPE with reference to eight evaluation studies (1995-2012) which adopted this theoretical perspective. It proposes that educators should pay explicit attention to an intergroup perspective in designing IPE programmes and specifically to the "contact variables" identified by social psychologists studying intergroup encounters. This would increase the chances of the planned contact having a positive effect on attitude change.

  8. Statistical power as a function of Cronbach alpha of instrument questionnaire items.

    PubMed

    Heo, Moonseong; Kim, Namhee; Faith, Myles S

    2015-10-14

    In countless clinical trials, measurements of outcomes rely on instrument questionnaire items, which often suffer from measurement error that in turn reduces the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we adopt a fixed true-score variance assumption, as opposed to the usual fixed total variance assumption. That assumption is critical and practically relevant to show that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research design or setting, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of power functions for binary or ordinal item scores and under more general item correlation structures reflecting real-world situations would be a valuable future study.
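
    A minimal illustrative sketch (not the authors' closed-form results) of the mechanism the abstract describes: for k parallel items with inter-item correlation rho, the standardized alpha is k*rho/(1+(k-1)*rho), the reliability of the sum score equals alpha, the effect size observable on the scale score is attenuated by sqrt(alpha), and two-sample power therefore rises with alpha. The numbers and the normal-approximation power formula below are assumptions for illustration only.

      # Illustrative sketch only: how Cronbach's alpha can propagate into
      # two-sample power via attenuation of the standardized effect size.
      import numpy as np
      from scipy import stats

      def cronbach_alpha_parallel(k, rho):
          """Standardized alpha for k parallel items with inter-item correlation rho."""
          return k * rho / (1.0 + (k - 1) * rho)

      def power_two_sample(d_true, alpha_c, n_per_group, sig_level=0.05):
          """Normal-approximation power for a two-sample comparison on the scale score."""
          d_obs = d_true * np.sqrt(alpha_c)            # attenuation by measurement error
          z_crit = stats.norm.ppf(1 - sig_level / 2)
          return stats.norm.cdf(d_obs * np.sqrt(n_per_group / 2.0) - z_crit)

      for rho in (0.2, 0.4, 0.6):
          a = cronbach_alpha_parallel(k=10, rho=rho)
          print(f"rho={rho:.1f}  alpha={a:.2f}  power={power_two_sample(0.4, a, 60):.2f}")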

  9. Discriminating evidence accumulation from urgency signals in speeded decision making.

    PubMed

    Hawkins, Guy E; Wagenmakers, Eric-Jan; Ratcliff, Roger; Brown, Scott D

    2015-07-01

    The dominant theoretical paradigm in explaining decision making throughout both neuroscience and cognitive science is known as "evidence accumulation," the core idea being that decisions are reached by a gradual accumulation of noisy information. Although this notion has been supported by hundreds of experiments over decades of study, a recent theory proposes that the fundamental assumption of evidence accumulation requires revision. The "urgency gating" model assumes decisions are made without accumulating evidence, using only moment-by-moment information. Under this assumption, the successful history of evidence accumulation models is explained by asserting that the two models are mathematically identical in standard experimental procedures. We demonstrate that this proof of equivalence is incorrect, and that the models are not identical, even when both models are augmented with realistic extra assumptions. We also demonstrate that the two models can be perfectly distinguished in realistic simulated experimental designs, and in two real data sets; the evidence accumulation model provided the best account for one data set, and the urgency gating model for the other. A positive outcome is that the opposing modeling approaches can be fruitfully investigated without wholesale change to the standard experimental paradigms. We conclude that future research must establish whether the urgency gating model enjoys the same empirical support in the standard experimental paradigms that evidence accumulation models have gathered over decades of study. Copyright © 2015 the American Physiological Society.
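
    A toy simulation, with made-up parameters, of the two model classes contrasted above: a diffusion-style accumulator integrates noisy evidence over time, while an urgency-gating process low-pass filters the momentary evidence and multiplies it by a growing urgency signal. On a constant-evidence trial the two terminate at similar times, which is the regime in which they have been hard to tell apart; the sketch is meant only to make the structural difference concrete, not to reproduce either model's published parameterization.

      # Toy contrast between an evidence accumulator and an urgency-gating
      # process on a constant-evidence trial; all parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, noise_sd, bound, tau = 0.01, 0.5, 1.5, 0.2

      def simulate(drift, model="accumulator", t_max=5.0):
          acc = filt = t = 0.0
          while t < t_max:
              e = drift + noise_sd * rng.normal()      # momentary evidence sample
              if model == "accumulator":
                  acc += e * dt                        # integrate all past evidence
                  signal = acc
              else:
                  filt += (e - filt) * dt / tau        # low-pass filter, no integration
                  signal = filt * t                    # scaled by a growing urgency
              if abs(signal) >= bound:
                  return t
              t += dt
          return t_max

      print("accumulator RT:   ", simulate(0.8, "accumulator"))
      print("urgency-gating RT:", simulate(0.8, "urgency"))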

  10. Stress Wave Interaction Between Two Adjacent Blast Holes

    NASA Astrophysics Data System (ADS)

    Yi, Changping; Johansson, Daniel; Nyberg, Ulf; Beyglou, Ali

    2016-05-01

    Rock fragmentation by blasting is determined by the level and state of stress in the rock mass subjected to blasting. With the application of electronic detonators, some researchers have stated that it is possible to achieve improved fragmentation through stress wave superposition with very short delay times. This hypothesis was studied through theoretical analysis in this paper. First, the stress in rock mass induced by a single-hole shot was analyzed under the assumptions of infinite velocity of detonation and infinite charge length. Based on the stress analysis of a single-hole shot, the stress history and tensile stress distribution between two adjacent holes were presented for cases of simultaneous initiation and 1 ms delayed initiation via stress superposition. The results indicated that the stress wave interaction is local around the collision point. Then, the tensile stress distribution at the extended line of two adjacent blast holes was analyzed for a case of 2 ms delay. The analytical results showed that the tensile stress on the extended line increases due to stress wave superposition under the assumption that the influence of the neighboring blast hole on stress wave propagation can be neglected. However, the numerical results indicated that this assumption is unreasonable and yields contrary results. The feasibility of improving fragmentation via stress wave interaction with precise initiation was also discussed. The analysis in this paper does not support the claim that the interaction of stress waves improves fragmentation.

  11. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  12. More similarities than differences in contemporary theories of social development?: a plea for theory bridging.

    PubMed

    Leaper, Campbell

    2011-01-01

    Many contemporary theories of social development are similar and/or share complementary constructs. Yet, there have been relatively few efforts toward theoretical integration. The present chapter represents a call for increased theory bridging. The problem of theoretical fragmentation in psychology is reviewed. Seven highlighted reasons for this predicament include differences between behavioral sciences and other sciences, theoretical paradigms as social identities, the uniqueness assumption, information overload, field fixation, linguistic fragmentation, and few incentives for theoretical integration. Afterward, the feasibility of theoretical synthesis is considered. Finally, some possible directions are proposed for theoretical integration among five contemporary theories of social and gender development: social cognitive theory, expectancy-value theory, cognitive-developmental theory, gender schema theory, and self-categorization theory.

  13. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. an individual based model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size for maximizing fisheries benefits increases with movement complexity, from ~10% of the domain under the diffusive assumption to ~30% when full environmental forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA designs and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protecting pelagic fish, and providing significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests is an effective alternative for managing highly mobile pelagic stocks.
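
    A toy sketch of the simplest of the four movement scenarios above (purely diffusive movement), illustrating the kind of question these simulations ask: what fraction of time a stock spends inside an MPA of a given relative size. The 1-D domain, step size, and other numbers are invented for illustration and are far simpler than the individual based model the authors use.

      # Toy 1-D diffusive-movement sketch (far simpler than the authors' model):
      # fraction of time a diffusively moving stock spends inside an MPA.
      import numpy as np

      rng = np.random.default_rng(1)
      domain, steps, n_fish, step_sd = 100.0, 2000, 500, 1.0

      def fraction_protected(mpa_fraction):
          mpa_len = mpa_fraction * domain
          x = rng.uniform(0, domain, n_fish)                   # initial positions
          inside = 0
          for _ in range(steps):
              x = np.clip(x + rng.normal(0, step_sd, n_fish), 0, domain)  # keep in domain
              inside += np.count_nonzero(x < mpa_len)          # MPA occupies [0, mpa_len)
          return inside / (steps * n_fish)

      for f in (0.1, 0.2, 0.3):
          print(f"MPA covering {f:.0%} of the coastline -> time protected: {fraction_protected(f):.2f}")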

  14. The four-principle formulation of common morality is at the core of bioethics mediation method.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2015-08-01

    Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most clinical ethics cases in which conflict is the main issue, which implies that there is either no real ethical issue or, if there is, it is not the key to finding a resolution. I question the tacit assumption of non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background, even in what appears to be a purely practical method of mediation, cannot be overemphasized. Acknowledging the normative nature of the bioethics mediation method requires bioethics mediators to take a more critical attitude towards the norms they usually take for granted as valid.

  15. Automatic and controlled components of judgment and decision making.

    PubMed

    Ferreira, Mario B; Garcia-Marques, Leonel; Sherman, Steven J; Sherman, Jeffrey W

    2006-11-01

    The categorization of inductive reasoning into largely automatic processes (heuristic reasoning) and controlled analytical processes (rule-based reasoning) put forward by dual-process approaches to judgment under uncertainty (e.g., K. E. Stanovich & R. F. West, 2000) has been primarily a matter of assumption, with a scarcity of direct empirical findings supporting it. The present authors use the process dissociation procedure (L. L. Jacoby, 1991) to provide convergent evidence validating a dual-process perspective on judgment under uncertainty based on the independent contributions of heuristic and rule-based reasoning. Process dissociations based on experimental manipulation of variables were derived from the most relevant theoretical properties typically used to contrast the two forms of reasoning. These include processing goals (Experiment 1), cognitive resources (Experiment 2), priming (Experiment 3), and formal training (Experiment 4); the results consistently support the authors' perspective. They conclude that judgment under uncertainty is neither an automatic nor a controlled process but that it reflects both processes, with each making independent contributions.

  16. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  17. Internal density waves of shock type induced by chemoconvection in miscible reacting liquids

    NASA Astrophysics Data System (ADS)

    Bratsun, D. A.

    2017-10-01

    A theoretical explanation is provided for the phenomenon of spontaneous emergence of density waves recently observed experimentally in bilayered systems of miscible liquids placed in a narrow vertical gap of a Hele-Shaw cell in the gravitational field. The upper and lower layers represent aqueous solutions of acids and bases, respectively, whose contact leads to the onset of a neutralization reaction. The process is accompanied by a strong dependence of the reagents' diffusion coefficients on their concentrations, giving rise to the generation of local density pockets, in which convection develops. The cavities collapse under certain conditions, causing a density jump, which moves faster than typical perturbations in the medium and takes the form of a shock wave. A mathematical model of the phenomenon is proposed, which under certain assumptions can be formally reduced to the equations of motion of a compressible gas. Numerical calculations are given and compared with the experimental data.

  18. A fiber-reinforced-fluid model of anisotropic plant root cell growth

    NASA Astrophysics Data System (ADS)

    Jensen, Oliver E.; Dyson, Rosemary J.

    2009-11-01

    We present a theoretical model of a single cell in the expansion zone of the primary root of the plant Arabidopsis thaliana. The cell undergoes rapid elongation with approximately constant radius. Growth is driven by high internal turgor pressure causing viscous stretching of the cell wall, with embedded cellulose microfibrils providing the wall with strongly anisotropic properties. We represent the cell as a thin cylindrical fiber-reinforced viscous sheet between rigid end plates. Asymptotic reduction of the governing equations, under simple sets of assumptions about fiber and wall properties, yields variants of the traditional Lockhart equation that relates the axial cell growth rate to the internal pressure. The model provides insights into the geometric and biomechanical parameters underlying bulk quantities such as wall extensibility and shows how either dynamical changes in wall material properties or passive fibre reorientation may suppress cell elongation.
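
    For reference, the classical Lockhart relation that the reduced model generalizes can be written as below (notation assumed here: L is cell length, phi the wall extensibility, P the turgor pressure, and Y the yield threshold below which no irreversible extension occurs).

      % Classical Lockhart equation (assumed notation), of which the model's
      % asymptotic variants are generalizations:
      \frac{1}{L}\frac{\mathrm{d}L}{\mathrm{d}t} = \phi\,(P - Y), \qquad P > Y .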

  19. Physical Modeling of Gate-Controlled Schottky Barrier Lowering of Metal-Graphene Contacts in Top-Gated Graphene Field-Effect Transistors

    NASA Astrophysics Data System (ADS)

    Mao, Ling-Feng; Ning, Huansheng; Huo, Zong-Liang; Wang, Jin-Yan

    2015-12-01

    A new physical model of the gate-controlled Schottky barrier height (SBH) lowering in top-gated graphene field-effect transistors (GFETs) under saturation bias conditions is proposed based on the energy conservation equation with the balance assumption. The theoretical prediction of the SBH lowering agrees well with experimental data reported in the literature. The reduction of the SBH increases with increasing gate voltage and relative dielectric constant of the gate oxide, while it decreases with increasing oxide thickness, channel length and acceptor density. The magnitude of the reduction is slightly enhanced under high drain voltage. Moreover, it is found that gate oxide materials with a large relative dielectric constant (>20) have a significant effect on the gate-controlled SBH lowering, implying that the energy relaxation of channel electrons should be taken into account for modeling the SBH in GFETs.

  20. Physical Modeling of Gate-Controlled Schottky Barrier Lowering of Metal-Graphene Contacts in Top-Gated Graphene Field-Effect Transistors.

    PubMed

    Mao, Ling-Feng; Ning, Huansheng; Huo, Zong-Liang; Wang, Jin-Yan

    2015-12-17

    A new physical model of the gate-controlled Schottky barrier height (SBH) lowering in top-gated graphene field-effect transistors (GFETs) under saturation bias conditions is proposed based on the energy conservation equation with the balance assumption. The theoretical prediction of the SBH lowering agrees well with experimental data reported in the literature. The reduction of the SBH increases with increasing gate voltage and relative dielectric constant of the gate oxide, while it decreases with increasing oxide thickness, channel length and acceptor density. The magnitude of the reduction is slightly enhanced under high drain voltage. Moreover, it is found that gate oxide materials with a large relative dielectric constant (>20) have a significant effect on the gate-controlled SBH lowering, implying that the energy relaxation of channel electrons should be taken into account for modeling the SBH in GFETs.

  1. Game-theoretic approach for improving cooperation in wireless multihop networks.

    PubMed

    Ng, See-Kee; Seah, Winston K G

    2010-06-01

    Traditional networks are built on the assumption that network entities cooperate based on a mandatory network communication semantic to achieve desirable qualities such as efficiency and scalability. Over the years, this assumption has been eroded by the emergence of users that alter network behavior in a way to benefit themselves at the expense of others. At one extreme, a malicious user/node may eavesdrop on sensitive data or deliberately inject packets into the network to disrupt network operations. The solution to this generally lies in encryption and authentication. In contrast, a rational node acts only to achieve an outcome that he desires most. In such a case, cooperation is still achievable if the outcome is to the best interest of the node. The node misbehavior problem would be more pronounced in multihop wireless networks like mobile ad hoc and sensor networks, which are typically made up of wireless battery-powered devices that must cooperate to forward packets for one another. However, cooperation may be hard to maintain as it consumes scarce resources such as bandwidth, computational power, and battery power. This paper applies game theory to achieve collusive networking behavior in such network environments. In this paper, pricing, promiscuous listening, and mass punishments are avoided altogether. Our model builds on recent work in the field of Economics on the theory of imperfect private monitoring for the dynamic Bertrand oligopoly, and adapts it to the wireless multihop network. The model derives conditions for collusive packet forwarding, truthful routing broadcasts, and packet acknowledgments under a lossy wireless multihop environment, thus capturing many important characteristics of the network layer and link layer in one integrated analysis that has not been achieved previously. We also provide a proof of the viability of the model under a theoretical wireless environment. Finally, we show how the model can be applied to design a generic protocol which we call the Selfishness Resilient Resource Reservation protocol, and validate the effectiveness of this protocol in ensuring cooperation using simulations.

  2. Shifts in rotifer life history in response to stable isotope enrichment: testing theories of isotope effects on organismal growth

    PubMed Central

    2017-01-01

    In ecology, stable isotope labelling is commonly used for tracing material transfer in trophic interactions, nutrient budgets and biogeochemical processes. The main assumption in this approach is that the enrichment with a heavy isotope has no effect on the organism growth and metabolism. This assumption is, however, challenged by theoretical considerations and experimental studies on kinetic isotope effects in vivo. Here, I demonstrate profound changes in life histories of the rotifer Brachionus plicatilis fed 15N-enriched algae (0.4–5.0 at%); i.e. at the enrichment levels commonly used in ecological studies. These findings support theoretically predicted effects of heavy isotope enrichment on growth, metabolism and ageing in biological systems and underline the importance of accounting for such effects when using stable isotope labelling in experimental studies. PMID:28405367

  3. Personality psychology: lexical approaches, assessment methods, and trait concepts reveal only half of the story--why it is time for a paradigm shift.

    PubMed

    Uher, Jana

    2013-03-01

    This article develops a comprehensive philosophy-of-science for personality psychology that goes far beyond the scope of the lexical approaches, assessment methods, and trait concepts that currently prevail. One of the field's most important guiding scientific assumptions, the lexical hypothesis, is analysed from meta-theoretical viewpoints to reveal that it explicitly describes two sets of phenomena that must be clearly differentiated: 1) lexical repertoires and the representations that they encode and 2) the kinds of phenomena that are represented. Thus far, personality psychologists largely explored only the former, but have seriously neglected studying the latter. Meta-theoretical analyses of these different kinds of phenomena and their distinct natures, commonalities, differences, and interrelations reveal that personality psychology's focus on lexical approaches, assessment methods, and trait concepts entails a) erroneous meta-theoretical assumptions about what the phenomena being studied actually are, and thus how they can be analysed and interpreted, b) that contemporary personality psychology is largely based on everyday psychological knowledge, and c) a fundamental circularity in the scientific explanations used in trait psychology. These findings seriously challenge the widespread assumptions about the causal and universal status of the phenomena described by prominent personality models. The current state of knowledge about the lexical hypothesis is reviewed, and implications for personality psychology are discussed. Ten desiderata for future research are outlined to overcome the current paradigmatic fixations that are substantially hampering intellectual innovation and progress in the field.

  4. Cosmic Star Formation: A Simple Model of the SFRD(z)

    NASA Astrophysics Data System (ADS)

    Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria

    2017-12-01

    We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies, designed to represent real objects of different morphological type along the Hubble sequence, and the hierarchical growth of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that closely mimic those obtained from highly complex large-scale N-body simulations. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to the interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that, in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation, under plain assumptions mainly for the energy feedback and galactic winds, can reproduce the observational SFRD(z).

  5. Is the Surface Potential Integral of a Dipole in a Volume Conductor Always Zero? A Cloud Over the Average Reference of EEG and ERP.

    PubMed

    Yao, Dezhong

    2017-03-01

    Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the surface potential integral of a volume conductor is zero, so the average of scalp potential recordings might approximate the theoretically desired zero reference. However, such a zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a volume conductor containing a dipole may not be zero. It depends on the shape of the conductor and the orientation of the dipole. On the one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and its coverage but also, intrinsically, by the head shape. It means that reference selection still is a fundamental problem to be fixed in various EEG and ERP studies.

  6. Design and Analysis of an Electromagnetic Thrust Bearing

    NASA Technical Reports Server (NTRS)

    Banerjee, Bibhuti; Rao, Dantam K.

    1996-01-01

    A double-acting electromagnetic thrust bearing is normally used to counter the axial loads in many rotating machines that employ magnetic bearings. It essentially consists of an actuator and drive electronics. Existing thrust bearing design programs are based on several assumptions. These assumptions, however, are often violated in practice. For example, no distinction is made between maximum external loads and maximum bearing forces, which are assumed to be identical. Furthermore, it is assumed that the maximum flux density in the air gap occurs at the nominal gap position of the thrust runner. The purpose of this paper is to present a clear theoretical basis for the design of the electromagnetic thrust bearing which obviates such assumptions.

  7. Numerical distance effect size is a poor metric of approximate number system acuity.

    PubMed

    Chesney, Dana

    2018-04-12

    Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.
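
    A small sketch of the modeling logic described: under a standard ANS representation in which the probability of a correct comparison is Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))), the far-minus-near accuracy difference (one common operationalization of NDE size) is not monotonic in the Weber fraction w. The specific number pairs and w values below are illustrative assumptions, not the study's stimuli or fitted values.

      # Sketch: NDE size (far-minus-near accuracy) as a function of ANS acuity w
      # under a standard scalar-variability comparison model; values illustrative.
      import numpy as np
      from scipy.stats import norm

      def p_correct(n1, n2, w):
          return norm.cdf(abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

      def nde_size(w, near=(10, 9), far=(10, 5)):
          return p_correct(*far, w) - p_correct(*near, w)

      for w in (0.05, 0.15, 0.3, 0.6, 1.2):
          print(f"w = {w:<4}  NDE size = {nde_size(w):.3f}")   # peaks at intermediate w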

  8. Teaching for Tomorrow: An Exploratory Study of Prekindergarten Teachers' Underlying Assumptions about How Children Learn

    ERIC Educational Resources Information Center

    Flynn, Erin E.; Schachter, Rachel E.

    2017-01-01

    This study investigated eight prekindergarten teachers' underlying assumptions about how children learn, and how these assumptions were used to inform and enact instruction. By contextualizing teachers' knowledge and understanding as it is used in practice we were able to provide unique insight into the work of teaching. Participants focused on…

  9. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
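
    For orientation, the classical two-class, two-sample CIR estimator that the generalized model extends looks like the sketch below; the paper's effort-based estimators relax the assumptions this textbook form relies on. The harvest numbers are hypothetical.

      # Classical two-class, two-sample change-in-ratio (CIR) estimator,
      # shown only as background for the generalized model discussed above.
      def cir_estimate(p1, p2, removed_x, removed_total):
          """Estimate pre-removal population size N1.

          p1, p2        -- proportions of x-type animals observed before / after removal
          removed_x     -- number of x-type animals removed between the two surveys
          removed_total -- total number of animals removed
          """
          if p1 == p2:
              raise ValueError("CIR requires the subclass ratio to change (p1 != p2)")
          n1 = (removed_x - p2 * removed_total) / (p1 - p2)
          x1 = p1 * n1                     # estimated x-type animals before removal
          return n1, x1

      # Hypothetical numbers: males drop from 40% to 25% of sightings after a
      # harvest that removed 300 animals, 220 of them males.
      print(cir_estimate(p1=0.40, p2=0.25, removed_x=220, removed_total=300))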

  10. Autogenic Deposits as A Potential Recorder of High-Frequency Signals: The Role of Autogenic Processes Revisited

    NASA Astrophysics Data System (ADS)

    Li, H.; Plink-Bjorklund, P.

    2017-12-01

    Studies (e.g., Jerolmack and Paola, 2010) have suggested that autogenic processes act as a filter for high-frequency environmental signals, and the underlying assumption is that autogenic processes can cause fluctuations in sediment and water discharge that modify or shred the signal. This assumption, however, fails to recognize that autogenic processes and their final products are dynamic and that they can respond to allogenic forcings. We compile a database containing published field studies, physical experiments, and numerical modeling works, and analyze the data under different boundary conditions. Our analyses suggest different conclusions. Autogenic processes are intrinsic to the sedimentary system, and they possess distinct patterns under steady boundary conditions. Upon changing boundary conditions, the autogenic patterns are also likely to change (depending on the magnitude of the change in the boundary conditions). Therefore, the pattern change provides us with the opportunity to restore the high-frequency signals that may not pass through the transfer zone. Here we present the theoretical basis for using autogenic deposits to infer high-frequency signals as well as modern and ancient field examples, physical experiments, and modeling works to illustrate the autogenic response to allogenic forcings. The field studies show the potential of using autogenic deposits to restore short-term climatic variability. The experiments demonstrate that autogenic processes in rivers are closely linked to sediment and water discharge. The modeling examples reveal the counteracting effects of some autogenic processes to form a self-organized pattern under a set of specific boundary conditions. We also highlight the limitations and challenges that need more research efforts to restore high-frequency signals. Some critical issues include the magnitude of the signals, the effect of the interference between different signals, and the incompleteness of the autogenic deposits.

  11. NMR studies of excluded volume interactions in peptide dendrimers.

    PubMed

    Sheveleva, Nadezhda N; Markelov, Denis A; Vovk, Mikhail A; Mikhailova, Maria E; Tarasenko, Irina I; Neelov, Igor M; Lähderanta, Erkki

    2018-06-11

    Peptide dendrimers are good candidates for diverse biomedical applications due to their biocompatibility and low toxicity. The local orientational mobility of groups with different radial localization inside dendrimers is important characteristic for drug and gene delivery, synthesis of nanoparticles, and other specific purposes. In this paper we focus on the validation of two theoretical assumptions for dendrimers: (i) independence of NMR relaxations on excluded volume effects and (ii) similarity of mobilities of side and terminal segments of dendrimers. For this purpose we study 1 H NMR spin-lattice relaxation time, T 1H , of two similar peptide dendrimers of the second generation, with and without side fragments in their inner segments. Temperature dependences of 1/T 1H in the temperature range from 283 to 343 K were measured for inner and terminal groups of the dendrimers dissolved in deuterated water. We have shown that the 1/T 1H temperature dependences of inner groups for both dendrimers (with and without side fragments) practically coincide despite different densities of atoms inside these dendrimers. This result confirms the first theoretical assumption. The second assumption is confirmed by the 1/T 1H temperature dependences of terminal groups which are similar for both dendrimers.

  12. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
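
    A sketch of the kind of benchmark such a measure builds on (not the authors' exact assessment function): the unimodal response-time CDFs are combined under parallel, independent, self-terminating processing, and observed audiovisual performance is compared with that prediction. The RT distributions below are synthetic placeholders.

      # Benchmark sketch: independent parallel (race) prediction from unimodal
      # RT distributions, compared with observed audiovisual performance.
      import numpy as np

      def ecdf(sample, t_grid):
          sample = np.sort(np.asarray(sample))
          return np.searchsorted(sample, t_grid, side="right") / len(sample)

      def race_prediction(rt_a, rt_v, t_grid):
          fa, fv = ecdf(rt_a, t_grid), ecdf(rt_v, t_grid)
          return fa + fv - fa * fv          # P(min(Ta, Tv) <= t) under independence

      rng = np.random.default_rng(2)        # hypothetical RT samples, in seconds
      rt_a = rng.gamma(8, 0.06, 500)        # auditory-only trials
      rt_v = rng.gamma(9, 0.06, 500)        # visual-only trials
      rt_av = rng.gamma(7, 0.06, 500)       # audiovisual trials

      t = np.linspace(0.2, 1.2, 11)
      gain = ecdf(rt_av, t) - race_prediction(rt_a, rt_v, t)
      print(np.round(gain, 3))              # >0 where AV exceeds the race benchmark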

  13. Floating potential in electronegative plasmas for non-zero ion temperatures

    NASA Astrophysics Data System (ADS)

    Regodón, Guillermo Fernando; Fernández Palop, José Ignacio; Tejero-del-Caz, Antonio; Díaz-Cabrera, Juan Manuel; Carmona-Cabezas, Rafael; Ballesteros, Jerónimo

    2018-02-01

    The floating potential of a Langmuir probe immersed in an electronegative plasma is studied theoretically under the assumption of radial positive ion fluid movement for non-zero positive ion temperature; both cylindrical and spherical geometries are studied. The model is exactly solvable. The special characteristics of the electronegative pre-sheath are found, and the influence of the stratified electronegative pre-sheath is shown to be very small in practical applications. In view of the numerical results obtained, it is suggested that use of the floating potential is a convenient means of measuring the negative ion population density. The differences between the two radial geometries, which become very important for small probe radii of the order of magnitude of the Debye length, are studied.

  14. Do uniform tangential interfacial stresses enhance adhesion?

    NASA Astrophysics Data System (ADS)

    Menga, Nicola; Carbone, Giuseppe; Dini, Daniele

    2018-03-01

    We present theoretical arguments, based on linear elasticity and thermodynamics, to show that interfacial tangential stresses in sliding adhesive soft contacts may lead to a significant increase of the effective energy of adhesion. A sizable expansion of the contact area is predicted in conditions corresponding to such scenario. These results are easily explained and are valid under the assumptions that: (i) sliding at the interface does not lead to any loss of adhesive interaction and (ii) spatial fluctuations of frictional stresses can be considered negligible. Our results are seemingly supported by existing experiments, and show that frictional stresses may lead to an increase of the effective energy of adhesion depending on which conditions are established at the interface of contacting bodies in the presence of adhesive forces.

  15. A new delay-independent condition for global robust stability of neural networks with time delays.

    PubMed

    Samli, Ruya

    2015-06-01

    This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Controlling the light shift of the CPT resonance by modulation technique

    NASA Astrophysics Data System (ADS)

    Tsygankov, E. A.; Petropavlovsky, S. V.; Vaskovskaya, M. I.; Zibrov, S. A.; Velichansky, V. L.; Yakovlev, V. P.

    2017-12-01

    Motivated by recent developments in atomic frequency standards employing the effect of coherent population trapping (CPT), we propose a theoretical framework for the frequency modulation spectroscopy of the CPT resonances. Under realistic assumptions we provide simple yet non-trivial analytical formulae for the major spectroscopic signals such as the CPT resonance line and the in-phase/quadrature responses. We discuss the influence of the light shift and, in particular, derive a simple expression for the displacement of the resonance as a function of modulation index. The performance of the model is checked against numerical simulations, the agreement is good to perfect. The obtained results can be used in more general models accounting for light absorption in the thick optical medium.

  17. Fish optimize sensing and respiration during undulatory swimming.

    PubMed

    Akanyeti, O; Thornycroft, P J M; Lauder, G V; Yanagitsuru, Y R; Peterson, A N; Liao, J C

    2016-03-24

    Previous work in fishes considers undulation as a means of propulsion without addressing how it may affect other functions such as sensing and respiration. Here we show that undulation can optimize propulsion, flow sensing and respiration concurrently without any apparent tradeoffs when head movements are coupled correctly with the movements of the body. This finding challenges a long-held assumption that head movements are simply an unintended consequence of undulation, existing only because of the recoil of an oscillating tail. We use a combination of theoretical, biological and physical experiments to reveal the hydrodynamic mechanisms underlying this concerted optimization. Based on our results we develop a parsimonious control architecture that can be used by both undulatory animals and machines in dynamic environments.

  18. Fish optimize sensing and respiration during undulatory swimming

    PubMed Central

    Akanyeti, O.; Thornycroft, P. J. M.; Lauder, G. V.; Yanagitsuru, Y. R.; Peterson, A. N.; Liao, J. C.

    2016-01-01

    Previous work in fishes considers undulation as a means of propulsion without addressing how it may affect other functions such as sensing and respiration. Here we show that undulation can optimize propulsion, flow sensing and respiration concurrently without any apparent tradeoffs when head movements are coupled correctly with the movements of the body. This finding challenges a long-held assumption that head movements are simply an unintended consequence of undulation, existing only because of the recoil of an oscillating tail. We use a combination of theoretical, biological and physical experiments to reveal the hydrodynamic mechanisms underlying this concerted optimization. Based on our results we develop a parsimonious control architecture that can be used by both undulatory animals and machines in dynamic environments. PMID:27009352

  19. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
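
    As an aside, the baseline latitude distribution implied by the simple randomized Kepler-orbit picture can be written down directly: for a circular orbit of inclination i, the time spent near geocentric latitude phi scales as cos(phi)/sqrt(sin^2 i - sin^2 phi) for |phi| < i. The sketch below evaluates this textbook baseline numerically, not the operational risk tools themselves; the perturbing effects listed above are exactly what can pull measured footprints away from it. The inclination value is illustrative.

      # Textbook baseline latitude density for a randomized circular orbit of
      # inclination inc_deg; not the operational reentry-risk tools.
      import numpy as np

      def latitude_pdf(phi_deg, inc_deg):
          phi, inc = np.radians(phi_deg), np.radians(inc_deg)
          dens = np.zeros_like(phi)
          mask = np.abs(phi) < inc
          dens[mask] = np.cos(phi[mask]) / np.sqrt(np.sin(inc)**2 - np.sin(phi[mask])**2)
          return dens / np.trapz(dens, phi_deg)        # normalize on the degree grid

      phi = np.linspace(-60.0, 60.0, 241)
      pdf = latitude_pdf(phi, inc_deg=51.6)            # e.g. an ISS-like inclination
      band = np.abs(phi) <= 30.0
      print("probability of reentry between +/-30 deg latitude:",
            round(np.trapz(pdf[band], phi[band]), 2))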

  20. Attenuation characteristics in eastern Himalaya and southern Tibetan Plateau: An understanding of the physical state of the medium

    NASA Astrophysics Data System (ADS)

    Singh, Sagar; Singh, Chandrani; Biswas, Rahul; Mukhopadhyay, Sagarika; Sahu, Himanshu

    2016-08-01

    Attenuation characteristics of the crust in the eastern Himalaya and the southern Tibetan Plateau are investigated using high-quality data recorded by the Himalayan Nepal Tibet Seismic Experiment (HIMNT) during 2001-2003. The present study aims to provide an attenuation model that can address the physical mechanism governing the attenuation characteristics of the underlying medium. We have studied coda wave attenuation (Qc) under the single isotropic scattering model hypothesis, S wave attenuation (Qs) using the coda normalization method, and the intrinsic (Qi-1) and scattering (Qsc-1) quality factors by the multiple Lapse Time Window Analysis (MLTWA) method under the assumption of multiple isotropic scattering in a 3-D half space, all within the frequency range 2-12 Hz. All the Q values exhibit the frequency-dependent behaviour characteristic of a seismically active area. At all frequencies intrinsic absorption is predominant compared to scattering attenuation, and the seismic albedo (B0) is found to be lower than 0.5. The discrepancies between the observed and theoretical models can be explained by the depth-dependent velocity and attenuation structure as well as by the assumption of a uniform distribution of scatterers. Our results correlate well with the existing geo-tectonic model of the area, which may suggest the possible existence of trapped fluids in the crust or reflect its thermal nature. The underlying cause of the surprisingly high attenuation in the crust of the eastern Himalaya and southern Tibet makes this region distinct from the adjacent western Himalayan segment. The results are comparable with those reported for other regions globally.

  1. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    PubMed

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, like many other Sub-Saharan African countries, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice for analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model, since using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. The first part of the analysis is based on the classical Cox PH model and the second part on random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, the wealth index, the total number of children ever born, and the child's birth order. The results further indicated that the predictive performance of random survival forests built using covariates including those that violate the PH assumption was higher than that of random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
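
    A sketch of the two analyses contrasted above using openly available libraries (lifelines for the Cox PH model, scikit-survival for the random survival forest). The covariate names and the synthetic data are placeholders standing in for the DHS variables; this is not the authors' code.

      # Sketch of the Cox PH vs. random survival forest comparison on synthetic
      # placeholder data; requires the lifelines and scikit-survival packages.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from sksurv.ensemble import RandomSurvivalForest
      from sksurv.util import Surv

      rng = np.random.default_rng(0)
      n = 1000
      df = pd.DataFrame({
          "sex_child": rng.integers(0, 2, n),          # assumed covariate names
          "sex_head": rng.integers(0, 2, n),
          "births_last_year": rng.poisson(0.3, n),
          "wealth_index": rng.integers(1, 6, n),
          "birth_order": rng.integers(1, 8, n),
      })
      df["age_months"] = rng.exponential(80.0, n).clip(1.0, 59.0)   # follow-up to age five
      df["died"] = (rng.random(n) < 0.08).astype(int)               # event indicator

      covariates = ["sex_child", "sex_head", "births_last_year", "wealth_index", "birth_order"]

      # 1) Classical Cox PH model (appropriate only when the PH assumption holds)
      cph = CoxPHFitter()
      cph.fit(df[covariates + ["age_months", "died"]], duration_col="age_months", event_col="died")
      print("Cox concordance:", round(cph.concordance_index_, 3))

      # 2) Random survival forest, which does not require the PH assumption
      y = Surv.from_arrays(event=df["died"].astype(bool), time=df["age_months"])
      rsf = RandomSurvivalForest(n_estimators=300, min_samples_leaf=15, random_state=0)
      rsf.fit(df[covariates], y)
      print("RSF concordance:", round(rsf.score(df[covariates], y), 3))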

  2. Do hospital treatments represent a 'teachable moment' for quitting smoking? A study from a stage-theoretical perspective.

    PubMed

    Dohnke, B; Ziemann, C; Will, K E; Weiss-Gerlach, E; Spies, C D

    2012-01-01

    Hospital treatments are assumed to be a 'teachable moment'. This phenomenon, however, is poorly conceptualised and largely untested. A stage-theoretical perspective implies that a cueing event such as hospital treatment is a teachable moment if a stage progression, a change of cognitions, or both occur. This concept is examined in a cross-sectional study by comparing smokers in two treatment settings, an emergency department (ED) and inpatient treatment after elective surgery, with smokers in a control setting. Setting differences were hypothesised in stage distribution, as well as in levels of, and stage differences in, social-cognitive factors, controlling for possible confounders. Stage, social-cognitive factors and possible confounders were assessed in 185 ED smokers, 193 inpatient smokers and 290 control smokers. Compared to control smokers, ED and inpatient smokers were in higher stages; they perceived fewer risks and cons; inpatient smokers reported more concrete plans. Stage differences in self-efficacy among ED and inpatient smokers differed from those among control smokers, but the former corresponded more strongly to the theoretical stage assumptions. The results suggest that hospital treatments lead to a stage progression and a change of corresponding cognitions, and thus represent a 'teachable moment'. Stage-matched interventions should be provided but should consider differences in cognitions to be effective.

  3. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling, or compressed sensing (CS), works on the assumption that the underlying signal is sparse or compressible, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and reconstructs the signal with optimization-based algorithms. It is thus able to compress the data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
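
    A toy numpy sketch of the CS setting analysed above: a sparse scene is measured at a sub-Nyquist rate with a random matrix and reconstructed by l1-regularised least squares (plain ISTA here). The dimensions, noise level and solver choice are illustrative assumptions, not the paper's CS-radar simulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 8                        # scene length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
        y = A @ x_true + 0.01 * rng.normal(size=m)  # sub-Nyquist, noisy measurements

        # ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1
        lam, L = 0.02, np.linalg.norm(A, 2) ** 2    # L = Lipschitz constant of the gradient
        x = np.zeros(n)
        for _ in range(500):
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))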

  4. A theoretical framework for modeling dilution enhancement of non-reactive solutes in heterogeneous porous media.

    PubMed

    de Barros, F P J; Fiori, A; Boso, F; Bellin, A

    2015-01-01

    Spatial heterogeneity of the hydraulic properties of geological porous formations leads to erratically shaped solute clouds, thus increasing the edge area of the solute body and augmenting the dilution rate. In this study, we provide a theoretical framework to quantify dilution of a non-reactive solute within a steady state flow as affected by the spatial variability of the hydraulic conductivity. Embracing the Lagrangian concentration framework, we obtain explicit semi-analytical expressions for the dilution index as a function of the structural parameters of the random hydraulic conductivity field, under the assumptions of uniform-in-the-average flow, small injection source and weak-to-mild heterogeneity. Results show how the dilution enhancement of the solute cloud is strongly dependent on both the statistical anisotropy ratio and the heterogeneity level of the porous medium. The explicit semi-analytical solution also captures the temporal evolution of the dilution rate; for the early- and late-time limits, the proposed solution recovers previous results from the literature, while at intermediate times it reflects the increasing interplay between large-scale advection and local-scale dispersion. The performance of the theoretical framework is verified with high resolution numerical results and successfully tested against the Cape Cod field data. Copyright © 2015 Elsevier B.V. All rights reserved.
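
    A small sketch of the dilution metric used above: the dilution index E(t) = exp(-∫ p ln p dV) with p = c / ∫ c dV, evaluated here on an illustrative Gaussian plume rather than on the semi-analytical solution of the paper.

        import numpy as np

        # Illustrative 2-D concentration field (Gaussian plume) on a uniform grid.
        dx = dy = 0.5
        x, y = np.meshgrid(np.arange(0, 100, dx), np.arange(0, 50, dy), indexing="ij")
        c = np.exp(-((x - 40.0) ** 2 / 50.0 + (y - 25.0) ** 2 / 20.0))

        dV = dx * dy
        p = c / (c.sum() * dV)                                  # normalised concentration density
        entropy = -np.sum(np.where(p > 0, p * np.log(p), 0.0)) * dV
        print(f"dilution index E = {np.exp(entropy):.1f}")      # effective volume occupied by the solute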

  5. Anastasia Might Still Be Alive, But the Monarchy Is Dead.

    ERIC Educational Resources Information Center

    Eisner, Elliot W.

    1983-01-01

    Criticizes the previous article on positivism in educational thought by Denis Phillips. Takes issue with Phillips' assumption that, at the base of theoretical disputes and inquiry, there exists a final and absolute truth. (GC)

  6. The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model

    PubMed Central

    Schad, Daniel J.; Engbert, Ralf

    2012-01-01

    Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading. PMID:22754295

  7. Coherent structures in turbulence and Prandtl's mixing length theory (27th Ludwig Prandtl Memorial Lecture)

    NASA Astrophysics Data System (ADS)

    Landahl, M. T.

    1984-08-01

    The fundamental ideas behind Prandtl's famous mixing length theory are discussed in the light of newer findings from experimental and theoretical research on coherent turbulence structures in the region near solid walls. A simple theoretical model for 'flat' structures is used to examine the fundamental assumptions behind Prandtl's theory. The model is validated by comparisons with conditionally sampled velocity data obtained in recent channel flow experiments. Particular attention is given to the role of pressure fluctuations on the evolution of flat eddies. The validity of Prandtl's assumption that an element of fluid retains its streamwise momentum as it is moved around by turbulence is confirmed for flat eddies. It is demonstrated that spanwise pressure gradients give rise to a contribution to the vertical displacement of a fluid element which is proportional to the distance from the wall. This contribution is particularly important for eddies that are highly elongated in the streamwise direction.
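
    A brief numerical illustration of Prandtl's closure discussed above: with mixing length l = kappa*y, the modelled turbulent shear stress rho*l^2*|du/dy|*(du/dy) applied to a log-law profile recovers a near-constant stress rho*u_tau^2 close to the wall. The profile parameters below are illustrative only.

        import numpy as np

        kappa, u_tau, nu, rho = 0.41, 0.05, 1e-6, 1000.0   # illustrative wall-flow parameters
        y = np.linspace(1e-4, 0.05, 200)                   # wall-normal distance (m)

        u = (u_tau / kappa) * np.log(y * u_tau / nu) + 5.0 * u_tau   # log-law mean velocity
        dudy = np.gradient(u, y)

        l_m = kappa * y                                    # Prandtl mixing length
        tau_t = rho * l_m**2 * np.abs(dudy) * dudy         # modelled turbulent shear stress
        print(tau_t[:5] / (rho * u_tau**2))                # ~1 near the wall, as expected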

  8. Physical context for theoretical approaches to sediment transport magnitude-frequency analysis in alluvial channels

    NASA Astrophysics Data System (ADS)

    Sholtes, Joel; Werbylo, Kevin; Bledsoe, Brian

    2014-10-01

    Theoretical approaches to magnitude-frequency analysis (MFA) of sediment transport in channels couple continuous flow probability density functions (PDFs) with power law flow-sediment transport relations (rating curves) to produce closed-form equations relating MFA metrics such as the effective discharge, Qeff, and fraction of sediment transported by discharges greater than Qeff, f+, to statistical moments of the flow PDF and rating curve parameters. These approaches have proven useful in understanding the theoretical drivers behind the magnitude and frequency of sediment transport. However, some of their basic assumptions and findings may not apply to natural rivers and streams with more complex flow-sediment transport relationships, or to management and design scenarios, which have finite time horizons. We use simple numerical experiments to test the validity of theoretical MFA approaches in predicting the magnitude and frequency of sediment transport. Median values of Qeff and f+ generated from repeated, synthetic, finite flow series diverge from those produced with theoretical approaches using the same underlying flow PDF. The closed-form relation for f+ is a monotonically increasing function of flow variance. However, using finite flow series, we find that f+ increases with flow variance to a threshold that increases with flow record length. By introducing a sediment entrainment threshold, we present a physical mechanism for the observed diverging relationship between Qeff and flow variance in fine and coarse-bed channels. Our work shows that, through complex and threshold-driven relationships, sediment transport mode, channel morphology, flow variance, and flow record length all interact to influence estimates of which flow frequencies are most responsible for transporting sediment in alluvial channels.
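
    A numerical sketch of the MFA quantities defined above: a lognormal flow PDF is coupled with a power-law rating curve Qs = a*Q^b, the effective discharge Qeff maximises the transport-effectiveness product f(Q)*Qs(Q), and f+ is the fraction of the long-term load carried by flows above Qeff. All parameter values are hypothetical.

        import numpy as np
        from scipy import stats

        mu, sigma = 1.0, 0.8                 # lognormal flow PDF parameters (hypothetical)
        a, b = 0.01, 2.0                     # sediment rating curve Qs = a * Q**b (hypothetical)

        Q = np.linspace(0.01, 100.0, 20000)
        f = stats.lognorm.pdf(Q, s=sigma, scale=np.exp(mu))   # flow frequency density
        effectiveness = f * a * Q**b                          # long-term transport per unit discharge

        i_eff = np.argmax(effectiveness)
        f_plus = np.trapz(effectiveness[i_eff:], Q[i_eff:]) / np.trapz(effectiveness, Q)
        print(f"Qeff = {Q[i_eff]:.2f}, f+ = {f_plus:.2f}")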

  9. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow

    NASA Astrophysics Data System (ADS)

    Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.

    2018-05-01

    In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes’ law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes’ model, and the causes of these apparent ‘anomalies’ (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse—both theoretically and experimentally—the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknesses of each one. The proposed experiment can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
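
    A short sketch of the baseline model discussed above: Stokes' terminal velocity for a falling sphere, with a particle Reynolds number check that signals when the laminar (Stokes) assumption breaks down. The sphere and fluid properties are illustrative and deliberately chosen so that the check fails.

        # Stokes' law: v_t = 2 r^2 (rho_s - rho_f) g / (9 mu), valid only for Re << 1.
        g = 9.81
        r, rho_s = 2e-3, 7800.0          # sphere radius (m) and density (kg/m^3), illustrative
        rho_f, mu = 1000.0, 1.0e-3       # water density (kg/m^3) and dynamic viscosity (Pa s)

        v_t = 2 * r**2 * (rho_s - rho_f) * g / (9 * mu)
        Re = rho_f * v_t * (2 * r) / mu
        print(f"v_t = {v_t:.2f} m/s, Re = {Re:.0f}"
              + ("  -> Stokes' law not applicable" if Re > 1 else ""))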

  10. On how role versatility boosts an STI.

    PubMed

    Cortés, Andrés J

    2017-12-19

    The prevalence of HIV-1 infection has declined over recent decades in western heterosexual populations. However, among men who have sex with men (MSM) the prevalence is still high, despite intensive campaigns and treatment programs that keep infected men undetectable (Beyrer et al. 2012). Promiscuity and condom fatigue (Adam et al. 2005), which are not unique to the MSM community, are making unprotected anal intercourse (UAI) more common and sexually transmitted infections (STIs) presumably harder to track. Yet, MSM communities are peculiar in the sense that men can adopt fixed (insertive or receptive) or versatile (both practices) roles. Earlier theoretical work (Wiley & Herschkorn 1989, Van Druten et al. 1992, Trichopoulos et al. 1998) predicted that the transmission of HIV-1 would be enhanced in MSM populations engaged more in role versatility than in role segregation, in which fixed roles are predominantly adopted. These predictions were based on the assumption that the probability of acquisition from unprotected insertive anal (UIA) sex was negligible. However, as later shown (Vittinghoff et al. 1999, Goodreau et al. 2005), this assumption is inappropriate and HIV-1 may still be acquired via UIA sex. Here I show, through a stochastic model, that the increase in HIV-1 prevalence among MSM due to role versatility holds under the stronger assumption of bidirectional virus transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Automatic Spike Sorting Using Tuning Information

    PubMed Central

    Ventura, Valérie

    2011-01-01

    Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802
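
    A minimal sketch of the central idea above: a spike's identity is assigned from a posterior that combines the Gaussian waveform likelihood of each unit with that unit's tuning (its firing rate at the current value of the covariate). The unit parameters are assumed to have been estimated already, e.g. by the EM algorithm the letter describes; all numbers below are hypothetical.

        import numpy as np
        from scipy.stats import multivariate_normal

        # Assumed already fitted: per-unit waveform means/covariance and tuning curves.
        wave_means = np.array([[1.0, 0.2], [0.3, 1.1]])     # 2 units, 2 waveform features
        wave_cov = 0.2 * np.eye(2)
        tuning = [lambda x: 5 + 20 * np.cos(x),             # unit 0 firing rate (Hz) vs covariate x
                  lambda x: 5 + 20 * np.sin(x)]             # unit 1

        def classify(waveform, x):
            """Posterior over units for one spike: mixture weight proportional to the
            unit's firing rate at covariate x (tuning) times its waveform likelihood."""
            post = np.array([tuning[k](x) *
                             multivariate_normal.pdf(waveform, wave_means[k], wave_cov)
                             for k in range(2)])
            return post / post.sum()

        print(classify(np.array([0.9, 0.3]), x=0.2))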

  12. Automatic spike sorting using tuning information.

    PubMed

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.

  13. Sparse PCA with Oracle Property.

    PubMed

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of the covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a √(s/n) statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.
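
    As context for the estimators above, the sketch below solves the standard semidefinite (Fantope) relaxation of sparse PCA with a plain l1 penalty using cvxpy; it uses synthetic data and the usual convex surrogate, not the paper's novel regularisers.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        d, n, k = 20, 100, 2
        Z = rng.normal(size=(n, d))
        S = Z.T @ Z / n                                   # sample covariance of synthetic data

        X = cp.Variable((d, d), symmetric=True)
        lam = 0.1
        constraints = [X >> 0, np.eye(d) - X >> 0, cp.trace(X) == k]     # Fantope constraint
        objective = cp.Maximize(cp.trace(S @ X) - lam * cp.sum(cp.abs(X)))
        cp.Problem(objective, constraints).solve(solver=cp.SCS)

        # The top-k eigenvectors of the solution estimate the sparse principal subspace.
        _, eigvecs = np.linalg.eigh(X.value)
        print(eigvecs[:, -k:])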

  14. Sparse PCA with Oracle Property

    PubMed Central

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    2014-01-01

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a √(s/n) statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971

  15. Revealing patterns of cultural transmission from frequency data: equilibrium and non-equilibrium assumptions

    PubMed Central

    Crema, Enrico R.; Kandler, Anne; Shennan, Stephen

    2016-01-01

    A long tradition of cultural evolutionary studies has developed a rich repertoire of mathematical models of social learning. Early studies have laid the foundation of more recent endeavours to infer patterns of cultural transmission from observed frequencies of a variety of cultural data, from decorative motifs on potsherds to baby names and musical preferences. While this wide range of applications provides an opportunity for the development of generalisable analytical workflows, archaeological data present new questions and challenges that require further methodological and theoretical discussion. Here we examine the decorative motifs of Neolithic pottery from an archaeological assemblage in Western Germany, and argue that the widely used (and relatively undiscussed) assumption that observed frequencies are the result of a system in equilibrium conditions is unwarranted, and can lead to incorrect conclusions. We analyse our data with a simulation-based inferential framework that can overcome some of the intrinsic limitations in archaeological data, as well as handle both equilibrium conditions and instances where the mode of cultural transmission is time-variant. Results suggest that none of the models examined can produce the observed pattern under equilibrium conditions, and suggest, instead, temporal shifts in the patterns of cultural transmission. PMID:27974814

  16. Target thrust measurement for applied-field magnetoplasmadynamic thruster

    NASA Astrophysics Data System (ADS)

    Wang, B.; Yang, W.; Tang, H.; Li, Z.; Kitaeva, A.; Chen, Z.; Cao, J.; Herdrich, G.; Zhang, K.

    2018-07-01

    In this paper, we present a flat target thrust stand which is designed to measure the thrust of a steady-state applied-field magnetoplasmadynamic thruster (AF-MPDT). In our experiments we varied target-thruster distances and target size to analyze their influence on the target thrust measurement results. The obtained thrust-distance curves increase to a local maximum and then decrease with increasing distance, which means that the plume of the AF-MPDT can still accelerate outside the thruster exit. The peak positions are related to the target sizes: larger targets push the peak positions further from the thruster and decrease the measurement errors. To further improve the reliability of measurement results, a thermal equilibrium assumption combined with Knudsen’s cosine law is adopted to analyze the error caused by the back stream of plume particles. Under this assumption, the error caused by particle backflow is no more than 3.6% and the largest difference between the measured thrust and the theoretical thrust is 14%. Moreover, it was verified that target thrust measurement can disturb the working of the AF-MPD thruster, but the influence on the thrust measurement result is no more than 1% in our experiment.

  17. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    PubMed Central

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  18. Work-site health promotion: an economic model.

    PubMed

    Patton, J P

    1991-08-01

    Despite a burgeoning interest in and acceptance of corporate health promotion, the overall economic effects of these programs are not clear. Although ultimate resolution of this question awaits detailed empiric research, a theoretical approach can be useful in structuring the problem and understanding the critical issues. The financial model presented views the firm as a value-maximizing enterprise and evaluates health promotion as a use of corporate assets. The model projects the benefits and costs to the firm of a 7-year health promotion program under a variety of assumptions regarding the employee mix and the effects of the health promotion program on health and productivity. The analysis reveals that the base case assumptions result in a program that creates value for the firm when the cost is less than $193 per participating employee per year. Firms with a highly productive, difficult to replace, and older employee group are most likely to find health promotion to be a good investment. Productivity gains produce the majority of the economic benefits of the program. Effects on health care expense alone are projected to be relatively small. Gains from reduction in employee mortality or retiree health expense are found to be insignificant in this model.
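
    A toy sketch of the kind of present-value reasoning behind the model above: per-participant benefits over a 7-year program are discounted and compared with an equal annual cost to find the break-even cost per participating employee. All dollar figures and the discount rate are hypothetical, not the article's inputs.

        rate = 0.10                                       # firm's discount rate (hypothetical)
        benefits = [0.0, 150.0, 200.0, 220.0, 220.0, 220.0, 220.0]   # $/participant/year, ramping up

        benefit_pv = sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))
        annuity = sum(1 / (1 + rate) ** t for t in range(1, 8))
        breakeven = benefit_pv / annuity                  # constant annual cost with the same PV
        print(f"program creates value while annual cost < ${breakeven:.0f} per participant")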

  19. Processing capacity under perceptual and cognitive load: a closer look at load theory.

    PubMed

    Fitousi, Daniel; Wenger, Michael J

    2011-06-01

    Variations in perceptual and cognitive demands (load) play a major role in determining the efficiency of selective attention. According to load theory (Lavie, Hirst, de Fockert, & Viding, 2004) these factors (a) improve or hamper selectivity by altering the way resources (e.g., processing capacity) are allocated, and (b) tap resources rather than data limitations (Norman & Bobrow, 1975). Here we provide an extensive and rigorous set of tests of these assumptions. Predictions regarding changes in processing capacity are tested using the hazard function of the response time (RT) distribution (Townsend & Ashby, 1978; Wenger & Gibson, 2004). The assumption that load taps resource rather than data limitations is examined using measures of sensitivity and bias drawn from signal detection theory (Swets, 1964). All analyses were performed at two levels: the individual and the aggregate. Hypotheses regarding changes in processing capacity were confirmed at the level of the aggregate. Hypotheses regarding resource and data limitations were not completely supported at either level of analysis. In all of the analyses, we observed substantial individual differences. In sum, the results suggest a need to expand the theoretical vocabulary of load theory, rather than a need to discard it.
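
    A small sketch of the capacity measure mentioned above: the hazard function of the RT distribution, h(t) = f(t)/(1 - F(t)), estimated from binned response times; the simulated RTs below are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        rts = 300 + rng.gamma(shape=2.0, scale=80.0, size=2000)   # simulated RTs (ms)

        edges = np.arange(300, 1201, 25)
        counts, _ = np.histogram(rts, bins=edges)
        n = len(rts)
        f = counts / (n * np.diff(edges))                              # density estimate f(t)
        surv = 1 - np.concatenate(([0], np.cumsum(counts)[:-1])) / n   # survivor S(t) at bin start
        hazard = f / np.maximum(surv, 1e-9)                            # h(t) = f(t) / S(t)

        for t, h in zip(edges[:5], hazard[:5]):
            print(f"t = {t:4d} ms   h(t) = {h:.4f}")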

  20. Revealing patterns of cultural transmission from frequency data: equilibrium and non-equilibrium assumptions

    NASA Astrophysics Data System (ADS)

    Crema, Enrico R.; Kandler, Anne; Shennan, Stephen

    2016-12-01

    A long tradition of cultural evolutionary studies has developed a rich repertoire of mathematical models of social learning. Early studies have laid the foundation of more recent endeavours to infer patterns of cultural transmission from observed frequencies of a variety of cultural data, from decorative motifs on potsherds to baby names and musical preferences. While this wide range of applications provides an opportunity for the development of generalisable analytical workflows, archaeological data present new questions and challenges that require further methodological and theoretical discussion. Here we examine the decorative motifs of Neolithic pottery from an archaeological assemblage in Western Germany, and argue that the widely used (and relatively undiscussed) assumption that observed frequencies are the result of a system in equilibrium conditions is unwarranted, and can lead to incorrect conclusions. We analyse our data with a simulation-based inferential framework that can overcome some of the intrinsic limitations in archaeological data, as well as handle both equilibrium conditions and instances where the mode of cultural transmission is time-variant. Results suggest that none of the models examined can produce the observed pattern under equilibrium conditions, and suggest, instead, temporal shifts in the patterns of cultural transmission.

  1. The Vocational Turn in Adult Literacy Education and the Impact of the International Adult Literacy Survey

    NASA Astrophysics Data System (ADS)

    Druine, Nathalie; Wildemeersch, Danny

    2000-09-01

    The authors critically examine some of the underlying epistemological and theoretical assumptions of the IALS. In doing so, they distinguish between two basic orientations towards literacy. First, the standard approach (of which IALS is an example) subscribes to the possibility of measuring literacy as abstract, cognitive skills, and endorses the claim that there is an important relationship between literacy skills and economic success in the so-called 'knowledge society.' The second, called a socio-cultural approach, insists on the contextual and power-related character of people's literacy practices. The authors further illustrate that the assumptions of the IALS are rooted in a neo-liberal ideology that forces all members of society to adjust to the exigencies of the globalised economy. In the current, contingent conditions of the risk society, however, it does not seem very wise to limit the learning of adults to enhancing labour-market competencies. Adult education should relate to the concrete literacy practices people already have in their lives. It should make its learners co-responsible actors of their own learning process and participants in a democratic debate on defining the kind of society people want to build.

  2. A new solution of measuring thermal response of prestressed concrete bridge girders for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Jiao, Pengcheng; Borchani, Wassim; Hasni, Hassene; Lajnef, Nizar

    2017-08-01

    This study develops a novel buckling-based mechanism to measure the thermal response of prestressed concrete bridge girders under continuous temperature changes for structural health monitoring. The measuring device consists of a bilaterally constrained beam and a piezoelectric polyvinylidene fluoride transducer that is attached to the beam. Under thermally induced displacement, the slender beam buckles. The post-buckling events are deployed to convert the low-rate and low-frequency excitations into localized high-rate motions and, therefore, the attached piezoelectric transducer is triggered to generate electrical signals. With the measuring device attached to concrete bridge girders, the electrical signals are used to detect the thermal response of concrete bridges. Finite element simulations are conducted to obtain the displacement of prestressed concrete girders under thermal loads. Using the thermally induced displacement as input, experiments are carried out on a 3D printed measuring device to investigate the buckling response and corresponding electrical signals. A theoretical model is developed based on the nonlinear Euler-Bernoulli beam theory and large deformation assumptions to predict the buckling mode transitions of the beam. Based on the presented theoretical model, the geometric properties of the measuring device can be designed such that its buckling response is effectively controlled. Consequently, the thermally induced displacement can be designed as a limit state to detect excessive thermal loads on concrete bridge girders. The proposed solution effectively measures the thermal response of concrete bridges.
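
    A back-of-the-envelope sketch related to the buckling trigger described above: for a slender strip clamped at both ends, the Euler critical load and the axial end-shortening at which the first buckling mode appears follow from classical beam theory. The geometry and modulus below are hypothetical, not the parameters of the authors' 3D printed device.

        import numpy as np

        E = 2.5e9                              # Young's modulus (Pa), illustrative polymer
        L, b, h = 0.20, 0.010, 0.001           # length, width, thickness (m), illustrative
        A, I = b * h, b * h**3 / 12.0
        L_eff = L / 2.0                        # effective length for clamped-clamped ends

        P_cr = np.pi**2 * E * I / L_eff**2     # Euler critical load
        delta_cr = P_cr * L / (E * A)          # imposed axial displacement at buckling onset
        print(f"P_cr ≈ {P_cr:.2f} N, buckling onset at ≈ {delta_cr * 1e6:.0f} µm of end displacement")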

  3. Grammar Is a System That Characterizes Talk in Interaction

    PubMed Central

    Ginzburg, Jonathan; Poesio, Massimo

    2016-01-01

    Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule-governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide precise characterizations of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279

  4. Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei

    2018-05-01

    We extend our MCMC sampler of 3D EoR simulations, 21CMMC, to perform parameter estimation directly on light-cones of the cosmic 21cm signal. This brings theoretical analysis one step closer to matching the expected 21-cm signal from next-generation interferometers like HERA and the SKA. Using the light-cone version of 21CMMC, we quantify biases in the recovered astrophysical parameters obtained from the 21cm power spectrum when using the co-eval approximation to fit a mock 3D light-cone observation. While ignoring the light-cone effect does not bias the parameters under most assumptions, it can still underestimate their uncertainties. However, significant biases (~few - 10 σ) are possible if all of the following conditions are met: (i) foreground removal is very efficient, allowing large physical scales (k ~ 0.1 Mpc-1) to be used in the analysis; (ii) theoretical modelling is accurate to ~10 per cent in the power spectrum amplitude; and (iii) the 21cm signal evolves rapidly (i.e. the epochs of reionisation and heating overlap significantly).

  5. Nonlocal transport in the presence of transport barriers

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, D.

    2013-10-01

    There is experimental, numerical, and theoretical evidence that transport in plasmas can, under certain circumstances, depart from the standard local, diffusive description. Examples include fast pulse propagation phenomena in perturbative experiments, non-diffusive scaling in L-mode plasmas, and non-Gaussian statistics of fluctuations. From the theoretical perspective, non-diffusive transport descriptions follow from the relaxation of the restrictive assumptions (locality, scale separation, and Gaussian/Markovian statistics) at the foundation of diffusive models. We discuss an alternative class of models able to capture some of the observed non-diffusive transport phenomenology. The models are based on a class of nonlocal, integro-differential operators that provide a unifying framework to describe non-Fickian scale-free transport and non-Markovian (memory) effects. We study the interplay between nonlocality and internal transport barriers (ITBs) in perturbative transport, including cold edge pulses and power modulation. Of particular interest is the nonlocal 'tunnelling' of perturbations through ITBs. Also, flux-gradient diagrams are discussed as diagnostics to detect nonlocal transport processes in numerical simulations and experiments. Work supported by the US Department of Energy.
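
    A one-dimensional sketch contrasting the local Fickian flux with a nonlocal integro-differential flux of the kind described above, q(x) = -∫ K(x - x') dT/dx' dx', using an illustrative fat-tailed kernel; it is not the specific operator used in the cited work.

        import numpy as np

        x = np.linspace(-10, 10, 401)
        dx = x[1] - x[0]
        T = np.exp(-x**2)                          # illustrative profile (e.g. temperature)
        dTdx = np.gradient(T, dx)

        chi = 1.0
        q_local = -chi * dTdx                      # Fickian (local) flux

        K = 1.0 / (np.pi * (1.0 + x**2))           # fat-tailed (Lorentzian) kernel, illustrative
        K /= np.trapz(K, x)                        # normalise on the truncated domain
        q_nonlocal = -chi * np.convolve(dTdx, K, mode="same") * dx

        print(np.abs(q_local).max(), np.abs(q_nonlocal).max())   # nonlocal flux is smoothed and spread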

  6. Crystal structure and mechanical strain in polycrystalline ferrite films on polycrystalline sapphire substrates

    NASA Astrophysics Data System (ADS)

    Bogdanovich, M. P.

    1996-10-01

    We have grown films of magnesium, lithium, zinc, and nickel-zinc ferrites, varying in thickness from 0.5 to 8 μm, on polycrystalline sapphire substrates by coating the surface of the substrate with an aqueous nitric acid solution of salts of the elements which compose the ferrite. The lattice parameter of the ferrite film increases with the film thickness and becomes constant at thicknesses greater than 8 μm. We have determined the ratio of the theoretical strength limit to the macroscopic one in the film based on the change in the interplanar distance d220 and the lattice parameter calculated from it, under the assumption that the change Δa(h) = a∞ − a(h) results from macroscopic stresses in the film. This ratio shows that when h = 1 μm the macroscopic stresses in the film are an order of magnitude smaller than the theoretical strength limit. At larger film thicknesses this macroscopic stress becomes even lower, and at the external surface of thick films it goes completely to zero.

  7. Theoretical studies in support of the 3M-vapor transport (PVTOS-) experiments

    NASA Technical Reports Server (NTRS)

    Rosner, Daniel E.; Keyes, David E.

    1989-01-01

    Results are reported for a preliminary theoretical study of the coupled mass-, momentum-, and heat-transfer conditions expected within small ampoules used to grow oriented organic solid (OS-) films, by physical vapor transport (PVT) in microgravity environments. It is shown that previous studies made restrictive assumptions (e.g., smallness of ΔT/T, equality of molecular diffusivities) that are not valid under PVTOS conditions, whereas the important phenomena of sidewall gas creep, Soret transport of the organic vapor, and large vapor phase supersaturations associated with the large prevailing temperature gradients were not previously considered. Rational estimates are made of the molecular transport properties relevant to copper-phthalocyanine monomeric vapor in a gas mixture containing H2(g) and Xe(g). Efficient numerical methods have been developed and are outlined and illustrated here for making steady axisymmetric gas flow calculations within such ampoules, allowing for realistic ΔT/Tw values, and even corrections to the Navier-Stokes-Fourier 'closure' of the governing continuum differential equations. High-priority follow-on studies are outlined based on these new results.

  8. Albertian errors in head-mounted displays: I. Choice of eye-point location for a near- or far-field task visualization.

    PubMed

    Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali

    2004-06-01

    A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.

  9. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    ERIC Educational Resources Information Center

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  10. Humans display a reduced set of consistent behavioral phenotypes in dyadic games.

    PubMed

    Poncela-Casasnovas, Julia; Gutiérrez-Roig, Mario; Gracia-Lázaro, Carlos; Vicens, Julian; Gómez-Gardeñes, Jesús; Perelló, Josep; Moreno, Yamir; Duch, Jordi; Sánchez, Angel

    2016-08-01

    Socially relevant situations that involve strategic interactions are widespread among animals and humans alike. To study these situations, theoretical and experimental research has adopted a game theoretical perspective, generating valuable insights about human behavior. However, most of the results reported so far have been obtained from a population perspective and considered one specific conflicting situation at a time. This makes it difficult to extract conclusions about the consistency of individuals' behavior when facing different situations and to define a comprehensive classification of the strategies underlying the observed behaviors. We present the results of a lab-in-the-field experiment in which subjects face four different dyadic games, with the aim of establishing general behavioral rules dictating individuals' actions. By analyzing our data with an unsupervised clustering algorithm, we find that all the subjects conform, with a large degree of consistency, to a limited number of behavioral phenotypes (envious, optimist, pessimist, and trustful), with only a small fraction of undefined subjects. We also discuss the possible connections to existing interpretations based on a priori theoretical approaches. Our findings provide a relevant contribution to the experimental and theoretical efforts toward the identification of basic behavioral phenotypes in a wider set of contexts without aprioristic assumptions regarding the rules or strategies behind actions. From this perspective, our work contributes to a fact-based approach to the study of human behavior in strategic situations, which could be applied to simulating societies, policy-making scenario building, and even a variety of business applications.

  11. NMR properties of 3He-A in biaxially anisotropic aerogel

    NASA Astrophysics Data System (ADS)

    Dmitriev, V. V.; Krasnikhin, D. A.; Senin, A. A.; Yudin, A. N.

    2012-12-01

    The theoretical model of G.E. Volovik for the A-like phase of 3He in aerogel suggests the formation of a Larkin-Imry-Ma state of the Anderson-Brinkmann-Morel order parameter. Most results of NMR studies of the A-like phase are in good agreement with this model under the assumption of uniaxial anisotropy, except for some experiments in weakly anisotropic aerogel samples. We demonstrate that these results can be described within the same model under the assumption of biaxial anisotropy. The anisotropy parameters in these experiments can be determined from the NMR data.

  12. Derivation of an applied nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitts, Todd Alan; Laine, Mark Richard; Schwarz, Jens

    We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator.

  13. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
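
    A small sketch of the sampling metric referred to above: taxon coverage density is the proportion of filled cells in the taxon-by-gene presence matrix. The matrix below is a made-up example, not one of the 26 surveyed data sets.

        import numpy as np

        # Rows = taxa, columns = genes; 1 means sequence data are present for that taxon/gene.
        presence = np.array([
            [1, 1, 0, 1],
            [1, 0, 0, 1],
            [1, 1, 1, 1],
            [0, 1, 0, 1],
        ])

        coverage = presence.mean()
        print(f"taxon coverage density = {coverage:.2f}")   # terraces were common below ~0.90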

  14. Inhibition of return: A phenomenon in search of a definition and a theoretical framework.

    PubMed

    Dukewich, Kristie R; Klein, Raymond M

    2015-07-01

    In a study of scientific nomenclature, we explore the diversity of perspectives researchers endorse for the phenomenon of inhibition of return (IOR). IOR is often described as an effect whereby people are slower to respond to a target presented at a recently stimulated or inspected location as compared to a target presented at a new location. Since its discovery, scores of papers have been published on IOR, and researchers have proposed, accepted and rejected a variety of potential causes, mechanisms, effects and components for the phenomenon. Experts in IOR were surveyed about their opinions regarding various aspects of IOR and the literature exploring it. We found variety both between and within experts surveyed, suggesting that most researchers hold implicit, and often quite unique assumptions about IOR. These widely varied assumptions may be hindering the creation or acceptance of a central theoretical framework regarding IOR; and this variety may portend that what has been given the label "IOR" may be more than one phenomenon requiring more than one theoretical explanation. We wonder whether scientific progress in domains other than IOR might be affected by too broad (or perhaps too narrow) a range of phenomena to which our nomenclature is applied.

  15. Using a matrix-analytical approach to synthesizing evidence solved incompatibility problem in the hierarchy of evidence.

    PubMed

    Walach, Harald; Loef, Martin

    2015-11-01

    The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Empathy for Carnivores

    DTIC Science & Technology

    2013-05-23

    this section. It helps to identify and remove cognitive biases and unseen assumptions. THEORETICAL TIES TO EMPATHY We had been hopelessly labouring ...attempts to gauge the satisfaction of future circumstances and their sustainability in light of the anticipated future system as a whole. In simulating his

  17. Reporting dream experience: Why (not) to be skeptical about dream reports

    PubMed Central

    Windt, Jennifer M.

    2013-01-01

    Are dreams subjective experiences during sleep? Is it like something to dream, or is it only like something to remember dreams after awakening? Specifically, can dream reports be trusted to reveal what it is like to dream, and should they count as evidence for saying that dreams are conscious experiences at all? The goal of this article is to investigate the relationship between dreaming, dream reporting and subjective experience during sleep. I discuss different variants of philosophical skepticism about dream reporting and argue that they all fail. Consequently, skeptical doubts about the trustworthiness of dream reports are misguided, and for systematic reasons. I suggest an alternative, anti-skeptical account of the trustworthiness of dream reports. On this view, dream reports, when gathered under ideal reporting conditions and according to the principle of temporal proximity, are trustworthy (or transparent) with respect to conscious experience during sleep. The transparency assumption has the status of a methodologically necessary default assumption and is theoretically justified because it provides the best explanation of dream reporting. At the same time, it inherits important insights from the discussed variants of skepticism about dream reporting, suggesting that the careful consideration of these skeptical arguments ultimately leads to a positive account of why and under which conditions dream reports can and should be trusted. In this way, moderate distrust can be fruitfully combined with anti-skepticism about dream reporting. Several perspectives for future dream research and for the comparative study of dreaming and waking experience are suggested. PMID:24223542

  18. Reporting dream experience: Why (not) to be skeptical about dream reports.

    PubMed

    Windt, Jennifer M

    2013-01-01

    Are dreams subjective experiences during sleep? Is it like something to dream, or is it only like something to remember dreams after awakening? Specifically, can dream reports be trusted to reveal what it is like to dream, and should they count as evidence for saying that dreams are conscious experiences at all? The goal of this article is to investigate the relationship between dreaming, dream reporting and subjective experience during sleep. I discuss different variants of philosophical skepticism about dream reporting and argue that they all fail. Consequently, skeptical doubts about the trustworthiness of dream reports are misguided, and for systematic reasons. I suggest an alternative, anti-skeptical account of the trustworthiness of dream reports. On this view, dream reports, when gathered under ideal reporting conditions and according to the principle of temporal proximity, are trustworthy (or transparent) with respect to conscious experience during sleep. The transparency assumption has the status of a methodologically necessary default assumption and is theoretically justified because it provides the best explanation of dream reporting. At the same time, it inherits important insights from the discussed variants of skepticism about dream reporting, suggesting that the careful consideration of these skeptical arguments ultimately leads to a positive account of why and under which conditions dream reports can and should be trusted. In this way, moderate distrust can be fruitfully combined with anti-skepticism about dream reporting. Several perspectives for future dream research and for the comparative study of dreaming and waking experience are suggested.

  19. Re-examination of globally flat space-time.

    PubMed

    Feldman, Michael R

    2013-01-01

    In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.

  20. Information Theoretic Characterization of Physical Theories with Projective State Space

    NASA Astrophysics Data System (ADS)

    Zaopo, Marco

    2015-08-01

    Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under taking mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed the less information can be stored in the system using that state as a logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of complex numbers. If the field of numbers is trivial and contains only one element we obtain classical theory. This result indicates that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.

  1. A review of the findings and theories on surface size effects on visual attention

    PubMed Central

    Peschel, Anne O.; Orquin, Jacob L.

    2013-01-01

    That surface size has an impact on attention has been well-known in advertising research for almost a century; however, theoretical accounts of this effect have been sparse. To address this issue, we review studies on surface size effects on eye movements in this paper. While most studies find that large objects are more likely to be fixated, receive more fixations, and are fixated faster than small objects, a comprehensive explanation of this effect is still lacking. To bridge the theoretical gap, we relate the findings from this review to three theories of surface size effects suggested in the literature: a linear model based on the assumption of random fixations (Lohse, 1997), a theory of surface size as visual saliency (Pieters et al., 2007), and a theory based on competition for attention (CA; Janiszewski, 1998). We furthermore suggest a fourth model – demand for attention – which we derive from the theory of CA by revising the underlying model assumptions. In order to test the models against each other, we reanalyze data from an eye tracking study investigating surface size and saliency effects on attention. The reanalysis revealed little support for the first three theories while the demand for attention model showed a much better alignment with the data. We conclude that surface size effects may best be explained as an increase in object signal strength which depends on object size, number of objects in the visual scene, and object distance to the center of the scene. Our findings suggest that advertisers should take into account how objects in the visual scene interact in order to optimize attention to, for instance, brands and logos. PMID:24367343

  2. A review of the findings and theories on surface size effects on visual attention.

    PubMed

    Peschel, Anne O; Orquin, Jacob L

    2013-12-09

    That surface size has an impact on attention has been well-known in advertising research for almost a century; however, theoretical accounts of this effect have been sparse. To address this issue, we review studies on surface size effects on eye movements in this paper. While most studies find that large objects are more likely to be fixated, receive more fixations, and are fixated faster than small objects, a comprehensive explanation of this effect is still lacking. To bridge the theoretical gap, we relate the findings from this review to three theories of surface size effects suggested in the literature: a linear model based on the assumption of random fixations (Lohse, 1997), a theory of surface size as visual saliency (Pieters et al., 2007), and a theory based on competition for attention (CA; Janiszewski, 1998). We furthermore suggest a fourth model - demand for attention - which we derive from the theory of CA by revising the underlying model assumptions. In order to test the models against each other, we reanalyze data from an eye tracking study investigating surface size and saliency effects on attention. The reanalysis revealed little support for the first three theories while the demand for attention model showed a much better alignment with the data. We conclude that surface size effects may best be explained as an increase in object signal strength which depends on object size, number of objects in the visual scene, and object distance to the center of the scene. Our findings suggest that advertisers should take into account how objects in the visual scene interact in order to optimize attention to, for instance, brands and logos.

  3. Re-Examination of Globally Flat Space-Time

    NASA Astrophysics Data System (ADS)

    Feldman, Michael R.

    2013-11-01

    In the following, we offer a novel approach to modeling the observed effects currently attributed to the theoretical concepts of "dark energy," "dark matter," and "dark flow." Instead of assuming the existence of these theoretical concepts, we take an alternative route and choose to redefine what we consider to be inertial motion as well as what constitutes an inertial frame of reference in flat space-time. We adopt none of the features of our current cosmological models except for the requirement that special and general relativity be local approximations within our revised definition of inertial systems. Implicit in our ideas is the assumption that at "large enough" scales one can treat objects within these inertial systems as point-particles having an insignificant effect on the curvature of space-time. We then proceed under the assumption that time and space are fundamentally intertwined such that time- and spatial-translational invariance are not inherent symmetries of flat space-time (i.e., observable clock rates depend upon both relative velocity and spatial position within these inertial systems) and take the geodesics of this theory in the radial Rindler chart as the proper characterization of inertial motion. With this commitment, we are able to model solely with inertial motion the observed effects expected to be the result of "dark energy," "dark matter," and "dark flow." In addition, we examine the potential observable implications of our theory in a gravitational system located within a confined region of an inertial reference frame, subsequently interpreting the Pioneer anomaly as support for our redefinition of inertial motion. As well, we extend our analysis into quantum mechanics by quantizing for a real scalar field and find a possible explanation for the asymmetry between matter and antimatter within the framework of these redefined inertial systems.

  4. Unified nano-mechanics based probabilistic theory of quasibrittle and brittle structures: II. Fatigue crack growth, lifetime and scaling

    NASA Astrophysics Data System (ADS)

    Le, Jia-Liang; Bažant, Zdeněk P.

    2011-07-01

    This paper extends the theoretical framework presented in the preceding Part I to the lifetime distribution of quasibrittle structures failing at the fracture of one representative volume element under constant amplitude fatigue. The probability distribution of the critical stress amplitude is derived for a given number of cycles and a given minimum-to-maximum stress ratio. The physical mechanism underlying the Paris law for fatigue crack growth is explained under certain plausible assumptions about the damage accumulation in the cyclic fracture process zone at the tip of subcritical crack. This law is then used to relate the probability distribution of critical stress amplitude to the probability distribution of fatigue lifetime. The theory naturally yields a power-law relation for the stress-life curve (S-N curve), which agrees with Basquin's law. Furthermore, the theory indicates that, for quasibrittle structures, the S-N curve must be size dependent. Finally, physical explanation is provided to the experimentally observed systematic deviations of lifetime histograms of various ceramics and bones from the Weibull distribution, and their close fits by the present theory are demonstrated.

  5. Contextual Multi-armed Bandits under Feature Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo

    We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T⁷/₈(log(dT)+K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T²/₃√(log d)) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the ‘true’ coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.
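
    The abstract names its algorithms without giving them; as a rough illustration of the problem setting only (not of NLinRel itself), the sketch below runs a ridge-regression, UCB-style linear bandit whose observed feature vectors are corrupted by Gaussian noise. All names and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, K, T = 5, 10, 2000                      # feature dimension, actions, rounds
    theta = rng.normal(size=d)                 # unknown reward coefficient vector
    theta /= np.linalg.norm(theta)
    latent = rng.normal(size=(K, d))           # noise-free action features
    sigma_feat, sigma_rew, lam, alpha = 0.1, 0.1, 1.0, 1.0

    A = lam * np.eye(d)                        # ridge-regression statistics
    b = np.zeros(d)
    pseudo_regret = 0.0
    best_mean = (latent @ theta).max()
    for t in range(T):
        observed = latent + sigma_feat * rng.normal(size=(K, d))   # noisy features
        theta_hat = np.linalg.solve(A, b)
        A_inv = np.linalg.inv(A)
        width = np.sqrt(np.einsum('kd,de,ke->k', observed, A_inv, observed))
        k = int(np.argmax(observed @ theta_hat + alpha * width))   # optimistic choice
        reward = latent[k] @ theta + sigma_rew * rng.normal()
        A += np.outer(observed[k], observed[k])
        b += reward * observed[k]
        pseudo_regret += best_mean - latent[k] @ theta

    print(f"cumulative pseudo-regret over {T} rounds: {pseudo_regret:.1f}")
    ```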

  6. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
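
    The paper's augmented case-control sampling distribution is not reproduced here; as a generic illustration of the small-sample bootstrap idea mentioned in the abstract, the sketch below computes a parametric-bootstrap confidence interval for the log odds ratio from hypothetical 2x2 counts. The counts and the resampling scheme are assumptions, not the authors' method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # hypothetical exposure counts [exposed, unexposed] for cases and controls
    cases, controls = np.array([40, 60]), np.array([25, 75])

    def log_odds_ratio(case_counts, control_counts):
        a, b = case_counts + 0.5               # 0.5 continuity correction
        c, d = control_counts + 0.5
        return np.log((a * d) / (b * c))

    B, n_case, n_ctrl = 5000, cases.sum(), controls.sum()
    boot = np.empty(B)
    for i in range(B):
        # parametric bootstrap: redraw exposure counts from fitted binomials with the
        # case/control totals fixed (a generic scheme, not the paper's augmented
        # case-control sampling distribution)
        e_case = rng.binomial(n_case, cases[0] / n_case)
        e_ctrl = rng.binomial(n_ctrl, controls[0] / n_ctrl)
        boot[i] = log_odds_ratio(np.array([e_case, n_case - e_case]),
                                 np.array([e_ctrl, n_ctrl - e_ctrl]))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"log OR = {log_odds_ratio(cases, controls):.3f}, "
          f"95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
    ```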

  7. Inferring the Mode of Selection from the Transient Response to Demographic Perturbations

    NASA Astrophysics Data System (ADS)

    Balick, Daniel; Do, Ron; Reich, David; Sunyaev, Shamil

    2014-03-01

    Despite substantial recent progress in theoretical population genetics, most models work under the assumption of a constant population size. Deviations from fixed population sizes are ubiquitous in natural populations, many of which experience population bottlenecks and re-expansions. The non-equilibrium dynamics introduced by a large perturbation in population size are generally viewed as a confounding factor. In the present work, we take advantage of the transient response to a population bottleneck to infer features of the mode of selection and the distribution of selective effects. We develop an analytic framework and a corresponding statistical test that qualitatively differentiates between alleles under additive and those under recessive or more general epistatic selection. This statistic can be used to bound the joint distribution of selective effects and dominance effects in any diploid sexual organism. We apply this technique to human population genetic data, and severely restrict the space of allowed selective coefficients in humans. Additionally, one can test a set of functionally or medically relevant alleles for the primary mode of selection, or determine the local regional variation in dominance coefficients along the genome.

  8. Optimal Regulation of Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea

    This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.

  9. Limits on spin-dependent WIMP-nucleon cross sections from the XENON10 experiment.

    PubMed

    Angle, J; Aprile, E; Arneodo, F; Baudis, L; Bernstein, A; Bolozdynya, A; Coelho, L C C; Dahl, C E; DeViveiros, L; Ferella, A D; Fernandes, L M P; Fiorucci, S; Gaitskell, R J; Giboni, K L; Gomez, R; Hasty, R; Kastens, L; Kwong, J; Lopes, J A M; Madden, N; Manalaysay, A; Manzur, A; McKinsey, D N; Monzani, M E; Ni, K; Oberlack, U; Orboeck, J; Plante, G; Santorelli, R; dos Santos, J M F; Shagin, P; Shutt, T; Sorensen, P; Schulte, S; Winant, C; Yamashita, M

    2008-08-29

    XENON10 is an experiment to directly detect weakly interacting massive particles (WIMPs), which may comprise the bulk of the nonbaryonic dark matter in our Universe. We report new results for spin-dependent WIMP-nucleon interactions with ¹²⁹Xe and ¹³¹Xe from 58.6 live days of operation at the Laboratori Nazionali del Gran Sasso. Based on the nonobservation of a WIMP signal in 5.4 kg of fiducial liquid xenon mass, we exclude previously unexplored regions in the theoretically allowed parameter space for neutralinos. We also exclude a heavy Majorana neutrino with a mass in the range of approximately 10 GeV/c² to 2 TeV/c² as a dark matter candidate under standard assumptions for its density and distribution in the galactic halo.

  10. The surface brightness of reflection nebulae. Ph.D. Thesis, Dec. 1972

    NASA Technical Reports Server (NTRS)

    Rush, W. F.

    1974-01-01

    Hubble's equation relating the maximum apparent angular extent of a reflection nebula to the apparent magnitude of the illuminating star has been reconsidered under a set of less restrictive assumptions. A computational technique is developed which permits the use of fits to observed m, log a values to determine the albedo of the particles composing reflection nebulae, providing only that one assumes a particular phase function. Despite the fact that all orders of scattering, anisotropic phase functions, and illumination by the general stellar field are considered, the albedo which is determined for reflection nebulae by this method appears larger than that for interstellar particles in general. The possibility that the higher surface brightness might be due to a continuous fluorescence mechanism is considered both theoretically and observationally.

  11. Modelling nonlinearity in piezoceramic transducers: From equations to nonlinear equivalent circuits.

    PubMed

    Parenthoine, D; Tran-Huu-Hue, L-P; Haumesser, L; Vander Meulen, F; Lematre, M; Lethiecq, M

    2011-02-01

    Quadratic nonlinear equations of a piezoelectric element under the assumptions of 1D vibration and weak nonlinearity are derived by the perturbation theory. It is shown that the nonlinear response can be represented by controlled sources that are added to the classical hexapole used to model piezoelectric ultrasonic transducers. As a consequence, equivalent electrical circuits can be used to predict the nonlinear response of a transducer taking into account the acoustic loads on the rear and front faces. A generalisation of nonlinear equivalent electrical circuits to cases including passive layers and propagation media is then proposed. Experimental results, in terms of second harmonic generation, on a coupled resonator are compared to theoretical calculations from the proposed model. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. On localizing a capsule endoscope using magnetic sensors.

    PubMed

    Moussakhani, Babak; Ramstad, Tor; Flåm, John T; Balasingham, Ilangko

    2012-01-01

    In this work, localizing a capsule endoscope within the gastrointestinal tract is addressed. It is assumed that the capsule is equipped with a magnet, and that a magnetic sensor network measures the flux from this magnet. We assume no prior knowledge on the source location, and that the measurements collected by the sensors are corrupted by thermal Gaussian noise only. Under these assumptions, we focus on determining the Cramer-Rao Lower Bound (CRLB) for the location of the endoscope. Thus, we are not studying specific estimators, but rather the theoretical performance of an optimal one. It is demonstrated that the CRLB is a function of the distance and angle between the sensor network and the magnet. By studying the CRLB with respect to different sensor array constellations, we are able to indicate favorable constellations.
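
    As a minimal illustration of how a position CRLB follows from the Fisher information of a sensor array, the sketch below uses a simplified scalar 1/r³ field model (an assumption standing in for the paper's full magnetic dipole flux model) and a hypothetical planar sensor constellation, then inverts the numerically computed Fisher information.

    ```python
    import numpy as np

    def field(x, sensors):
        # simplified scalar "flux" model: amplitude falls off as 1/r^3 (dipole-like);
        # an illustrative stand-in for the full vector dipole model
        r = np.linalg.norm(sensors - x, axis=1)
        return 1.0 / r**3

    def crlb_position(x, sensors, sigma=1e-3, eps=1e-6):
        # numerical Jacobian of the measurement model w.r.t. the source position
        J = np.zeros((len(sensors), 3))
        for k in range(3):
            dx = np.zeros(3); dx[k] = eps
            J[:, k] = (field(x + dx, sensors) - field(x - dx, sensors)) / (2 * eps)
        fim = J.T @ J / sigma**2              # Fisher information (Gaussian noise)
        return np.trace(np.linalg.inv(fim))   # sum of position-variance lower bounds

    # hypothetical 3x3 planar sensor array 10 cm above the origin, source 5 cm inside
    sensors = np.array([[i, j, 0.10] for i in (-0.05, 0.0, 0.05)
                                     for j in (-0.05, 0.0, 0.05)])
    print("trace of position CRLB:", crlb_position(np.array([0.0, 0.0, -0.05]), sensors))
    ```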

  13. Transmission dynamics of Bacillus thuringiensis infecting Plodia interpunctella: a test of the mass action assumption with an insect pathogen.

    PubMed

    Knell, R J; Begon, M; Thompson, D J

    1996-01-22

    Central to theoretical studies of host-pathogen population dynamics is a term describing transmission of the pathogen. This usually assumes that transmission is proportional to the density of infectious hosts or particles and of susceptible individuals. We tested this assumption with the bacterial pathogen Bacillus thuringiensis infecting larvae of Plodia interpunctella, the Indian meal moth. Transmission was found to increase in a more than linear way with host density in fourth and fifth instar P. interpunctella, and to decrease with the density of infectious cadavers in the case of fifth instar larvae. Food availability was shown to play an important part in this process. Therefore, on a number of counts, the usual assumption was found not to apply in our experimental system.
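
    The contrast between the classical mass-action transmission term and the more-than-linear host-density dependence reported here can be illustrated numerically; the sketch below integrates both forms with purely illustrative parameter values (it does not reproduce the experimental fits).

    ```python
    from scipy.integrate import solve_ivp

    # Two toy transmission terms for susceptible hosts S exposed to infectious cadavers P:
    # the classical mass-action form beta*S*P, and a power-law form beta*S**p * P**q
    # (p > 1, q < 1) of the kind the experiments point towards. All values are assumed.
    beta, p, q = 0.002, 1.5, 0.7
    P = 50.0                                   # density of infectious cadavers (held fixed)

    def mass_action(t, y): return [-beta * y[0] * P]
    def power_law(t, y):   return [-beta * y[0]**p * P**q]

    t_span, S0 = (0.0, 5.0), [100.0]
    for name, rhs in [("mass action", mass_action), ("power law", power_law)]:
        sol = solve_ivp(rhs, t_span, S0)
        print(f"{name:11s}: susceptibles remaining at t=5: {sol.y[0, -1]:.1f}")
    ```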

  14. Double density dynamics: realizing a joint distribution of a physical system and a parameter system

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Moritsugu, Kei

    2015-11-01

    To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops under nonequilibrium environment-describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example representing a 1D harmonic oscillator is provided to validate the method being applied to the temperature parameters.

  15. Extended Huygens-Fresnel principle and optical waves propagation in turbulence: discussion.

    PubMed

    Charnotskii, Mikhail

    2015-07-01

    Extended Huygens-Fresnel principle (EHF) is currently the most common technique used in theoretical studies of optical propagation in turbulence. A recent review paper [J. Opt. Soc. Am. A 31, 2038 (2014), doi:10.1364/JOSAA.31.002038] cites several dozen papers that are exclusively based on the EHF principle. We revisit the foundations of the EHF, and show that it is burdened by very restrictive assumptions that make it valid only under weak scintillation conditions. We compare the EHF to the less-restrictive Markov approximation and show that both theories deliver identical results for the second moment of the field, rendering the EHF essentially worthless. For the fourth moment of the field, the EHF principle is accurate under weak scintillation conditions, but is known to provide erroneous results for strong scintillation conditions. In addition, since the EHF does not obey the energy conservation principle, its results cannot be accurate for scintillations of partially coherent beam waves.

  16. Reactions of Criegee Intermediates with Non-Water Greenhouse Gases: Implications for Metal Free Chemical Fixation of Carbon Dioxide.

    PubMed

    Kumar, Manoj; Francisco, Joseph S

    2017-09-07

    High-level theoretical calculations suggest that a Criegee intermediate preferably interacts with carbon dioxide compared to two other greenhouse gases, nitrous oxide and methane. The results also suggest that the interaction between Criegee intermediates and carbon dioxide involves a cycloaddition reaction, which results in the formation of a cyclic carbonate-type adduct with a barrier of 6.0-14.0 kcal/mol. These results are in contrast to a previous assumption that the reaction occurs barrierlessly. The subsequent decomposition of the cyclic adduct into formic acid and carbon dioxide follows both concerted and stepwise mechanisms. The latter mechanism has been overlooked previously. Under formic acid catalysis, the concerted decomposition of the cyclic carbonate may be favored under tropospheric conditions. Considering that there is a strong nexus between carbon dioxide levels in the atmosphere and global warming, the high reactivity of Criegee intermediates could be utilized for designing efficient carbon capture technologies.

  17. Feminist Theories and Media Studies.

    ERIC Educational Resources Information Center

    Steeves, H. Leslie

    1987-01-01

    Discusses the assumptions that ground radical, liberal, and socialist feminist theoretical frameworks, and reviews feminist media research. Argues that liberal feminism speaks only to White, heterosexual, middle and upper class women and is incapable of addressing most women's concerns. Concludes that socialist feminism offers the greatest…

  18. Ethnographic/Qualitative Research: Theoretical Perspectives and Methodological Strategies.

    ERIC Educational Resources Information Center

    Butler, E. Dean

    This paper examines the metatheoretical concepts associated with ethnographic/qualitative educational inquiry and overviews the more commonly utilized research designs, data collection methods, and analytical approaches. The epistemological and ontological assumptions of this newer approach differ greatly from those of the traditional educational…

  19. The Case for a Hierarchical Cosmology

    ERIC Educational Resources Information Center

    Vaucouleurs, G. de

    1970-01-01

    The development of modern theoretical cosmology is presented and some questionable assumptions of orthodox cosmology are pointed out. Suggests that recent observations indicate that hierarchical clustering is a basic factor in cosmology. The implications of hierarchical models of the universe are considered. Bibliography. (LC)

  20. 7 CFR 1957.2 - Transfer with assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Rural Housing Trust 1987-1, and who are eligible for an FmHA or its successor agency under Public Law 103-354 § 502 loan will be given the same priority by FmHA or its successor agency under Public Law.... FmHA or its successor agency under Public Law 103-354 regulations governing transfers and assumptions...

  1. Conditionally Increased Acoustic Pressures in Nonfetal Diagnostic Ultrasound Examinations Without Contrast Agents: A Preliminary Assessment

    PubMed Central

    Nightingale, Kathryn R.; Church, Charles C.; Harris, Gerald; Wear, Keith A.; Bailey, Michael R.; Carson, Paul L.; Jiang, Hui; Sandstrom, Kurt L.; Szabo, Thomas L.; Ziskin, Marvin C.

    2016-01-01

    The mechanical index (MI) has been used by the US Food and Drug Administration (FDA) since 1992 for regulatory decisions regarding the acoustic output of diagnostic ultrasound equipment. Its formula is based on predictions of acoustic cavitation under specific conditions. Since its implementation over 2 decades ago, new imaging modes have been developed that employ unique beam sequences exploiting higher-order acoustic phenomena, and, concurrently, studies of the bioeffects of ultrasound under a range of imaging scenarios have been conducted. In 2012, the American Institute of Ultrasound in Medicine Technical Standards Committee convened a working group of its Output Standards Subcommittee to examine and report on the potential risks and benefits of the use of conditionally increased acoustic pressures (CIP) under specific diagnostic imaging scenarios. The term “conditionally” is included to indicate that CIP would be considered on a per-patient basis for the duration required to obtain the necessary diagnostic information. This document is a result of that effort. In summary, a fundamental assumption in the MI calculation is the presence of a preexisting gas body. For tissues not known to contain preexisting gas bodies, based on theoretical predications and experimentally reported cavitation thresholds, we find this assumption to be invalid. We thus conclude that exceeding the recommended maximum MI level given in the FDA guidance could be warranted without concern for increased risk of cavitation in these tissues. However, there is limited literature assessing the potential clinical benefit of exceeding the MI guidelines in these tissues. The report proposes a 3-tiered approach for CIP that follows the model for employing elevated output in magnetic resonance imaging and concludes with summary recommendations to facilitate Institutional Review Board (IRB)-monitored clinical studies investigating CIP in specific tissues. PMID:26112617
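
    For reference, the regulatory mechanical index discussed above is conventionally computed (an assumption based on the standard FDA/AIUM definition, which the abstract does not restate) as

    \[
      \mathrm{MI} = \frac{p_{r.3}}{\sqrt{f_c}},
    \]

    where \(p_{r.3}\) is the derated (0.3 dB cm\(^{-1}\) MHz\(^{-1}\)) peak rarefactional pressure in MPa and \(f_c\) is the center frequency in MHz, so raising the permitted MI at a fixed frequency directly raises the permitted rarefactional pressure.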

  2. Timing and proximate causes of mortality in wild bird populations: testing Ashmole’s hypothesis

    USGS Publications Warehouse

    Barton, Daniel C.; Martin, Thomas E.

    2012-01-01

    Fecundity in birds is widely recognized to increase with latitude across diverse phylogenetic groups and regions, yet the causes of this variation remain enigmatic. Ashmole’s hypothesis is one of the most broadly accepted explanations for this pattern. This hypothesis suggests that increasing seasonality leads to increasing overwinter mortality due to resource scarcity during the lean season (e.g., winter) in higher latitude climates. This mortality is then thought to yield increased per-capita resources for breeding that allow larger clutch sizes at high latitudes. Support for this hypothesis has been based on indirect tests, whereas the underlying mechanisms and assumptions remain poorly explored. We used a meta-analysis of over 150 published studies to test two underlying and critical assumptions of Ashmole’s hypothesis: first, that adult mortality is greatest during the season of greatest resource scarcity, and second, that most mortality is caused by starvation. We found that the lean season (winter) was generally not the season of greatest mortality. Instead, spring or summer was most frequently the season of greatest mortality. Moreover, monthly survival rates were not explained by monthly productivity, again opposing predictions from Ashmole’s hypothesis. Finally, predation, rather than starvation, was the most frequent proximate cause of mortality. Our results do not support the mechanistic predictions of Ashmole’s hypothesis, and suggest alternative explanations of latitudinal variation in clutch size should remain under consideration. Our meta-analysis also highlights a paucity of data available on the timing and causes of mortality in many bird populations, particularly tropical bird populations, despite the clear theoretical and empirical importance of such data.

  3. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  4. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  5. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates.

    PubMed

    Egorov, Vladimir V

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  6. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    PubMed Central

    2017-01-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E. PMID:28572984

  7. Nature of the optical band shapes in polymethine dyes and H-aggregates: dozy chaos and excitons. Comparison with dimers, H*- and J-aggregates

    NASA Astrophysics Data System (ADS)

    Egorov, Vladimir V.

    2017-05-01

    Results on the theoretical explanation of the shape of optical bands in polymethine dyes, their dimers and aggregates are summarized. The theoretical dependence of the shape of optical bands for the dye monomers in the vinylogous series in line with a change in the solvent polarity is considered. A simple physical (analytical) model of the shape of optical absorption bands in H-aggregates of polymethine dyes is developed based on taking the dozy-chaos dynamics of the transient state and the Frenkel exciton effect in the theory of molecular quantum transitions into account. As an example, the details of the experimental shape of one of the known H-bands are well reproduced by this analytical model under the assumption that the main optical chromophore of H-aggregates is a tetramer resulting from the two most probable processes of inelastic binary collisions in sequence: first, monomers between themselves, and then, between the resulting dimers. The obtained results indicate that in contrast with the compact structure of J-aggregates (brickwork structure), the structure of H-aggregates is not the compact pack-of-cards structure, as stated in the literature, but a loose alternate structure. Based on this theoretical model, a simple general (analytical) method for treating the more complex shapes of optical bands in polymethine dyes in comparison with the H-band under consideration is proposed. This method mirrors the physical process of molecular aggregates forming in liquid solutions: aggregates are generated in the most probable processes of inelastic multiple binary collisions between polymethine species generally differing in complexity. The results obtained are given against a background of the theoretical results on the shape of optical bands in polymethine dyes and their aggregates (dimers, H*- and J-aggregates) previously obtained by V.V.E.

  8. Artifacts, assumptions, and ambiguity: Pitfalls in comparing experimental results to numerical simulations when studying electrical stimulation of the heart.

    PubMed

    Roth, Bradley J.

    2002-09-01

    Insidious experimental artifacts and invalid theoretical assumptions complicate the comparison of numerical predictions and observed data. Such difficulties are particularly troublesome when studying electrical stimulation of the heart. During unipolar stimulation of cardiac tissue, the artifacts include nonlinearity of membrane dyes, optical signals blocked by the stimulating electrode, averaging of optical signals with depth, lateral averaging of optical signals, limitations of the current source, and the use of excitation-contraction uncouplers. The assumptions involve electroporation, membrane models, electrode size, the perfusing bath, incorrect model parameters, the applicability of a continuum model, and tissue damage. Comparisons of theory and experiment during far-field stimulation are limited by many of these same factors, plus artifacts from plunge and epicardial recording electrodes and assumptions about the fiber angle at an insulating boundary. These pitfalls must be overcome in order to understand quantitatively how the heart responds to an electrical stimulus. (c) 2002 American Institute of Physics.

  9. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.
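
    As an illustration of the collusion strategy described above (averaging followed by a Gaussian test channel), the sketch below simulates Gaussian fingerprints and a simple correlation detector in the private, known-covertext setting; signal lengths, variances, and the detector itself are assumptions made for illustration rather than the paper's capacity-achieving construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, users, colluders = 4096, 20, 5
    sigma_host, sigma_fp, sigma_noise = 10.0, 1.0, 1.0

    host = sigma_host * rng.normal(size=n)            # Gaussian cover signal
    fps = sigma_fp * rng.normal(size=(users, n))      # independent Gaussian fingerprints
    copies = host + fps                               # fingerprinted copies

    # averaging collusion attack followed by a Gaussian test channel (additive noise)
    coalition = rng.choice(users, size=colluders, replace=False)
    forgery = copies[coalition].mean(axis=0) + sigma_noise * rng.normal(size=n)

    # correlation detector with the host removed (the "private" setting,
    # where the decoder knows the cover signal)
    scores = fps @ (forgery - host) / n
    ranked = np.argsort(scores)[::-1]
    print("true coalition  :", sorted(coalition.tolist()))
    print("top-5 by score  :", sorted(ranked[:colluders].tolist()))
    ```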

  10. A generating function approach to HIV transmission with dynamic contact rates

    DOE PAGES

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    2014-04-24

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease eliminations from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual’s behavior is constant over time. For this research, we derive expressions for both R0 and probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but, that as the rate of change in sexual behavior increases both R0 and the probability of an epidemic also decrease.
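
    The branching-process link between R0 and the probability of an epidemic can be made concrete with a small generating-function calculation: the extinction probability is the smallest fixed point of the offspring probability generating function. The sketch below uses an assumed negative-binomial offspring distribution as a stand-in for the paper's dynamic-contact-rate generating function; it illustrates the general mechanism only.

    ```python
    from scipy.optimize import brentq

    def extinction_probability(pgf, tol=1e-9):
        """Smallest root in [0, 1] of G(q) = q for an offspring pgf G (branching process)."""
        f = lambda q: pgf(q) - q
        hi = 1.0 - tol
        if f(hi) >= 0.0:             # subcritical or critical: extinction is certain
            return 1.0
        return brentq(f, 0.0, hi)    # supercritical: unique root below 1

    # negative-binomial offspring pgf with mean R0 and dispersion k -- an assumed
    # stand-in for the paper's dynamic-contact generating function
    def nb_pgf(q, R0, k=0.5):
        return (1.0 + R0 * (1.0 - q) / k) ** (-k)

    for R0 in (0.8, 1.5, 3.0):
        q_ext = extinction_probability(lambda s: nb_pgf(s, R0))
        print(f"R0 = {R0:3.1f}: P(major epidemic from one case) ~ {1.0 - q_ext:.3f}")
    ```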

  11. A generating function approach to HIV transmission with dynamic contact rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease eliminations from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual’s behavior is constant over time. For this research, we derive expressions for both R0 and probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but, that as the rate of change in sexual behavior increases both R0 and the probability of an epidemic also decrease.

  12. Some Remarks on the Theory of Political Education. German Studies Notes.

    ERIC Educational Resources Information Center

    Holtmann, Antonius

    This theoretical discussion explores pedagogical assumptions of political education in West Germany. Three major methodological orientations are discussed: the normative-ontological, empirical-analytical, and dialectical-historical. The author recounts the aims, methods, and basic presuppositions of each of these approaches. Topics discussed…

  13. Epilepsy: An Overview for the Special Educator.

    ERIC Educational Resources Information Center

    Nivens, Maryruth K.

    Intended to dispel myths concerning epilepsy, the paper discusses the history, symptoms and characteristics, possible causes and current medication approaches to the condition, theoretical assumptions are traced, and a definition explained. Charts depict the location of discharge; seizure patterns and accompanying physical/psychological symptoms;…

  14. Cognitive-Developmental and Behavior-Analytic Theories: Evolving into Complementarity

    ERIC Educational Resources Information Center

    Overton, Willis F.; Ennis, Michelle D.

    2006-01-01

    Historically, cognitive-developmental and behavior-analytic approaches to the study of human behavior change and development have been presented as incompatible alternative theoretical and methodological perspectives. This presumed incompatibility has been understood as arising from divergent sets of metatheoretical assumptions that take the form…

  15. Dreaming and Schizophrenia.

    ERIC Educational Resources Information Center

    Stickney, Jeffrey L.

    Parallels between dream states and schizophrenia suggest that the study of dreams may offer some information about schizophrenia. A major theoretical assumption of the research on dreaming and schizophrenia is that, in schizophrenics, the dream state intrudes on the awake state creating a dreamlike symptomatology. This theory, called the REM…

  16. Escaping the Tyranny of Belief

    ERIC Educational Resources Information Center

    Wiswell, Albert K.; Wells, C. Leanne

    2004-01-01

    This study describes an action research case study through which the dynamics of identifying and changing strongly held assumptions illustrate the differences between experiences that serve to strengthen beliefs from those that lead to learning. Theoretical considerations are presented linking cognitive schema, action science, attribution theory,…

  17. Relative coronal abundances derived from X-ray observations 3: The effect of cascades on the relative intensity of Fe (XVII) line fluxes, and a revised iron abundance

    NASA Technical Reports Server (NTRS)

    Walker, A. B. C., Jr.; Rugge, H. R.; Weiss, K.

    1974-01-01

    Permitted lines in the optically thin coronal X-ray spectrum were analyzed to find the distribution of coronal material, as a function of temperature, without special assumptions concerning coronal conditions. The resonance lines of N, O, Ne, Na, Mg, Al, Si, S, and Ar which dominate the quiet coronal spectrum below 25 Å were observed. Coronal models were constructed and the relative abundances of these elements were determined. The intensity in the lines of the 2p-3d transitions near 15 Å was used in conjunction with these coronal models, with the assumption of coronal excitation, to determine the Fe XVII abundance. The relative intensities of the 2p-3d Fe XVII lines observed in the corona agreed with theoretical prediction. Using a more complete theoretical model, and higher resolution observations, a revised calculation of iron abundance relative to hydrogen of 0.000026 was made.

  18. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
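
    A minimal Monte Carlo sketch of the effect discussed here: pairs of individuals share a survival fate with some probability, and the empirical variance of the naive survival estimate is compared with the variance expected under independence. Parameter values are assumed for illustration and do not reproduce the brant analysis or the Kaplan-Meier/Cormack-Jolly-Seber machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_pairs, s, rho, reps = 100, 0.85, 0.6, 20000   # pairs, survival prob, pair corr., replicates

    def correlated_pair(p, rho, size, rng):
        """Draw pairs of Bernoulli(p) fates with correlation rho via a shared component."""
        shared = rng.random(size) < rho             # with prob rho the pair shares one fate
        fate_shared = rng.random(size) < p
        a = np.where(shared, fate_shared, rng.random(size) < p)
        b = np.where(shared, fate_shared, rng.random(size) < p)
        return a, b

    est = np.empty(reps)
    for i in range(reps):
        a, b = correlated_pair(s, rho, n_pairs, rng)
        est[i] = (a.sum() + b.sum()) / (2 * n_pairs)   # naive survival-rate estimate

    binom_var = s * (1 - s) / (2 * n_pairs)            # variance if all fates were independent
    print(f"empirical var = {est.var():.5f}, independence var = {binom_var:.5f}, "
          f"ratio = {est.var() / binom_var:.2f}")
    ```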

  19. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume mixing region discretization to solicit transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. Inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  20. Co-Dependency: An Examination of Underlying Assumptions.

    ERIC Educational Resources Information Center

    Myer, Rick A.; And Others

    1991-01-01

    Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…

  1. Of mental models, assumptions and heuristics: The case of acids and acid strength

    NASA Astrophysics Data System (ADS)

    McClary, Lakeisha Michelle

    This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data was analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another model. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.

  2. Why is it Doing That? - Assumptions about the FMS

    NASA Technical Reports Server (NTRS)

    Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.

  3. Differentiating and defusing theoretical Ecology's criticisms: A rejoinder to Sagoff's reply to Donhauser (2016).

    PubMed

    Donhauser, Justin

    2017-06-01

    In a (2016) paper in this journal, I defuse allegations that theoretical ecological research is problematic because it relies on teleological metaphysical assumptions. Mark Sagoff offers a formal reply. In it, he concedes that I succeeded in establishing that ecologists abandoned robust teleological views long ago and that they use teleological characterizations as metaphors that aid in developing mechanistic explanations of ecological phenomena. Yet, he contends that I did not give enduring criticisms of theoretical ecology a fair shake in my paper. He says this is because enduring criticisms center on concerns about the nature of ecological networks and forces, the instrumentality of ecological laws and theoretical models, and the relation between theoretical and empirical methods in ecology that that paper does not broach. Below I set apart the distinct criticisms Sagoff presents in his commentary and respond to each in turn. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Satellite Power Systems (SPS) space transportation cost analysis and evaluation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A picture of Space Power Systems space transportation costs at the present time is given with respect to accuracy as stated, reasonableness of the methods used, assumptions made, and uncertainty associated with the estimates. The approach used consists of examining space transportation costs from several perspectives to perform a variety of sensitivity analyses or reviews and examine the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review including a review of stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used for estimating the cost uncertainty derived. The cost estimates used as the best current estimate are included.

  5. Traumatic memories, eye movements, phobia, and panic: a critical note on the proliferation of EMDR.

    PubMed

    Muris, P; Merckelbach, H

    1999-01-01

    In recent years, Eye Movement Desensitization and Reprocessing (EMDR) has become increasingly popular as a treatment method for Posttraumatic Stress Disorder (PTSD). The current article critically evaluates three recurring assumptions in the EMDR literature: (a) the notion that traumatic memories are fixed and stable and that flashbacks are accurate reproductions of the traumatic incident; (b) the idea that eye movements, or other lateralized rhythmic behaviors, have an inhibitory effect on emotional memories; and (c) the assumption that EMDR is not only effective in treating PTSD, but can also be successfully applied to other psychopathological conditions. There is little support for any of these three assumptions. Meanwhile, the expansion of the theoretical underpinnings of EMDR in the absence of a sound empirical basis casts doubt on the massive proliferation of this treatment method.

  6. The current theoretical assumptions of the Bobath concept as determined by the members of BBTA.

    PubMed

    Raine, Sue

    2007-01-01

    The Bobath concept is a problem-solving approach to the assessment and treatment of individuals following a lesion of the central nervous system that offers therapists a framework for their clinical practice. The aim of this study was to facilitate a group of experts in determining the current theoretical assumptions underpinning the Bobath concept. A four-round Delphi study was used. The expert sample included all 15 members of the British Bobath Tutors Association. Initial statements were identified from the literature, with respondents generating additional statements. Level of agreement was determined by using a five-point Likert scale. Level of consensus was set at 80%. Eighty-five statements drawn from the literature were rated, along with 115 generated by the group. Ninety-three statements were identified as representing the theoretical underpinning of the Bobath concept. The Bobath experts agreed that therapists need to be aware of the principles of motor learning such as active participation, opportunities for practice and meaningful goals. They emphasized that therapy is an interactive process between individual, therapist, and the environment and aims to promote efficiency of movement to the individual's maximum potential rather than normal movement. Treatment was identified by the experts as having "change of functional outcome" at its center.

  7. Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis.

    PubMed

    Chen, Huey T

    2016-12-01

    Theories of program and theories of evaluation form the foundation of program evaluation theories. Theories of program reflect assumptions on how to conceptualize an intervention program for evaluation purposes, while theories of evaluation reflect assumptions on how to design useful evaluation. These two types of theories are related, but often discussed separately. This paper attempts to use three theoretical perspectives (reductionism, systems thinking, and pragmatic synthesis) to interface them and discuss the implications for evaluation practice. Reductionism proposes that an intervention program can be broken into crucial components for rigorous analyses; systems thinking views an intervention program as dynamic and complex, requiring a holistic examination. In spite of their contributions, reductionism and systems thinking represent the extreme ends of a theoretical spectrum; many real-world programs, however, may fall in the middle. Pragmatic synthesis is being developed to serve these moderate-complexity programs. These three theoretical perspectives have their own strengths and challenges. Knowledge of these three perspectives and their evaluation implications can provide a better guide for designing fruitful evaluations, improving the quality of evaluation practice, informing potential areas for developing cutting-edge evaluation approaches, and contributing to advancing program evaluation toward a mature applied science. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Investigating the Uncertainty in Global SST Trends Due to Internal Variations Using an Improved Trend Estimator

    NASA Astrophysics Data System (ADS)

    Lian, Tao; Shen, Zheqi; Ying, Jun; Tang, Youmin; Li, Junde; Ling, Zheng

    2018-03-01

    A new criterion was proposed recently to measure the influence of internal variations on secular trends in a time series. When the magnitude of the trend is greater than a theoretical threshold that scales the influence from internal variations, the sign of the estimated trend can be interpreted as the underlying long-term change. Otherwise, the sign may depend on the period chosen. An improved least squares method is developed here to further reduce the theoretical threshold and is applied to eight sea surface temperature (SST) data sets covering the period 1881-2013 to investigate whether there are robust trends in global SSTs. It is found that the warming trends in the western boundary regions, the South Atlantic, and the tropical and southern-most Indian Ocean are robust. However, robust trends are not found in the North Pacific, the North Atlantic, or the South Indian Ocean. The globally averaged SST and Indian Ocean Dipole indices are found to have robustly increased, whereas trends in the zonal SST gradient across the equatorial Pacific, Niño 3.4 SST, and the Atlantic Multidecadal Oscillation indices are within the uncertainty range associated with internal variations. These results indicate that great care is required when interpreting SST trends using the available records in certain regions and indices. It is worth noting that the theoretical threshold can be strongly influenced by low-frequency oscillations, and the above conclusions are based on the assumption that trends are linear. Caution should be exercised when applying the theoretical threshold criterion to real data.
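
    A minimal sketch of the kind of check described above may help: estimate the least-squares linear trend of a series and compare its magnitude with a threshold standing in for the influence of internal variations. The synthetic series and the threshold value are illustrative assumptions, not the improved estimator or the SST data sets of the study.

    ```python
    # Sketch: OLS trend of a time series compared against a hedged threshold meant to
    # represent internal variability. Data and threshold are illustrative assumptions.
    import numpy as np

    def linear_trend(t, x):
        """Return the ordinary least-squares slope of series x over times t."""
        A = np.vstack([t, np.ones_like(t)]).T
        slope, _ = np.linalg.lstsq(A, x, rcond=None)[0]
        return slope

    rng = np.random.default_rng(0)
    years = np.arange(1881, 2014, dtype=float)
    sst = (0.005 * (years - years[0])                  # imposed long-term change
           + 0.2 * np.sin(2 * np.pi * years / 60.0)    # low-frequency oscillation
           + 0.1 * rng.standard_normal(years.size))    # noise

    trend = linear_trend(years, sst)                   # degrees C per year
    threshold = 0.003                                  # hypothetical internal-variability threshold
    print(f"trend = {trend:.4f} C/yr, sign robust: {abs(trend) > threshold}")
    ```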

  9. Experimental Control of Simple Pendulum Model

    ERIC Educational Resources Information Center

    Medina, C.

    2004-01-01

    This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one and analyze how the model's assumptions affect its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…

  10. Building Intuitions about Statistical Inference Based on Resampling

    ERIC Educational Resources Information Center

    Watson, Jane; Chance, Beth

    2012-01-01

    Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…
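
    Because the abstract contrasts formal, distribution-based inference with resampling, a short bootstrap sketch may be useful; the data values and the confidence level are illustrative only.

    ```python
    # Sketch: a percentile bootstrap confidence interval for a mean, requiring no
    # theoretical distributional assumptions. Sample values are made up.
    import numpy as np

    rng = np.random.default_rng(1)
    sample = np.array([12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7, 10.4, 12.6])

    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
    ```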

  11. The "New" Economics of Education: Towards a "Unified" Macro/Micro-Educational Planning Policy.

    ERIC Educational Resources Information Center

    Kraft, Richard H.; Nakib, Yasser

    1991-01-01

    Takes issue with conventional human capital theory, questioning assumptions regarding external benefits, internal efficiency, educational purposes, and returns-to-education and manpower needs approaches. Reviews new theoretical directions regarding supply and demand, socialization, labor market segmentation, and overeducation and undereducation,…

  12. Generational Differences in Technology Adoption in Community Colleges

    ERIC Educational Resources Information Center

    Rosario, Victoria C.

    2012-01-01

    This research study investigated the technological perceptions and expectations of community college students, faculty, administrators, and Information Technology (IT) staff. The theoretical framework is based upon two assumptions on the process of technological innovation: it can be explained by diffusion of adoption theory, and by studying the…

  13. Scaffolding Student Participation in Mathematical Practices

    ERIC Educational Resources Information Center

    Moschkovich, Judit N.

    2015-01-01

    The concept of scaffolding can be used to describe various types of adult guidance, in multiple settings, across different time scales. This article clarifies what we mean by scaffolding, considering several questions specifically for scaffolding in mathematics: What theoretical assumptions are framing scaffolding? What is being scaffolded? At…

  14. Selective Mutism: Phenomenological Characteristics.

    ERIC Educational Resources Information Center

    Ford, Mary Ann; Sladeczek, Ingrid E.; Carlson, John; Kratochwill, Thomas R.

    1998-01-01

    To explore factors related to selective mutism (SM), a survey of persons (N=153, including 135 children) with SM was undertaken. Three theoretical assumptions are supported: (1) variant talking behaviors prior to identification of SM; (2) link between SM and social anxiety; (3) potential link between temperament and SM. (EMK)

  15. The Newtonian Mechanistic Paradigm, Special Education, and Contours of Alternatives: An Overview.

    ERIC Educational Resources Information Center

    Heshusius, Lous

    1989-01-01

    The article examines theoretical reorientations in special education away from the Newtonian mechanistic paradigm toward an emerging holistic paradigm. Recent literature is critiqued for renaming theories as paradigms, thereby providing an illusion of change while leaving fundamental mechanistic assumptions in place. (Author/DB)

  16. Interactivism: Change, Sensory-Emotional Intelligence, and Intentionality in Being and Learning.

    ERIC Educational Resources Information Center

    Bichelmeyer, Barbara A.

    This paper documents the theoretical framework of interactivism; articulates the pedagogical theory which frames its assumptions regarding effective educational practice; positions the pedagogy of interactivism against traditional pedagogical practice; and argues for the educational importance of the interactivist view. Interactivism is the term…

  17. Cognitive Processes in Dissociation: An Analysis of Core Theoretical Assumptions

    ERIC Educational Resources Information Center

    Giesbrecht, Timo; Lilienfield, Scott O.; Lynn, Steven Jay; Merckelbach, Harald

    2008-01-01

    Dissociation is typically defined as the lack of normal integration of thoughts, feelings, and experiences into consciousness and memory. The present article critically evaluates the research literature on cognitive processes in dissociation. The authors' review indicates that dissociation is characterized by subtle deficits in neuropsychological…

  18. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1988-01-01

    The previously constructed one dimensional model for the simulated operation of an iodine laser assumed that the perfluoroalkyl iodide gas n-C3F7I was incompressible. The present study removes this simplifying assumption and considers n-C3F7I as a compressible fluid.

  19. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
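
    One way to convey the "instant experience" idea is to place the observed residual plot of a fitted linear model next to residual plots of data simulated under that model, where the assumptions hold by construction. The sketch below does this with made-up data; it illustrates the general approach, not the authors' materials.

    ```python
    # Sketch: observed residuals vs residuals simulated under the fitted model.
    # Data, sample size, and the mild variance inflation are illustrative assumptions.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 80)
    y = 1.5 + 0.8 * x + rng.normal(scale=1.0 + 0.2 * x)   # variance grows with x

    coef = np.polyfit(x, y, 1)
    fitted = np.polyval(coef, x)
    resid = y - fitted
    sigma = resid.std(ddof=2)

    fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
    axes[0].scatter(fitted, resid, s=10)
    axes[0].set_title("observed residuals")
    for ax in axes[1:]:
        y_sim = fitted + rng.normal(scale=sigma, size=x.size)   # assumptions hold here
        c = np.polyfit(x, y_sim, 1)
        ax.scatter(np.polyval(c, x), y_sim - np.polyval(c, x), s=10)
        ax.set_title("simulated under the model")
    for ax in axes:
        ax.axhline(0.0, lw=0.8)
        ax.set_xlabel("fitted values")
    axes[0].set_ylabel("residuals")
    plt.tight_layout()
    plt.show()
    ```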

  20. Computation in generalised probabilistic theories

    NASA Astrophysics Data System (ADS)

    Lee, Ciarán M.; Barrett, Jonathan

    2015-08-01

    From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.

  1. Mass-conserving advection-diffusion Lattice Boltzmann model for multi-species reacting flows

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Darabiha, N.; Thévenin, D.

    2018-06-01

    Given the complex geometries usually found in practical applications, the Lattice Boltzmann (LB) method is becoming increasingly attractive. In addition to the simple treatment of intricate geometrical configurations, LB solvers can be implemented on very large parallel clusters with excellent scalability. However, reacting flows and especially combustion lead to additional challenges and have seldom been studied by LB methods. Indeed, overall mass conservation is a pressing issue in modeling multi-component flows. The classical advection-diffusion LB model recovers the species transport equations with the generalized Fick approximation under the assumption of an incompressible flow. However, for flows involving multiple species with different diffusion coefficients and density fluctuations - as is the case with weakly compressible solvers like Lattice Boltzmann -, this approximation is known not to conserve overall mass. In classical CFD, as the Fick approximation does not satisfy the overall mass conservation constraint a diffusion correction velocity is usually introduced. In the present work, a local expression is first derived for this correction velocity in a LB framework. In a second step, the error due to the incompressibility assumption is also accounted for through a modified equilibrium distribution function. Theoretical analyses and simulations show that the proposed scheme performs much better than the conventional advection-diffusion Lattice Boltzmann model in terms of overall mass conservation.
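
    The classical CFD remedy mentioned above can be illustrated in isolation. In the one-dimensional sketch below (not the authors' lattice Boltzmann scheme; grid, diffusivities, and mass-fraction profiles are illustrative assumptions), Fick fluxes with unequal diffusivities do not sum to zero, but adding a correction velocity restores overall mass conservation.

    ```python
    # Sketch: diffusion correction velocity for Fick-type species fluxes so that the
    # species fluxes sum to zero. All fields and coefficients are illustrative.
    import numpy as np

    nx = 101
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]

    # Mass fractions of three species (summing to one at every node).
    Y = np.zeros((3, nx))
    Y[0] = 0.2 + 0.1 * np.sin(2.0 * np.pi * x)
    Y[1] = 0.5 - 0.2 * x
    Y[2] = 1.0 - Y[0] - Y[1]
    D = np.array([1.0e-5, 3.0e-5, 7.0e-5])        # unequal diffusion coefficients

    gradY = np.gradient(Y, dx, axis=1)
    fick_flux = -D[:, None] * gradY               # generalized Fick approximation

    # Correction velocity chosen so that the corrected fluxes sum to zero.
    v_corr = -fick_flux.sum(axis=0)               # equals sum_k D_k dY_k/dx
    corrected_flux = fick_flux + Y * v_corr

    print("max |sum of Fick fluxes|     :", np.abs(fick_flux.sum(axis=0)).max())
    print("max |sum of corrected fluxes|:", np.abs(corrected_flux.sum(axis=0)).max())
    ```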

  2. Nurture Net of Nature: Re-Evaluating the Role of Shared Environments in Academic Achievement and Verbal Intelligence

    PubMed Central

    Daw, Jonathan; Guo, Guang; Harris, Kathie Mullan

    2016-01-01

    Prominent authors in the behavioral genetics tradition have long argued that shared environments do not meaningfully shape intelligence and academic achievement. However, we argue that these conclusions are erroneous due to large violations of the additivity assumption underlying behavioral genetics methods – that sources of genetic and shared and nonshared environmental variance are independent and non-interactive. This is compounded in some cases by the theoretical equation of the effective and objective environments, where the former is defined by whether siblings are made more or less similar, and the latter by whether siblings are equally subject to the environmental characteristic in question. Using monozygotic twin fixed effects models, which compare outcomes among genetically identical pairs, we show that many characteristics of objectively shared environments significantly moderate the effects of nonshared environments on adolescent academic achievement and verbal intelligence, violating the additivity assumption of behavioral genetic methods. Importantly, these effects would be categorized as nonshared environmental influences in standard twin models despite their roots in shared environments. These findings should encourage caution among those who claim that the frequently trivial variance attributed to shared environments in behavioral genetic models means that families, schools, and neighborhoods do not meaningfully influence these outcomes. PMID:26004471
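
    A hedged sketch of the monozygotic-twin fixed-effects logic described above: regress within-pair differences in the outcome on within-pair differences in a nonshared environment, and test whether a pair-level shared-environment characteristic moderates that effect. The simulated data and variable names are illustrative, not the study's data or model specification.

    ```python
    # Sketch: MZ-twin pair-difference regression with a shared-environment moderator.
    # A significant interaction term indicates that a shared environment conditions
    # the effect of a nonshared environment, violating additivity.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_pairs = 500
    shared_env = rng.normal(size=n_pairs)            # identical for both twins in a pair
    d_nonshared = rng.normal(size=n_pairs)           # within-pair difference in exposure
    # Simulated truth: the nonshared effect grows with the shared environment.
    d_outcome = (0.3 + 0.4 * shared_env) * d_nonshared + rng.normal(scale=0.5, size=n_pairs)

    X = sm.add_constant(np.column_stack([d_nonshared, shared_env * d_nonshared]))
    fit = sm.OLS(d_outcome, X).fit()
    print(fit.summary(xname=["const", "d_nonshared", "shared_x_d_nonshared"]))
    ```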

  3. Jet Velocity Profile Effects on Spray Characteristics of Impinging Jets at High Reynolds and Weber Numbers

    NASA Astrophysics Data System (ADS)

    Rodrigues, Neil S.; Kulkarni, Varun; Sojka, Paul E.

    2014-11-01

    While like-on-like doublet impinging jet atomization has been extensively studied in the literature, there is poor agreement between experimentally observed spray characteristics and theoretical predictions (Ryan et al. 1995, Anderson et al. 2006). Recent works (Bremond and Villermaux 2006, Choo and Kang 2007) have introduced a non-uniform jet velocity profile, which leads to a deviation from the standard assumptions for the sheet velocity and the sheet thickness parameter. These works have assumed a parabolic profile to serve as another limit to the traditional uniform jet velocity profile assumption. Incorporating a non-uniform jet velocity profile results in the sheet velocity and the sheet thickness parameter depending on the sheet azimuthal angle. In this work, the 1/7th power-law turbulent velocity profile is assumed to provide a closer match to the flow behavior of jets at high Reynolds and Weber numbers, which correspond to the impact wave regime. Predictions for the maximum wavelength, sheet breakup length, ligament diameter, and drop diameter are compared with experimental observations. The results demonstrate better agreement between experimentally measured values and predictions, compared to previous models. This work was supported by the U.S. Army Research Office under the Multi-University Research Initiative, Grant Number W911NF-08-1-0171.

  4. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
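
    A hedged sketch of the general approach may be useful: estimate distances from received signal strength with a log-distance path-loss model, then solve a weighted nonlinear least-squares position fit in which measurements believed to be more accurate receive larger weights. The channel parameters, anchor layout, and weighting rule below are illustrative assumptions, not the exact algorithms of the paper.

    ```python
    # Sketch: RSS-based localization with a log-distance path-loss model and a
    # weighted least-squares position fit. All numeric values are illustrative.
    import numpy as np
    from scipy.optimize import least_squares

    P0, n_exp, d0 = -40.0, 2.5, 1.0      # assumed model: RSS = P0 - 10*n*log10(d/d0)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([3.0, 6.0])

    rng = np.random.default_rng(4)
    d_true = np.linalg.norm(anchors - true_pos, axis=1)
    rss = P0 - 10.0 * n_exp * np.log10(d_true / d0) + rng.normal(scale=2.0, size=d_true.size)

    d_est = d0 * 10.0 ** ((P0 - rss) / (10.0 * n_exp))   # invert the channel model
    weights = 1.0 / d_est                                # heuristic: trust nearby anchors more

    def residuals(p):
        return weights * (np.linalg.norm(anchors - p, axis=1) - d_est)

    fit = least_squares(residuals, x0=anchors.mean(axis=0))
    print("estimated position:", fit.x, " true position:", true_pos)
    ```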

  5. Missing heritability in the tails of quantitative traits? A simulation study on the impact of slightly altered true genetic models.

    PubMed

    Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André

    2011-01-01

    Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
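
    A hedged simulation sketch of the design comparison described above: a variant whose effect is confined to the upper tail of a quantitative trait is tested with (a) linear regression on the full sample and (b) logistic regression on cases and controls drawn from the extremes. Sample sizes, effect sizes, and cut-offs are illustrative assumptions, not those of the paper.

    ```python
    # Sketch: power of full-sample linear regression vs extreme-sampling logistic
    # regression when the genetic effect acts only in the upper tail of the trait.
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n, maf, n_sim, alpha = 4000, 0.3, 100, 0.05
    hits_linear = hits_extreme = 0
    for _ in range(n_sim):
        g = rng.binomial(2, maf, size=n)
        y = rng.normal(size=n)
        tail = y > np.quantile(y, 0.9)                 # effect confined to the upper tail
        y = y + 0.4 * g * tail
        # (a) linear regression on the full quantitative trait
        hits_linear += stats.linregress(g, y).pvalue < alpha
        # (b) logistic regression on upper vs lower decile only
        lo, hi = np.quantile(y, [0.1, 0.9])
        keep = (y <= lo) | (y >= hi)
        X = sm.add_constant(g[keep].astype(float))
        fit = sm.Logit((y[keep] >= hi).astype(int), X).fit(disp=0)
        hits_extreme += fit.pvalues[1] < alpha
    print(f"power, linear on full sample : {hits_linear / n_sim:.2f}")
    print(f"power, extremes case-control : {hits_extreme / n_sim:.2f}")
    ```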

  6. Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach

    NASA Astrophysics Data System (ADS)

    van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.

    2018-03-01

    The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring for high-speed compressible flows have, however, so far proved to be largely inhibitive for the accurate experimental determination of instantaneous pressure. Obtaining mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and of comparable order of magnitude. This result permits simplification of the calculation of mean pressure from PIV velocity data and validates the approximation of neglecting temporal and spatial density variations without having access to reference pressure data.
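
    For orientation, the Reynolds-averaging idea can be sketched in two dimensions: with density fluctuations neglected and viscous terms dropped, the mean pressure gradient follows from the mean velocities and the Reynolds stresses. The synthetic fields, constant density, and omission of the viscous term below are illustrative assumptions, not the procedure of the paper.

    ```python
    # Sketch: x-component of the mean pressure gradient from Reynolds-averaged
    # momentum, dP/dx = -rho*(U dU/dx + V dU/dy + d<u'u'>/dx + d<u'v'>/dy).
    import numpy as np

    ny, nx = 64, 64
    x = np.linspace(0.0, 0.1, nx)
    y = np.linspace(0.0, 0.1, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y)

    rho = 1.2                                        # assumed constant mean density
    U = 200.0 + 50.0 * np.sin(20.0 * X)              # synthetic mean streamwise velocity
    V = 5.0 * np.cos(20.0 * Y)                       # synthetic mean wall-normal velocity
    uu = 25.0 * np.ones_like(U)                      # synthetic Reynolds normal stress <u'u'>
    uv = 5.0 * np.sin(20.0 * X) * np.cos(20.0 * Y)   # synthetic Reynolds shear stress <u'v'>

    dUdx = np.gradient(U, dx, axis=1)
    dUdy = np.gradient(U, dy, axis=0)
    duudx = np.gradient(uu, dx, axis=1)
    duvdy = np.gradient(uv, dy, axis=0)

    dPdx = -rho * (U * dUdx + V * dUdy + duudx + duvdy)
    print("mean dP/dx over the field:", dPdx.mean())
    ```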

  7. Nurture net of nature: Re-evaluating the role of shared environments in academic achievement and verbal intelligence.

    PubMed

    Daw, Jonathan; Guo, Guang; Harris, Kathie Mullan

    2015-07-01

    Prominent authors in the behavioral genetics tradition have long argued that shared environments do not meaningfully shape intelligence and academic achievement. However, we argue that these conclusions are erroneous due to large violations of the additivity assumption underlying behavioral genetics methods - that sources of genetic and shared and nonshared environmental variance are independent and non-interactive. This is compounded in some cases by the theoretical equation of the effective and objective environments, where the former is defined by whether siblings are made more or less similar, and the latter by whether siblings are equally subject to the environmental characteristic in question. Using monozygotic twin fixed effects models, which compare outcomes among genetically identical pairs, we show that many characteristics of objectively shared environments significantly moderate the effects of nonshared environments on adolescent academic achievement and verbal intelligence, violating the additivity assumption of behavioral genetic methods. Importantly, these effects would be categorized as nonshared environmental influences in standard twin models despite their roots in shared environments. These findings should encourage caution among those who claim that the frequently trivial variance attributed to shared environments in behavioral genetic models means that families, schools, and neighborhoods do not meaningfully influence these outcomes. Copyright © 2015. Published by Elsevier Inc.

  8. Transient competitive complexation in biological kinetic isotope fractionation explains non-steady isotopic effects: Theory and application to denitrification in soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggi, F.M.; Riley, W.J.

    2009-06-01

    The theoretical formulation of biological kinetic reactions in isotopic applications often assumes first-order or Michaelis-Menten-Monod kinetics under the quasi-steady-state assumption to simplify the system kinetics. However, isotopic effects have the same order of magnitude as the potential error introduced by these simplifications. Both formulations lead to a constant fractionation factor, which may yield incorrect estimations of the isotopic effect and a misleading interpretation of the isotopic signature of a reaction. We have analyzed the isotopic signature of denitrification in biogeochemical soil systems reported by Menyailo and Hungate [2006], where high ¹⁵N₂O enrichment during N₂O production and inverse isotope fractionation during N₂O consumption could not be explained with first-order kinetics and the Rayleigh equation, or with quasi-steady-state Michaelis-Menten-Monod kinetics. When the quasi-steady-state assumption was relaxed, transient Michaelis-Menten-Monod kinetics accurately reproduced the observations and aided in the interpretation of experimental isotopic signatures. These results may imply a substantial revision in using the Rayleigh equation for interpretation of isotopic signatures and in modeling biological kinetic isotope fractionation with first-order kinetics or quasi-steady-state Michaelis-Menten-Monod kinetics.
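
    A hedged sketch of the modeling contrast discussed above: explicit (transient) enzyme kinetics for two competing isotopologues, rather than quasi-steady-state Michaelis-Menten with a fixed fractionation factor. Rate constants, initial amounts, and the evaluation times are illustrative assumptions, not the calibrated denitrification model.

    ```python
    # Sketch: transient Michaelis-Menten kinetics for light (l) and heavy (h)
    # isotopologues competing for one enzyme; the instantaneous fractionation factor
    # is not constant during the transient, unlike the quasi-steady-state picture.
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, km1, k2 = 1.0e3, 1.0, 10.0        # light isotopologue: binding, unbinding, catalysis
    k1h, km1h, k2h = 1.0e3, 1.0, 9.7      # heavy isotopologue: slightly slower catalysis

    def rhs(t, z):
        Sl, Sh, Cl, Ch, E = z
        dCl = k1 * E * Sl - (km1 + k2) * Cl
        dCh = k1h * E * Sh - (km1h + k2h) * Ch
        return [-k1 * E * Sl + km1 * Cl, -k1h * E * Sh + km1h * Ch, dCl, dCh, -dCl - dCh]

    z0 = [1.0, 0.01, 0.0, 0.0, 1.0e-3]    # substrates, complexes, free enzyme
    sol = solve_ivp(rhs, (0.0, 150.0), z0, method="LSODA", dense_output=True)

    t_eval = np.array([1.0, 30.0, 120.0])
    Sl, Sh, Cl, Ch, _ = sol.sol(t_eval)
    alpha = (k2h * Ch / (k2 * Cl)) / (Sh / Sl)   # instantaneous fractionation factor
    print("alpha at t =", t_eval, ":", np.round(alpha, 4))
    ```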

  9. Searching for quantum optimal controls under severe constraints

    DOE PAGES

    Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...

    2015-04-06

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.

  10. On implementing maximum economic yield in commercial fisheries

    PubMed Central

    Dichmont, C. M.; Pascoe, S.; Kompas, T.; Punt, A. E.; Deng, R.

    2009-01-01

    Economists have long argued that a fishery that maximizes its economic potential usually will also satisfy its conservation objectives. Recently, maximum economic yield (MEY) has been identified as a primary management objective for Australian fisheries and is under consideration elsewhere. However, first attempts at estimating MEY as an actual management target for a real fishery (rather than a conceptual or theoretical exercise) have highlighted some substantial complexities generally unconsidered by fisheries economists. Here, we highlight some of the main issues encountered in our experience and their implications for estimating and transitioning to MEY. Using a bioeconomic model of an Australian fishery for which MEY is the management target, we note that unconstrained optimization may result in effort trajectories that would not be acceptable to industry or managers. Different assumptions regarding appropriate constraints result in different outcomes, each of which may be considered a valid MEY. Similarly, alternative treatments of prices and costs may result in differing estimates of MEY and their associated effort trajectories. To develop an implementable management strategy in an adaptive management framework, a set of assumptions must be agreed among scientists, economists, and industry and managers, indicating that operationalizing MEY is not simply a matter of estimating the numbers but requires strong industry commitment and involvement. PMID:20018676

  11. Of truth and pathways: chasing bits of information through myriads of articles.

    PubMed

    Krauthammer, Michael; Kra, Pauline; Iossifov, Ivan; Gomez, Shawn M; Hripcsak, George; Hatzivassiloglou, Vasileios; Friedman, Carol; Rzhetsky, Andrey

    2002-01-01

    Knowledge on interactions between molecules in living cells is indispensable for theoretical analysis and practical applications in modern genomics and molecular biology. Building such networks relies on the assumption that the correct molecular interactions are known or can be identified by reading a few research articles. However, this assumption does not necessarily hold, as truth is rather an emerging property based on many potentially conflicting facts. This paper explores the processes of knowledge generation and publishing in the molecular biology literature using modelling and analysis of real molecular interaction data. The data analysed in this article were automatically extracted from 50000 research articles in molecular biology using GeneWays, a computer system that includes a natural language processing module. The paper indicates that truthfulness of statements is associated in the minds of scientists with the relative importance (connectedness) of substances under study, revealing a potential selection bias in the reporting of research results. Aiming at understanding the statistical properties of the life cycle of biological facts reported in research articles, we formulate a stochastic model describing generation and propagation of knowledge about molecular interactions through scientific publications. We hope that in the future such a model can be useful for automatically producing consensus views of molecular interaction data.

  12. Energy dependence of effective electron mass and laser-induced ionization of wide band-gap solids

    NASA Astrophysics Data System (ADS)

    Gruzdev, V. E.

    2008-10-01

    Most of the traditional theoretical models of laser-induced ionization were developed under the assumption of constant effective electron mass or weak dependence of the effective mass on electron energy. Those assumptions exclude from consideration all the effects resulting from the significant increase of the effective mass with increasing electron energy in the real conduction band. Promotion of electrons to the states with high effective mass can occur either via laser-induced electron oscillations or via electron-particle collisions. Increase of the effective mass during laser-material interactions can result in specific regimes of ionization. Performing a simple qualitative analysis by comparison of the constant-mass approximation vs realistic dependences of the effective mass on electron energy, we demonstrate that the traditional ionization models provide reliable estimation of the ionization rate in a very limited domain of laser intensity and wavelength. By taking into account the increase of the effective mass with electron energy, we demonstrate that special regimes of high-intensity photo-ionization are possible depending on laser and material parameters. Qualitative analysis of the energy dependence of the effective mass also leads to the conclusion that avalanche ionization can be stopped by the effect of electron trapping in the states with large values of the effective mass.

  13. Reaction rates for mesoscopic reaction-diffusion kinetics

    DOE PAGES

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2015-02-23

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. Finally, we show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results.

  14. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  15. Intermediate Band Gap Solar Cells: The Effect of Resonant Tunneling on Delocalization

    NASA Astrophysics Data System (ADS)

    William, Reid; Mathew, Doty; Sanwli, Shilpa; Gammon, Dan; Bracker, Allan

    2011-03-01

    Quantum dots (QD's) have many unique properties, including tunable discrete energy levels, that make them suitable for a variety of next-generation photovoltaic applications. One application is the intermediate band solar cell (IBSC), in which QD's are incorporated into the bulk material. The QD's are tuned to absorb low-energy photons that would otherwise be wasted because their energy is less than the solar cell's bulk band gap. Current theory concludes that identical QD's should be arranged in a superlattice to form a completely delocalized intermediate band, maximizing absorption of low-energy photons while minimizing the decrease in the efficiency of the bulk material. We use a T-matrix model to assess the feasibility of forming a delocalized band given that real QD ensembles have an inhomogeneous distribution of energy levels. Our results suggest that formation of a band delocalized through a large QD superlattice is challenging, indicating that the assumptions underlying present IBSC theory require reexamination. We use time-resolved photoluminescence of coupled QD's to probe the effect of delocalized states on the dynamics of absorption, energy transport, and nonradiative relaxation. These results will allow us to reexamine the theoretical assumptions and determine the degree of delocalization necessary to create an efficient quantum-dot-based IBSC.

  16. Reaction rates for mesoscopic reaction-diffusion kinetics

    PubMed Central

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2016-01-01

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. We show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results. PMID:25768640

  17. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092

  18. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e., can one predict the behaviour of a sedimentary system? If one can, the empirical/deductive method has a chance; if one cannot, that method is bound to fail. The fundamental problem to solve is therefore: how does one predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that any interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world, and it is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same erroneous assumptions and do not solve the fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (three spatial dimensions plus one temporal dimension). Any method using a smaller number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can point to is the so-called "rationality" of today's society. Simple common sense leads to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences, such as physics and mathematics, and for applied sciences such as engineering. Not so for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be built by the theoretical/inductive approach and cannot be built by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems of geology.

  19. The impact of cloud vertical profile on liquid water path retrieval based on the bispectral method: A theoretical study based on large-eddy simulations of shallow marine boundary layer clouds.

    PubMed

    Miller, Daniel J; Zhang, Zhibo; Ackerman, Andrew S; Platnick, Steven; Baum, Bryan A

    2016-04-27

    Passive optical retrievals of cloud liquid water path (LWP), like those implemented for Moderate Resolution Imaging Spectroradiometer (MODIS), rely on cloud vertical profile assumptions to relate optical thickness (τ) and effective radius (re) retrievals to LWP. These techniques typically assume that shallow clouds are vertically homogeneous; however, an adiabatic cloud model is plausibly more realistic for shallow marine boundary layer cloud regimes. In this study a satellite retrieval simulator is used to perform MODIS-like satellite retrievals, which in turn are compared directly to the large-eddy simulation (LES) output. This satellite simulator creates a framework for rigorous quantification of the impact that vertical profile features have on LWP retrievals, and it accomplishes this while also avoiding sources of bias present in previous observational studies. The cloud vertical profiles from the LES are often more complex than either of the two standard assumptions, and the favored assumption was found to be sensitive to cloud regime (cumuliform/stratiform). Confirming previous studies, drizzle and cloud top entrainment of dry air are identified as physical features that bias LWP retrievals away from adiabatic and toward homogeneous assumptions. The mean bias induced by drizzle-influenced profiles was shown to be on the order of 5-10 g/m². In contrast, the influence of cloud top entrainment was found to be smaller by about a factor of 2. A theoretical framework is developed to explain variability in LWP retrievals by introducing modifications to the adiabatic re profile. In addition to analyzing bispectral retrievals, we also compare results with the vertical profile sensitivity of passive polarimetric retrieval techniques.

  20. The impact of cloud vertical profile on liquid water path retrieval based on the bispectral method: A theoretical study based on large-eddy simulations of shallow marine boundary layer clouds

    PubMed Central

    Miller, Daniel J.; Zhang, Zhibo; Ackerman, Andrew S.; Platnick, Steven; Baum, Bryan A.

    2018-01-01

    Passive optical retrievals of cloud liquid water path (LWP), like those implemented for Moderate Resolution Imaging Spectroradiometer (MODIS), rely on cloud vertical profile assumptions to relate optical thickness (τ) and effective radius (re) retrievals to LWP. These techniques typically assume that shallow clouds are vertically homogeneous; however, an adiabatic cloud model is plausibly more realistic for shallow marine boundary layer cloud regimes. In this study a satellite retrieval simulator is used to perform MODIS-like satellite retrievals, which in turn are compared directly to the large-eddy simulation (LES) output. This satellite simulator creates a framework for rigorous quantification of the impact that vertical profile features have on LWP retrievals, and it accomplishes this while also avoiding sources of bias present in previous observational studies. The cloud vertical profiles from the LES are often more complex than either of the two standard assumptions, and the favored assumption was found to be sensitive to cloud regime (cumuliform/stratiform). Confirming previous studies, drizzle and cloud top entrainment of dry air are identified as physical features that bias LWP retrievals away from adiabatic and toward homogeneous assumptions. The mean bias induced by drizzle-influenced profiles was shown to be on the order of 5–10 g/m2. In contrast, the influence of cloud top entrainment was found to be smaller by about a factor of 2. A theoretical framework is developed to explain variability in LWP retrievals by introducing modifications to the adiabatic re profile. In addition to analyzing bispectral retrievals, we also compare results with the vertical profile sensitivity of passive polarimetric retrieval techniques. PMID:29637042
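
    For reference, the two vertical-profile assumptions discussed above are commonly reduced to closed-form LWP relations: LWP = (2/3) ρ_w τ re for a vertically homogeneous cloud and LWP = (5/9) ρ_w τ re for an adiabatic profile (with re taken at cloud top). The sketch below evaluates both for illustrative inputs; it is not the retrieval simulator of the study.

    ```python
    # Sketch: LWP implied by the homogeneous and adiabatic vertical-profile
    # assumptions for a given optical thickness and effective radius (illustrative values).
    RHO_W = 1000.0                       # liquid water density, kg m^-3

    def lwp_homogeneous(tau, r_e):
        """LWP in g m^-2 for a vertically homogeneous cloud (r_e in metres)."""
        return (2.0 / 3.0) * RHO_W * tau * r_e * 1.0e3

    def lwp_adiabatic(tau, r_e):
        """LWP in g m^-2 under the adiabatic-profile assumption (r_e in metres)."""
        return (5.0 / 9.0) * RHO_W * tau * r_e * 1.0e3

    tau, r_e = 10.0, 10.0e-6             # optical thickness and cloud-top effective radius
    print(f"homogeneous: {lwp_homogeneous(tau, r_e):.1f} g/m^2")
    print(f"adiabatic:   {lwp_adiabatic(tau, r_e):.1f} g/m^2")
    ```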

  1. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
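
    The topographic index used by the conceptual model is ln(a/tanβ), where a is the upslope contributing area per unit contour length and β is the local slope angle. The sketch below evaluates it for a few illustrative grid cells; the values are assumptions, not the MACHYDRO 90 data.

    ```python
    # Sketch: topographic wetness index ln(a / tan(beta)) for illustrative DEM cells.
    # Larger values mark wetter, saturation-prone locations.
    import numpy as np

    cell_size = 30.0                                   # DEM resolution, m
    upslope_cells = np.array([3, 40, 250, 1200])       # accumulated upslope cells
    slope_deg = np.array([12.0, 8.0, 4.0, 1.5])        # local slope angle, degrees

    a = (upslope_cells * cell_size**2) / cell_size     # upslope area per unit contour length, m
    tan_beta = np.tan(np.radians(slope_deg))
    topo_index = np.log(a / tan_beta)
    print(np.round(topo_index, 2))
    ```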

  2. A Comparison of Analytical and Experimental Data for a Magnetic Actuator

    NASA Technical Reports Server (NTRS)

    Groom, Nelson J.; Bloodgood, V. Dale, Jr.

    2000-01-01

    Theoretical and experimental force-displacement and force-current data are compared for two configurations of a simple horseshoe, or bipolar, magnetic actuator. One configuration utilizes permanent magnet wafers to provide a bias flux and the other configuration has no source of bias flux. The theoretical data are obtained from two analytical models of each configuration. One is an ideal analytical model which is developed under the following assumptions: (1) zero fringing and leakage flux, (2) zero actuator coil mmf loss, and (3) infinite permeability of the actuator core and suspended element flux return path. The other analytical model, called the extended model, is developed by adding loss and leakage factors to the ideal model. The values of the loss and leakage factors are calculated from experimental data. The experimental data are obtained from a magnetic actuator test fixture, which is described in detail. Results indicate that the ideal models for both configurations do not match the experimental data very well. However, except for the range around zero force, the extended models produce a good match. The best match is produced by the extended model of the configuration with permanent magnet flux bias.
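
    Under the ideal-model assumptions listed above, a bipolar actuator without bias flux has gap flux density B = μ0·N·I/(2g) and total force F = μ0·N²·I²·A/(4g²) over its two pole faces. The sketch below evaluates this relation for illustrative coil and geometry values; it is an assumption-laden illustration, not the extended model with loss and leakage factors.

    ```python
    # Sketch: ideal (no leakage, no mmf loss, infinite core permeability) force of a
    # bipolar magnetic actuator without bias flux. Coil and geometry values are illustrative.
    import numpy as np

    MU0 = 4.0e-7 * np.pi                 # permeability of free space, H/m

    def ideal_force(current, gap, turns=200, pole_area=4.0e-4):
        """Attractive force (N): B = MU0*N*I/(2g); F = B^2*A/(2*MU0) per pole, two poles."""
        b_gap = MU0 * turns * current / (2.0 * gap)
        return b_gap**2 * pole_area / MU0

    for i in (0.5, 1.0, 2.0):            # coil current, A
        print(f"I = {i:.1f} A, gap = 1 mm -> F = {ideal_force(i, 1.0e-3):.2f} N")
    ```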

  3. A probabilistic framework for microarray data analysis: fundamental probability models and statistical inference.

    PubMed

    Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S

    2010-05-21

    Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceed the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles provides a framework that plays instead to the strength of microarrays. We present theoretical results that under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
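
    A minimal sketch of the ensemble idea described above: treat the full set of spot intensities as Gamma-distributed and estimate the shape and scale parameters by the method of moments. The synthetic intensities stand in for real microarray data, and the moment estimator is an illustrative choice.

    ```python
    # Sketch: method-of-moments fit of a Gamma model to an ensemble of intensities.
    import numpy as np

    rng = np.random.default_rng(7)
    intensities = rng.gamma(shape=2.5, scale=400.0, size=20_000)   # placeholder ensemble

    mean, var = intensities.mean(), intensities.var()
    shape_hat = mean**2 / var        # Gamma shape, method of moments
    scale_hat = var / mean           # Gamma scale, method of moments
    print(f"fitted Gamma: shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.1f}")
    ```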

  4. An Update on the Non-Mass-Dependent Isotope Fractionation under Thermal Gradient

    NASA Technical Reports Server (NTRS)

    Sun, Tao; Niles, Paul; Bao, Huiming; Socki, Richard; Liu, Yun

    2013-01-01

    Mass flow and a compositional gradient (elemental and isotope separation) occur when fluid(s) or gas(es) in an enclosure are subjected to a thermal gradient, a phenomenon known as thermal diffusion. Gas phase thermal diffusion has been theoretically and experimentally studied for more than a century, although there has not been a satisfactory theory to date. Nevertheless, for isotopic systems, the Chapman-Enskog theory predicts that the mass difference is the only term in the thermal diffusion separation factors that differs from one isotope pair to another, under the assumptions that the molecules are spherical and symmetric (monoatomic-like structure) and that particle collisions are elastic. Our previous report indicates that other factors may be playing a role, because the Non-Mass Dependent (NMD) effect is found for both symmetric and asymmetric, linear and spherical polyatomic molecules over a wide range of temperature (-196C to +237C). The observed NMD phenomenon in the simple thermal-diffusion experiments demands quantitative validation and theoretical explanation. Besides the pressure and temperature dependency illustrated in our previous reports, efforts are made in this study to address issues such as the role of convection or molecular structure and whether it is a transient, non-equilibrium effect only.

  5. An Empirically Calibrated Model of Cell Fate Decision Following Viral Infection

    NASA Astrophysics Data System (ADS)

    Coleman, Seth; Igoshin, Oleg; Golding, Ido

    The life cycle of the virus (phage) lambda is an established paradigm for the way genetic networks drive cell fate decisions. But despite decades of interrogation, we are still unable to theoretically predict whether the infection of a given cell will result in cell death or viral dormancy. The poor predictive power of current models reflects the absence of quantitative experimental data describing the regulatory interactions between different lambda genes. To address this gap, we are constructing a theoretical model that captures the known interactions in the lambda network. Model assumptions and parameters are calibrated using new single-cell data from our lab, describing the activity of lambda genes at single-molecule resolution. We began with a mean-field model, aimed at exploring the population averaged gene-expression trajectories under different initial conditions. Next, we will develop a stochastic formulation, to capture the differences between individual cells within the population. The eventual goal is to identify how the post-infection decision is driven by the interplay between network topology, initial conditions, and stochastic effects. The insights gained here will inform our understanding of cell fate choices in more complex cellular systems.

  6. Two roads diverged: Distinct mechanisms of attentional bias differentially predict negative affect and persistent negative thought.

    PubMed

    Onie, Sandersan; Most, Steven B

    2017-08-01

    Attentional biases to threatening stimuli have been implicated in various emotional disorders. Theoretical approaches often carry the implicit assumption that various attentional bias measures tap into the same underlying construct, but attention itself is not a unitary mechanism. Most attentional bias tasks, such as the dot probe (DP), index spatial attention, neglecting other potential attention mechanisms. We compared the DP with emotion-induced blindness (EIB), which appears to be mechanistically distinct, and examined the degree to which these tasks predicted (a) negative affect, (b) persistent negative thought (i.e., worry, rumination), and (c) each other. The two tasks did not predict each other, and they uniquely accounted for negative affect in a regression analysis. The relationship between EIB and negative affect was mediated by persistent negative thought, whereas that between the DP and negative affect was not, suggesting that EIB may be more intimately linked than spatial attention with persistent negative thought. Experiment 2 revealed EIB to have favorable test-retest reliability. Together, these findings underscore the importance of distinguishing between attentional bias mechanisms when constructing theoretical models of, and interventions that target, particular emotional disorders. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Resistance formulas in hydraulics-based models for routing debris flows

    USGS Publications Warehouse

    Chen, Cheng-lung; Ling, Chi-Hai

    1997-01-01

    The one-dimensional, cross-section-averaged flow equations formulated for routing debris flows down a narrow valley are identical to those for clear-water flow, except for the differences in the values of the flow parameters, such as the momentum (or energy) correction factor, resistance coefficient, and friction slope. Though these flow parameters for debris flow in channels with cross-sections of arbitrary geometric shape can only be determined empirically, the theoretical values of such parameters for debris flow in wide channels exist. This paper aims to derive the theoretical resistance coefficient and friction slope for debris flow in wide channels using a rheological model for highly-concentrated, rapidly-sheared granular flows, such as the generalized viscoplastic fluid (GVF) model. Formulating such resistance coefficient or friction slope is equivalent to developing a generally applicable resistance formula for routing debris flows. Inclusion of a nonuniform term in the expression of the resistance formula proves useful in removing the customary assumption that the spatially varied resistance at any section is equal to what would take place with the same rate of flow passing the same section under conditions of uniformity. This in effect implies an improvement in the accuracy of unsteady debris-flow computation.

  8. Non-Linear Vibroisolation Pads Design, Numerical FEM Analysis and Introductory Experimental Investigations

    NASA Astrophysics Data System (ADS)

    Zielnica, J.; Ziółkowski, A.; Cempel, C.

    2003-03-01

    The objective of the paper is the design, and the theoretical and experimental investigation, of vibroisolation pads with non-linear static and dynamic responses. The analytical investigations are based on non-linear finite element analysis in which the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new, antisymmetric model of a vibroisolation pad was designed and analysed by the finite element method based on second-order theory (large displacements and strains) with the assumption of material non-linearity (Mooney-Rivlin model). The stability-loss phenomenon was exploited in the design of the vibroisolators, and it was shown that it is possible to design a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation applications. The materials used for the vibroisolator are rubber, elastomers, and similar materials. The results of the theoretical investigations were examined experimentally: a series of models made of soft rubber was designed for the tests, and the experimental investigations of the vibroisolation models under static and dynamic loads confirmed the results of the FEM analysis.
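
    The abstract attributes the material non-linearity to the Mooney-Rivlin model. As a hedged illustration (not the authors' FEM model; the material constants are hypothetical), the sketch below evaluates the classical incompressible Mooney-Rivlin response in uniaxial stretch, which already shows the non-linear load-deflection behaviour such pads exploit.

    ```python
    # Hedged illustration of the Mooney-Rivlin non-linearity named in the abstract
    # (not the authors' FEM model). For an incompressible Mooney-Rivlin solid under
    # uniaxial stretch lambda, the nominal (engineering) stress is
    #   P = 2 * (lambda - 1/lambda**2) * (C10 + C01/lambda).
    # The constants C10 and C01 below are hypothetical example values.

    C10, C01 = 0.30, 0.05  # MPa, hypothetical Mooney-Rivlin constants

    def nominal_stress(stretch: float) -> float:
        """Engineering stress (MPa) in uniaxial tension/compression."""
        return 2.0 * (stretch - 1.0 / stretch**2) * (C10 + C01 / stretch)

    for stretch in (0.7, 0.9, 1.0, 1.2, 1.5, 2.0):
        print(f"stretch {stretch:4.1f}: nominal stress {nominal_stress(stretch):+.3f} MPa")
    ```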

  9. Finite-block-length analysis in classical and quantum information theory.

    PubMed

    Hayashi, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.
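
    As a concrete example of the finite-size effects such reviews survey, the sketch below evaluates the standard second-order ("normal") approximation to the maximal coding rate of a binary symmetric channel. The formula is the well-known finite-blocklength approximation from the literature, not a result specific to this review, and the crossover probability and target error are hypothetical example values.

    ```python
    # Standard finite-blocklength normal approximation for a binary symmetric channel:
    # rate(n, eps) ~ C - sqrt(V/n) * Qinv(eps) + log2(n)/(2n),
    # with capacity C = 1 - h2(p) and dispersion V = p(1-p) * log2((1-p)/p)^2.
    # Example values of p and eps are hypothetical.
    import math
    from scipy.stats import norm

    def bsc_normal_approx_rate(n, p, eps):
        """Approximate maximal rate (bits/channel use) at blocklength n and error eps."""
        h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        capacity = 1.0 - h2
        dispersion = p * (1 - p) * math.log2((1 - p) / p) ** 2
        return capacity - math.sqrt(dispersion / n) * norm.isf(eps) + math.log2(n) / (2 * n)

    p, eps = 0.11, 1e-3
    for n in (100, 1_000, 10_000, 100_000):
        print(f"n = {n:>6}: approx. rate = {bsc_normal_approx_rate(n, p, eps):.4f} bits/use")
    ```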

  10. Recognition and source memory as multivariate decision processes.

    PubMed

    Banks, W P

    2000-07-01

    Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
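
    A minimal sketch of the kind of multidimensional signal-detection representation the abstract describes is given below: each item class is a Gaussian in a two-dimensional strength space, and recognition and source judgements are read off different decision axes drawn through that space. The means, criteria, and sample sizes are hypothetical, and this is not the author's fitted model.

    ```python
    # Hedged sketch: a two-dimensional signal-detection representation with
    # recognition read off the summed-strength axis and source read off the
    # difference axis. All means, criteria, and sample sizes are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 20_000, 1.5                          # hypothetical sample size and strength separation
    new      = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
    source_a = rng.normal([d,   0.0], 1.0, size=(n, 2))
    source_b = rng.normal([0.0, d  ], 1.0, size=(n, 2))

    def recognize(items, criterion=1.0):
        """Old/new decision along the summed-strength axis."""
        return items.sum(axis=1) > criterion

    def attribute_to_a(items):
        """Source decision along the difference axis (source A if positive)."""
        return (items[:, 0] - items[:, 1]) > 0

    print("hit rate, source A:  ", recognize(source_a).mean())
    print("hit rate, source B:  ", recognize(source_b).mean())
    print("false-alarm rate:    ", recognize(new).mean())
    print("source accuracy (A): ", attribute_to_a(source_a).mean())
    print("source accuracy (B): ", (~attribute_to_a(source_b)).mean())
    ```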

  11. Controlled interaction: strategies for using virtual reality to study perception.

    PubMed

    Durgin, Frank H; Li, Zhi

    2010-05-01

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely both on scale-invariant metrics (such as power function exponents) and on careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.

  12. Stability and diversity in collective adaptation

    NASA Astrophysics Data System (ADS)

    Sato, Yuzuru; Akiyama, Eizo; Crutchfield, James P.

    2005-10-01

    We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by the dynamics of uncertainty, giving a novel view of collective adaptation.
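
    A schematic numerical version of such dynamics is sketched below: coupled replicator-style equations in which each agent's strategy is pulled toward higher payoff while a memory-loss term pushes it back toward randomness. The payoff matrices, learning rate, and memory-loss rate are hypothetical illustration values, and the equations are a generic form rather than the paper's exact derivation.

    ```python
    # Hedged sketch of coupled adaptation dynamics: replicator-style payoff terms
    # plus an entropy-like memory-loss term, integrated with a simple Euler step.
    # Payoff matrices (rock-paper-scissors), rates, and step size are hypothetical.
    import numpy as np

    A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)  # agent X payoffs
    B = -A                                                           # zero-sum partner
    alpha, beta, dt = 0.01, 1.0, 0.01     # memory loss, adaptation rate, time step

    def step(x, y):
        """One Euler step of the coupled adaptation dynamics."""
        fx, fy = A @ y, B @ x
        dx = x * (beta * (fx - x @ fx) + alpha * (-np.log(x) + x @ np.log(x)))
        dy = y * (beta * (fy - y @ fy) + alpha * (-np.log(y) + y @ np.log(y)))
        x, y = x + dt * dx, y + dt * dy
        return x / x.sum(), y / y.sum()   # renormalise against numerical drift

    x = np.array([0.5, 0.3, 0.2])
    y = np.array([0.2, 0.3, 0.5])
    for _ in range(20_000):
        x, y = step(x, y)
    print("long-run strategies:", np.round(x, 3), np.round(y, 3))
    ```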

  13. Finite-block-length analysis in classical and quantum information theory

    PubMed Central

    HAYASHI, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects. PMID:28302962

  14. Linking nursing unit's culture to organizational effectiveness: a measurement tool.

    PubMed

    Casida, Jesus

    2008-01-01

    Organizational culture consists of the deep underlying assumptions, beliefs, and values that are shared by members of the organization and typically operate unconsciously. The four organizational culture traits of the Denison Organizational Culture Model (DOCM) are characteristics of organizational effectiveness, which include adaptability, involvement, consistency, and mission. Effective organizations demonstrate high levels of the four cultural traits which reflect their ability to balance the dynamic tension between the need for stability and the need for flexibility within the organization. The Denison Organizational Culture Survey (DOCS) is a measurement tool that was founded on the theoretical framework of the DOCM, and in the field of business, is one of the most commonly used tools for measuring organizational culture. The DOCS offers a promising approach to operationalizing and measuring the link between organizational culture and organizational effectiveness in the context of nursing units.

  15. Optical device for thermal diffusivity determination in liquids by reflection of a thermal wave

    NASA Astrophysics Data System (ADS)

    Sánchez-Pérez, C.; De León-Hernández, A.; García-Cadena, C.

    2017-08-01

    In this work, we present a device for determining the thermal diffusivity using the oblique reflection of a thermal wave within a solid slab in contact with the medium to be characterized. By using the reflection near a critical angle, under the assumption that thermal waves obey Snell's law of refraction with the square roots of the thermal diffusivities, the unknown thermal diffusivity is obtained from simple formulae. Experimentally, the sensor response is measured using the photothermal beam-deflection technique within the slab, which results in a compact device in which the probing laser beam does not contact the sample. We describe the theoretical basis and provide experimental results to validate the proposed method. We determine the thermal diffusivity of tridistilled water and glycerin solutions with an error of less than 0.5%.
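
    The abstract states only that "simple formulae" recover the diffusivity; as a purely illustrative sketch (the authors' exact formula is not reproduced here), one can encode the stated Snell-law analogy, with the ratio of sines given by the square root of the diffusivity ratio, and invert the critical-angle condition for the sample diffusivity. All numerical values are hypothetical.

    ```python
    # Purely illustrative sketch, not the authors' formula. Assuming the Snell-law
    # analogy stated in the abstract, the critical-angle condition can be written
    # sin(theta_c) = sqrt(alpha_slab / alpha_sample) and inverted for alpha_sample.
    import math

    alpha_slab = 1.0e-7     # m^2/s, hypothetical diffusivity of the solid slab
    theta_c_deg = 57.0      # degrees, hypothetical measured critical angle

    theta_c = math.radians(theta_c_deg)
    alpha_sample = alpha_slab / math.sin(theta_c) ** 2
    print(f"estimated sample diffusivity: {alpha_sample:.3e} m^2/s")
    ```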

  16. Biological evolution and statistical physics

    NASA Astrophysics Data System (ADS)

    Drossel, Barbara

    2001-03-01

    This review is an introduction to theoretical models and mathematical calculations for biological evolution, aimed at physicists. The methods in the field are naturally very similar to those used in statistical physics, although the majority of publications have appeared in biology journals. The review has three parts, which can be read independently. The first part deals with evolution in fitness landscapes and includes Fisher's theorem, adaptive walks, quasispecies models, effects of finite population sizes, and neutral evolution. The second part studies models of coevolution, including evolutionary game theory, kin selection, group selection, sexual selection, speciation, and coevolution of hosts and parasites. The third part discusses models for networks of interacting species and their extinction avalanches. Throughout the review, attention is paid to giving the necessary biological information, and to pointing out the assumptions underlying the models, and their limits of validity.

  17. A Determinate Model of Thrust-Augmenting Ejectors

    NASA Astrophysics Data System (ADS)

    Whitley, N.; Krothapalli, A.; van Dommelen, L.

    1996-01-01

    A theoretical analysis of the compressible flow through a constant-area jet-engine ejector in which a primary jet mixes with ambient fluid from a uniform free stream is pursued. The problem is reduced to a determinate mathematical one by prescribing the ratios of stagnation properties between the primary and secondary flows. For some selections of properties and parameters more than one solution is possible and the meaning of these solutions is discussed by means of asymptotic expansions. Our results further show that while under stationary conditions the thrust-augmentation ratio assumes a value of 2 in the large area-ratio limit, for a free-stream Mach number greater than 0.6 very little thrust augmentation is left. Due to the assumptions made, the analysis provides idealized values for the thrust-augmentation ratio and the mass flux entrainment factor.

  18. Structure of right-handed neutrino mass matrix

    NASA Astrophysics Data System (ADS)

    Koide, Yoshio

    2017-11-01

    Recently, Nishiura and the author proposed a unified quark-lepton mass matrix model under a family symmetry U(3)×U(3)'. The model can give an excellent parameter fit to the observed quark and neutrino data. The model has a reasonable basis as far as the quark sector is concerned, but in the neutrino sector the form of the right-handed neutrino mass matrix M_R has no theoretical basis; that is, it was nothing but a phenomenological assumption. In this paper, it is pointed out that the form of M_R originates in the structure of the Majorana mass matrix (4×4 matrix) for the left-handed fields ((ν_L)_i, (ν_R^c)_i, (N_L)_α, (N_R^c)_α), where ν_i (i = 1, 2, 3) and N_α (α = 1, 2, 3) are U(3)-family and U(3)'-family triplets, respectively.

  19. Can organizations benefit from worksite health promotion?

    PubMed Central

    Leviton, L C

    1989-01-01

    A decision-analytic model was developed to project the future effects of selected worksite health promotion activities on employees' likelihood of chronic disease and injury and on employer costs due to illness. The model employed a conservative set of assumptions and a limited five-year time frame. Under these assumptions, hypertension control and seat belt campaigns prevent a substantial amount of illness, injury, and death. Sensitivity analysis indicates that these two programs pay for themselves and under some conditions show a modest savings to the employer. Under some conditions, smoking cessation programs pay for themselves, preventing a modest amount of illness and death. Cholesterol reduction by behavioral means does not pay for itself under these assumptions. These findings imply priorities in prevention for employer and employee alike. PMID:2499556

  20. Allgemeine Sprachfaehigkeit und Fremdsprachenerwerb. Zur Struktur von Leistungsdimensionen und linguistischer Kompetenz des Fremdsprachenlerners (General Language Ability and Foreign Language Acquisition. On the Structure of Performance Dimensions and the Linguistic Competence of the Foreign Language Learner). Diskussions beitraege aus dem Institute fuer Bildungsforschung, No. 1.

    ERIC Educational Resources Information Center

    Sang, Fritz; Vollmer, Helmut J.

    This study investigates the theoretical plausibility and empirical validity of the assumption that all performance in a foreign language can be traced back to a single factor, the general language ability factor. The theoretical background of this hypothesis is reviewed in detail. The concept of a unitary linguistic competence, interpreted as an…

  1. Brownian motion with adaptive drift for remaining useful life prediction: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
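
    The flavour of the state-space updating discussed above can be conveyed with a small sketch: a scalar Kalman filter that re-estimates the drift coefficient of a Brownian degradation model whenever a new measurement increment arrives. The noise levels and simulated data are hypothetical, and this is a generic filter rather than the authors' exact model.

    ```python
    # Hedged sketch: scalar Kalman filtering of an adaptive drift coefficient in a
    # Brownian degradation model. Measurement model: dx = eta*dt + sigma_b*dW.
    # All noise levels and data are hypothetical; this is not the paper's model.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, sigma_b, Q = 1.0, 0.05, 1e-4     # step, diffusion, drift process noise (hypothetical)

    # Simulate a degradation path whose true drift wanders slowly.
    true_drift, increments = 0.02, []
    for _ in range(200):
        true_drift += rng.normal(0.0, np.sqrt(Q))
        increments.append(true_drift * dt + sigma_b * rng.normal(0.0, np.sqrt(dt)))

    # Kalman filtering of the drift from the observed increments.
    eta, P = 0.0, 1.0                    # prior mean and variance of the drift
    for dx in increments:
        P = P + Q                        # predict: drift evolves as a random walk
        S = dt**2 * P + sigma_b**2 * dt  # innovation variance
        K = P * dt / S                   # Kalman gain
        eta = eta + K * (dx - eta * dt)  # update the posterior drift estimate
        P = (1.0 - K * dt) * P

    print(f"true drift ~ {true_drift:.4f}, filtered estimate = {eta:.4f}")
    ```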

  2. Sliding friction between polymer surfaces: A molecular interpretation

    NASA Astrophysics Data System (ADS)

    Allegra, Giuseppe; Raos, Guido

    2006-04-01

    For two contacting rigid bodies, the friction force F is proportional to the normal load and independent of the macroscopic contact area and relative velocity V (Amontons' law). With two mutually sliding polymer samples, the surface irregularities transmit deformation to the underlying material. Energy loss along the deformation cycles is responsible for the friction force, which now appears to depend strongly on V [see, e.g., N. Maeda et al., Science 297, 379 (2002)]. We base our theoretical interpretation on the assumption that polymer chains are mainly subjected to oscillatory "reptation" along their "tubes." At high deformation frequencies, i.e., with a large sliding velocity V, the internal viscosity due to the rotational energy barriers around chain bonds hinders intramolecular mobility. As a result, energy dissipation and the correlated friction force strongly diminish at large V. Derived from a linear differential equation for chain dynamics, our results are basically consistent with the experimental data by Maeda et al. [Science 297, 379 (2002)] on modified polystyrene. Although the bulk polymer is below T_g, we regard the first few chain layers below the surface as being in the liquid state. In particular, the observed maximum of F vs V is consistent with physically reasonable values of the molecular parameters. As a general result, the ratio F/V is a steadily decreasing function of V, tending to V^-2 for large velocities. We evaluate a much smaller friction for a cross-linked polymer under the assumption that the junctions are effectively immobile, also in agreement with the experimental results of Maeda et al. [Science 297, 379 (2002)].

  3. Theoretical Implications of Disordered Syntactic Comprehension.

    ERIC Educational Resources Information Center

    Rindflesch, Thomas; Reeves, Jennifer E.

    1992-01-01

    Reexamines data from Caplan and Hildebrandt (1988) with a new set of background assumptions and concludes a Government-Binding-based account is not supported. Instead, deficits observed in the process of infinitival complement constructions are attributed to patient inability to fully access the data structure required to support a proposed…

  4. Cultivating Teachers' Morality and the Pedagogy of Emotional Rationality

    ERIC Educational Resources Information Center

    Kim, Minkang

    2013-01-01

    Teachers are expected to act ethically and provide moral role models in performing their duties, even though teacher education has often relegated the cultivation of teachers' ethical awareness and moral development to the margins. When it is addressed, the main theoretical assumptions have relied heavily on the cognitivist developmental theories…

  5. Pedagogies of Indignation and "The Lives of Others"

    ERIC Educational Resources Information Center

    Suissa, Judith

    2017-01-01

    Neel Mukherjee's novel, "The Lives of Others", which depicts characters dealing with a situation of extreme and violent oppression, is used as the basis for looking more closely at some of the theoretical assumptions about hope, agency and critical consciousness that underpin Critical Pedagogy. It is suggested that it may be…

  6. The Syntax and Pragmatics of Fronting in Germanic

    ERIC Educational Resources Information Center

    Light, Caitlin

    2012-01-01

    Across the Germanic language family, we find a type of movement traditionally termed "topicalization," which may be realized in Germanic languages which possess the so-called Verb-Second (V2) constraint, as well as those without it. I will henceforward call this phenomenon "fronting" to avoid theoretical assumptions. This…

  7. Network Analysis in Comparative Social Sciences

    ERIC Educational Resources Information Center

    Vera, Eugenia Roldan; Schupp, Thomas

    2006-01-01

    This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…

  8. Modification of the DSN radio frequency angular tropospheric refraction model

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    The previously derived DSN Radio Frequency Angular Tropospheric Refraction Model contained an assumption which was subsequently seen to be at variance with the theoretical basis of angular refraction. The modification necessary to correct the model is minor in that the value of a constant is changed.

  9. Stochastic game theory: for playing games, not just for doing theory.

    PubMed

    Goeree, J K; Holt, C A

    1999-09-14

    Recent theoretical advances have dramatically increased the relevance of game theory for predicting human behavior in interactive situations. By relaxing the classical assumptions of perfect rationality and perfect foresight, we obtain much improved explanations of initial decisions, dynamic patterns of learning and adjustment, and equilibrium steady-state distributions.

  10. The Worldview Dimensions of Individualism and Collectivism: Implications for Counseling.

    ERIC Educational Resources Information Center

    Williams, Bryant

    2003-01-01

    A recent article, "Rethinking Individualism and Collectivism: Evaluation of Theoretical Assumptions and Meta-Analyses" (D. Oyserman, H. M. Coon, & M. Kemmelmeier, 2002), revealed that 170 studies have been conducted on the worldview dimensions of individualism and collectivism. This article reviews the results of the authors'…

  11. Cognitive Processes in Dissociation: Comment on Giesbrecht et al. (2008)

    ERIC Educational Resources Information Center

    Bremner, J. Douglas

    2010-01-01

    In their recent review "Cognitive Processes in Dissociation: An Analysis of Core Theoretical Assumptions," published in "Psychological Bulletin", Giesbrecht, Lynn, Lilienfeld, and Merckelbach (2008) have challenged the widely accepted trauma theory of dissociation, which holds that dissociative symptoms are caused by traumatic stress. In doing so,…

  12. Parent-Child Interaction: Research and Its Practical Implications.

    ERIC Educational Resources Information Center

    Smart, Margaret E.; Minet, Selma B.

    This report, prepared as part of the Project in Television and Early Childhood Education at the University of Southern California, contains a review of landmark and current literature on parent-child interaction (PCI). Major theoretical assumptions, research procedures and findings are analyzed in order to develop a model of parent-child…

  13. Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.

    ERIC Educational Resources Information Center

    Mullen, Kenneth; Ennis, Daniel M.

    1987-01-01

    Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)

  14. Parent-Child Relationships of Boys in Different Offending Trajectories: A Developmental Perspective

    ERIC Educational Resources Information Center

    Keijsers, Loes; Loeber, Rolf; Branje, Susan; Meeus, Wim

    2012-01-01

    Background: This study tested the theoretical assumption that transformations of parent-child relationships in late childhood and adolescence would differ for boys following different offending trajectories. Methods: Using longitudinal multiinformant data of 503 boys (ages 7-19), we conducted Growth Mixture Modeling to extract offending…

  15. Toward an Instructionally Oriented Theory of Example-Based Learning

    ERIC Educational Resources Information Center

    Renkl, Alexander

    2014-01-01

    Learning from examples is a very effective means of initial cognitive skill acquisition. There is an enormous body of research on the specifics of this learning method. This article presents an instructionally oriented theory of example-based learning that integrates theoretical assumptions and findings from three research areas: learning from…

  16. Issues in the Intellectual Assessment of Hearing Impaired Children

    ERIC Educational Resources Information Center

    Hughes, Deana; Sapp, Gary L.; Kohler, Maxie P.

    2006-01-01

    The assessment of hearing impaired children is fraught with a number of problems. These include lack of valid assessment measures, faulty theoretical assumptions, lack of knowledge regarding the functioning of cognitive processes of these children, and biases against these children. This article briefly considers these issues and describes a study…

  17. Latter-Day Saint Women and Leadership: The Influence of Their Religious Worldview

    ERIC Educational Resources Information Center

    Madsen, Susan R.

    2016-01-01

    The article examines theories, assumptions, concepts, experiences, and practices from the Latter-day Saints' (LDS, or the Mormons) religious worldview to expand existing theoretical constructs and implications of leadership development and education for women. The article elucidates LDS doctrine and culture regarding women and provides specific…

  18. Psychologic-Pedagogical Conditions for Prevention of Suicidal Tendencies among Teenagers

    ERIC Educational Resources Information Center

    Abil, Yerkin A.; Kim, Natalia P.; Baymuhambetova, Botagoz Sh.; Mamiyev, Nurlan B.; Li, Yelena D.; Shumeyko, Tatyana S.

    2016-01-01

    The aim of the research is to develop a complex of psychological-pedagogical conditions directed at the prevention of suicidal tendencies among teenagers. On the basis of an analysis of the scientific literature, the authors identify the main causes of suicidal behavior in adolescence. To confirm the scientific validity of the advanced theoretical assumptions, the authors describe an experiment conducted on basis…

  19. Moral Development in Higher Education

    ERIC Educational Resources Information Center

    Liddell, Debora L.; Cooper, Diane L.

    2012-01-01

    In this article, the authors lay out the basic foundational concepts and assumptions that will guide the reader through the chapters to come as the chapter authors explore "how" moral growth can be facilitated through various initiatives on the college campus. This article presents a brief review of the theoretical frameworks that provide the…

  20. The Critical Purchase of Genealogy: Critiquing Student Participation Projects

    ERIC Educational Resources Information Center

    Anderson, Anna

    2015-01-01

    Until recently the dominant critique of "student participation" projects was one based on the theoretical assumptions of critical theory in the form of critical pedagogy. Over the last decade, we have witnessed the emergence of a critical education discourse that theorises and critically analyses such projects using Foucault's notion of…

  1. Normalizing Catastrophe: An Educational Response

    ERIC Educational Resources Information Center

    Jickling, Bob

    2013-01-01

    Processes of normalizing assumptions and values have been the subjects of theoretical framing and critique for several decades now. Critique has often been tied to issues of environmental sustainability and social justice. Now, in an era of global warming, there is a rising concern that the results of normalizing of present values could be…

  2. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  3. Ecosystemic Complexity Theory of Conflict: Understanding the Fog of Conflict

    ERIC Educational Resources Information Center

    Brack, Greg; Lassiter, Pamela S.; Hill, Michele B.; Moore, Sarah A.

    2011-01-01

    Counselors often engage in conflict mediation in professional practice. A model for understanding the complex and subtle nature of conflict resolution is presented. The ecosystemic complexity theory of conflict is offered to assist practitioners in navigating the fog of conflict. Theoretical assumptions are discussed with implications for clinical…

  4. Diversity in Literary Response: Revisiting Gender Expectations

    ERIC Educational Resources Information Center

    Brendler, Beth M.

    2014-01-01

    Drawing on and reexamining theories on gender and literacy, derived from research performed between 1974 and 2002, this qualitative study explored the gender assumptions and expectations of Language Arts teachers in a graduate level adolescent literature course at a university in the Midwestern United States. The theoretical framework was…

  5. Practitioner Review: Approaches to Assessment and Treatment of Children with DCD--An Evaluative Review

    ERIC Educational Resources Information Center

    Wilson, Peter H.

    2005-01-01

    Background: Movement clumsiness (or Developmental Coordination Disorder--DCD) has gained increasing recognition as a significant condition of childhood. However, some uncertainty still exists about diagnosis. Accordingly, approaches to assessment and treatment are varied, each drawing on distinct theoretical assumptions about the aetiology of the…

  6. Examining Transfer Effects from Dialogic Discussions to New Tasks and Contexts

    ERIC Educational Resources Information Center

    Reznitskaya, Alina; Glina, Monica; Carolan, Brian; Michaud, Olivier; Rogers, Jon; Sequeira, Lavina

    2012-01-01

    This study investigated whether students who engage in inquiry dialogue with others improve their performance on various tasks measuring argumentation development. The study used an educational environment called Philosophy for Children (P4C) to examine specific theoretical assumptions regarding the role dialogic interaction plays in the…

  7. Toward a Social Approach to Learning in Community Service Learning

    ERIC Educational Resources Information Center

    Cooks, Leda; Scharrer, Erica; Paredes, Mari Castaneda

    2004-01-01

    The authors describe a social approach to learning in community service learning that extends the contributions of three theoretical bodies of scholarship on learning: social constructionism, critical pedagogy, and community service learning. Building on the assumptions about learning described in each of these areas, engagement, identity, and…

  8. High-mass stars in Milky Way clusters

    NASA Astrophysics Data System (ADS)

    Negueruela, Ignacio

    2017-11-01

    Young open clusters are our laboratories for studying high-mass star formation and evolution. Unfortunately, the information that they provide is difficult to interpret, and sometimes contradictory. In this contribution, I present a few examples of the uncertainties that we face when confronting observations with theoretical models and our own assumptions.

  9. Play-Based Art Activities in Early Years: Teachers' Thinking and Practice

    ERIC Educational Resources Information Center

    Savva, Andri; Erakleous, Valentina

    2018-01-01

    The present study reports findings on pre-service teachers' thinking during planning and implementing play-based art activities. "Thinking" (in the present study) is informed by discourses emphasising art teaching and learning in relation to play and theoretical assumptions conceptualising planning as "practice of knowing."…

  10. Partial Least Squares Structural Equation Modeling with R

    ERIC Educational Resources Information Center

    Ravand, Hamdollah; Baghaei, Purya

    2016-01-01

    Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and the proper treatment of measurement error has made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and…

  11. Social Studies Curriculum Guidelines.

    ERIC Educational Resources Information Center

    Manson, Gary; And Others

    These guidelines, which set standards for social studies programs K-12, can be used to update existing programs or may serve as a baseline for further innovation. The first section, "A Basic Rationale for Social Studies Education," identifies the theoretical assumptions basic to the guidelines as knowledge, thinking, valuing, social participation,…

  12. E-Portfolio Evaluation and Vocabulary Learning: Moving from Pedagogy to Andragogy

    ERIC Educational Resources Information Center

    Sharifi, Maryam; Soleimani, Hassan; Jafarigohar, Manoochehr

    2017-01-01

    Current trends in the field of educational technology indicate a shift in pedagogical assumptions and theoretical frameworks that favor active involvement of self-directed learners in a constructivist environment. This study probes the influence of electronic portfolio evaluation on vocabulary learning of Iranian university students and the…

  13. Uncertainties and understanding of experimental and theoretical results regarding reactions forming heavy and superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.

    2018-02-01

    Experimental and theoretical results for the fusion probability P_CN of reactants in the entrance channel and the survival probability W_sur against fission during deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of P_CN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of the assumptions and procedures used in the analysis of experimental data, and also the limits of validity of theoretical results obtained with phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.

  14. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum-likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
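
    The equal-error-utility decision rule can be made concrete with a small sketch: all incorrect decisions under the same true hypothesis share one utility, and the decision maximizing expected utility is taken for each observation. The class-conditional densities, priors, and utility values below are hypothetical and the observation is one-dimensional for simplicity; this is an illustration of the assumption, not the authors' model.

    ```python
    # Hedged sketch of a maximum-expected-utility three-class decision under the
    # equal-error-utility assumption: within each true hypothesis, both incorrect
    # decisions share a utility (here 0). Densities, priors, and utilities are hypothetical.
    import numpy as np
    from scipy.stats import norm

    priors = np.array([0.5, 0.3, 0.2])            # hypothetical class priors
    means, sds = np.array([0.0, 1.5, 3.0]), 1.0   # hypothetical 1-D observation model

    # U[d, h] = utility of deciding class d when hypothesis h is true.
    U = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    def decide(x):
        """Return the class index maximising the expected utility for observation x."""
        posterior = priors * norm.pdf(x, loc=means, scale=sds)
        posterior /= posterior.sum()
        return int(np.argmax(U @ posterior))      # expected utility of each decision

    for x in (-0.5, 1.2, 2.8):
        print(f"x = {x:+.1f} -> decide class {decide(x)}")
    ```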

  15. Elasticity reconstruction: Beyond the assumption of local homogeneity

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Daire, Jean-Luc; Van Beers, Bernard E.; Vilgrain, Valerie

    2010-07-01

    Elasticity imaging is a novel domain which is currently gaining significant interest in the medical field. Most inversion techniques are based on the homogeneity assumption, i.e. the local spatial derivatives of the complex-shear modulus are ignored. This analysis presents an analytic approach in order to overcome this limitation, i.e. first order spatial derivatives of the real-part of the complex-shear modulus are taken into account. Resulting distributions in a gauged breast lesion phantom agree very well with the theoretical expectations. An in-vivo example of a cholangiocarcinoma demonstrates that the new approach provides maps of the viscoelastic properties which agree much better with expectations from anatomy.
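
    For contrast, the conventional local-homogeneity inversion that the abstract moves beyond can be sketched in a few lines: the spatial derivatives of the shear modulus are ignored and the modulus is recovered point-wise from the Helmholtz relation. The synthetic displacement field and material values below are hypothetical and the example is one-dimensional.

    ```python
    # Hedged sketch of the standard local-homogeneity ("algebraic Helmholtz")
    # inversion the abstract improves upon: mu = -rho * omega^2 * u / laplacian(u),
    # evaluated point-wise. Synthetic 1-D data; all values are hypothetical.
    import numpy as np

    rho, freq, mu_true = 1000.0, 60.0, 3000.0   # kg/m^3, Hz, Pa (hypothetical)
    omega = 2 * np.pi * freq

    # Synthetic plane shear wave u(x) = cos(k x), with k set by mu_true.
    x = np.linspace(0.0, 0.2, 2001)
    k = omega / np.sqrt(mu_true / rho)
    u = np.cos(k * x)

    lap_u = np.gradient(np.gradient(u, x), x)   # numerical second derivative
    interior = slice(10, -10)                   # avoid edge effects of the stencil
    mu_est = -rho * omega**2 * u[interior] / lap_u[interior]
    print(f"median recovered shear modulus: {np.median(mu_est):.0f} Pa (true {mu_true:.0f} Pa)")
    ```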

  16. Theoretical aerodynamic characteristics of a family of slender wing-tail-body combinations

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard; Byrd, Paul F

    1951-01-01

    The aerodynamic characteristics of an airplane configuration composed of a swept-back, nearly constant chord wing and a triangular tail mounted on a cylindrical body are presented. The analysis is based on the assumption that the free-stream Mach number is near unity or that the configuration is slender. The calculations for the tail are made on the assumption that the vortex system trailing back from the wing is either a sheet lying entirely in the plane of the flat tail surface or has completely "rolled up" into two point vortices that lie either in, above, or below the plane of the tail surface.

  17. Validity of the mockwitness paradigm: testing the assumptions.

    PubMed

    McQuiston, Dawn E; Malpass, Roy S

    2002-08-01

    Mockwitness identifications are used to provide a quantitative measure of lineup fairness. Some theoretical and practical assumptions of this paradigm have not been studied in terms of mockwitnesses' decision processes and procedural variation (e.g., instructions, lineup presentation method), and the current experiment was conducted to empirically evaluate these assumptions. Four hundred and eighty mockwitnesses were given physical information about a culprit, received 1 of 4 variations of lineup instructions, and were asked to identify the culprit from either a fair or unfair sequential lineup containing 1 of 2 targets. Lineup bias estimates varied as a result of lineup fairness and the target presented. Mockwitnesses generally reported that the target's physical description was their main source of identifying information. Our findings support the use of mockwitness identifications as a useful technique for sequential lineup evaluation, but only for mockwitnesses who selected only 1 lineup member. Recommendations for the use of this evaluation procedure are discussed.

  18. Twin studies in psychiatry and psychology: science or pseudoscience?

    PubMed

    Joseph, Jay

    2002-01-01

    Twin studies are frequently cited in support of the influence of genetic factors for a wide range of psychiatric conditions and psychological trait differences. The most common method, known as the classical twin method, compares the concordance rates or correlations of reared-together identical (MZ) vs. reared-together same-sex fraternal (DZ) twins. However, drawing genetic inferences from MZ-DZ comparisons is problematic due to methodological problems and questionable assumptions. It is argued that the main theoretical assumption of the twin method--known as the "equal environment assumption"--is not tenable. The twin method is therefore of doubtful value as an indicator of genetic influences. Studies of reared-apart twins are discussed, and it is noted that these studies are also vulnerable to methodological problems and environmental confounds. It is concluded that there is little reason to believe that twin studies provide evidence in favor of genetic influences on psychiatric disorders and human behavioral differences.

  19. Causal inferences on the effectiveness of complex social programs: Navigating assumptions, sources of complexity and evaluation design challenges.

    PubMed

    Chatterji, Madhabi

    2016-12-01

    This paper explores avenues for navigating the evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs, threatening assumptions of textbook-recommended experimental designs for performing impact evaluations. Theoretically supported, alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include sequential refinement of the evaluation design through systems thinking; systems-informed logic modeling; and use of extended-term, mixed-methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question framing, and the selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Colp, S. Mitchell

    2018-01-01

    Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…

  1. Human Visual System as a Double-Slit Single Photon Interference Sensor: A Comparison between Modellistic and Biophysical Tests

    PubMed Central

    Pizzi, Rita; Wang, Rui; Rossetti, Danilo

    2016-01-01

    This paper describes a computational approach to the theoretical problems involved in Young's single-photon double-slit experiment, focusing on a simulation of this experiment in the absence of measuring devices. Specifically, the human visual system is used in place of a photomultiplier or similar apparatus. Beginning with the assumption that the human eye perceives light in the presence of very few photons, we measure human eye performance as a sensor in a double-slit one-photon-at-a-time experimental setup. To interpret the results, we implement a simulation algorithm and compare its results with those of human subjects under identical experimental conditions. In order to evaluate exactly the perceptive parameters, which vary depending on the light conditions and on the subject’s sensitivity, we first review the existing literature on the biophysics of the human eye in the presence of a dim light source, and then use the known values of the experimental variables to set the parameters of the computational simulation. The results of the simulation and their comparison with the experiment involving human subjects are reported and discussed. It is found that, while the computer simulation indicates that the human eye has the capacity to detect the corpuscular nature of photons under these conditions, this was not observed in practice. The possible reasons for the difference between theoretical prediction and experimental results are discussed. PMID:26816029
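
    The accumulation of single-photon detections into an interference pattern can be mimicked with a generic Monte Carlo sketch: arrival positions are rejection-sampled from a Fraunhofer double-slit intensity profile, one photon at a time. This is not the authors' simulation algorithm, and the wavelength, slit geometry, screen distance, and photon count are hypothetical.

    ```python
    # Generic, hedged Monte Carlo sketch of one-photon-at-a-time accumulation behind
    # a double slit (not the authors' algorithm). All geometry values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    lam, d, a, L = 550e-9, 0.25e-3, 0.05e-3, 1.0   # wavelength, slit spacing, slit width, screen distance (m)

    def intensity(x):
        """Normalised Fraunhofer double-slit intensity at screen position x."""
        beta = np.pi * a * x / (lam * L)           # single-slit (envelope) phase
        delta = np.pi * d * x / (lam * L)          # two-slit interference phase
        return np.sinc(beta / np.pi) ** 2 * np.cos(delta) ** 2

    # Rejection-sample individual photon arrival positions from the intensity profile.
    x_max, n_photons, positions = 0.01, 2_000, []
    while len(positions) < n_photons:
        x = rng.uniform(-x_max, x_max)
        if rng.uniform(0.0, 1.0) < intensity(x):
            positions.append(x)

    hist, _ = np.histogram(positions, bins=40, range=(-x_max, x_max))
    print("photon counts per bin (fringes emerge as counts accumulate):")
    print(hist)
    ```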

  2. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
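
    The structure of such a likelihood can be illustrated with a generic stand-in: each pull contributes the log of the force-dependent rate at rupture minus the cumulative hazard integrated along its own measured force-time trace, so no single global force model has to be assumed. The Bell-type rate law, parameter values, and synthetic traces below are hypothetical, and this is not the authors' estimator.

    ```python
    # Hedged, generic sketch of a per-trace maximum-likelihood fit of unfolding
    # kinetics: each pull uses its own sampled force-time trace (a piecewise-linear
    # stand-in for the measured force), a Bell-type rate, and a survival-analysis
    # likelihood. Rate law, parameters, and synthetic data are hypothetical.
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.optimize import minimize

    kBT = 4.11  # pN*nm at room temperature

    def rate(force, k0, dx):
        """Bell-type unfolding rate k(F) = k0 * exp(F * dx / kBT)."""
        return k0 * np.exp(force * dx / kBT)

    def neg_log_likelihood(params, traces):
        """traces: list of (t, F) arrays, each ending at the observed rupture time."""
        log_k0, dx = params
        k0, nll = np.exp(log_k0), 0.0
        for t, F in traces:
            k = rate(F, k0, dx)
            nll -= np.log(k[-1]) - trapezoid(k, t)   # log-rate at rupture minus cumulative hazard
        return nll

    # Synthetic example traces: 100 pN/s ramps with hypothetical rupture times.
    rng = np.random.default_rng(4)
    traces = []
    for _ in range(50):
        t = np.linspace(0.0, rng.uniform(0.5, 2.0), 200)
        traces.append((t, 100.0 * t))

    fit = minimize(neg_log_likelihood, x0=[np.log(0.1), 0.1], args=(traces,))
    print("estimated k0 [1/s]:", np.exp(fit.x[0]), " estimated dx [nm]:", fit.x[1])
    ```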

  3. Hybrid Perovskites: Prospects for Concentrator Solar Cells.

    PubMed

    Lin, Qianqian; Wang, Zhiping; Snaith, Henry J; Johnston, Michael B; Herz, Laura M

    2018-04-01

    Perovskite solar cells have shown a meteoric rise of power conversion efficiency and a steady pace of improvements in their stability of operation. Such rapid progress has triggered research into approaches that can boost efficiencies beyond the Shockley-Queisser limit stipulated for a single-junction cell under normal solar illumination conditions. The tandem solar cell architecture is one concept here that has recently been successfully implemented. However, the approach of solar concentration has not been sufficiently explored so far for perovskite photovoltaics, despite its frequent use in the area of inorganic semiconductor solar cells. Here, the prospects of hybrid perovskites are assessed for use in concentrator solar cells. Solar cell performance parameters are theoretically predicted as a function of solar concentration levels, based on representative assumptions of charge-carrier recombination and extraction rates in the device. It is demonstrated that perovskite solar cells can fundamentally exhibit appreciably higher energy-conversion efficiencies under solar concentration, where they are able to exceed the Shockley-Queisser limit and exhibit strongly elevated open-circuit voltages. It is therefore concluded that sufficient material and device stability under increased illumination levels will be the only significant challenge to perovskite concentrator solar cell applications.
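
    The basic reason concentration raises the open-circuit voltage can be shown with the standard diode-model relation, in which Voc grows logarithmically with the illumination level. The 1-sun Voc and ideality factor used below are hypothetical example values, not data from this paper.

    ```python
    # Standard diode-model illustration (not the paper's data): Voc rises roughly as
    # Voc(X) = Voc(1 sun) + n * (kT/q) * ln(X) under X-fold solar concentration.
    # The 1-sun Voc and ideality factor are hypothetical example values.
    import math

    kT_over_q = 0.02585   # thermal voltage at ~300 K, in volts
    voc_1sun = 1.15       # V, hypothetical 1-sun open-circuit voltage
    n_ideality = 1.5      # hypothetical diode ideality factor

    for concentration in (1, 10, 100, 1000):
        voc = voc_1sun + n_ideality * kT_over_q * math.log(concentration)
        print(f"{concentration:>5} suns: Voc ~ {voc:.3f} V")
    ```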

  4. Hybrid Perovskites: Prospects for Concentrator Solar Cells

    PubMed Central

    Lin, Qianqian; Wang, Zhiping; Snaith, Henry J.; Johnston, Michael B.

    2018-01-01

    Abstract Perovskite solar cells have shown a meteoric rise of power conversion efficiency and a steady pace of improvements in their stability of operation. Such rapid progress has triggered research into approaches that can boost efficiencies beyond the Shockley–Queisser limit stipulated for a single‐junction cell under normal solar illumination conditions. The tandem solar cell architecture is one concept here that has recently been successfully implemented. However, the approach of solar concentration has not been sufficiently explored so far for perovskite photovoltaics, despite its frequent use in the area of inorganic semiconductor solar cells. Here, the prospects of hybrid perovskites are assessed for use in concentrator solar cells. Solar cell performance parameters are theoretically predicted as a function of solar concentration levels, based on representative assumptions of charge‐carrier recombination and extraction rates in the device. It is demonstrated that perovskite solar cells can fundamentally exhibit appreciably higher energy‐conversion efficiencies under solar concentration, where they are able to exceed the Shockley–Queisser limit and exhibit strongly elevated open‐circuit voltages. It is therefore concluded that sufficient material and device stability under increased illumination levels will be the only significant challenge to perovskite concentrator solar cell applications. PMID:29721426

  5. 10 CFR 436.17 - Establishing energy or water cost data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... escalation rate assumptions under § 436.14. When energy costs begin to accrue at a later time, subtract the... assumptions under § 436.14. When water costs begin to accrue at a later time, subtract the present value of... Methodology and Procedures for Life Cycle Cost Analyses § 436.17 Establishing energy or water cost data. (a...

  6. Political Assumptions Underlying Pedagogies of National Education: The Case of Student Teachers Teaching 'British Values' in England

    ERIC Educational Resources Information Center

    Sant, Edda; Hanley, Chris

    2018-01-01

    Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…

  7. Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

    PubMed

    Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul

    2016-01-15

    Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independence assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independence assumption, we present a new statistical parametric mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedastic variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
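
    The core idea, representing each voxel's time course as a sparse combination of a small set of global temporal dynamics learned from data concatenated across subjects, can be sketched with an off-the-shelf dictionary learner. The sketch below is not the authors' Sparse SPM implementation; the data are random stand-ins and the dimensions and hyperparameters are hypothetical.

    ```python
    # Hedged sketch of group sparse dictionary learning on voxel time series
    # (not the authors' Sparse SPM code). Dimensions, data, and hyperparameters
    # are hypothetical stand-ins.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(5)
    n_timepoints, n_voxels, n_atoms = 120, 500, 10

    # Stand-in for resting-state time series concatenated across subjects (voxels x time).
    X = rng.standard_normal((n_voxels, n_timepoints))

    learner = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=50,
                                 transform_algorithm="lasso_lars", random_state=0)
    sparse_codes = learner.fit_transform(X)   # (n_voxels, n_atoms) sparse loadings
    dictionary = learner.components_          # (n_atoms, n_timepoints) global dynamics

    print("dictionary shape:", dictionary.shape)
    print("mean fraction of nonzero loadings per voxel:",
          (np.abs(sparse_codes) > 1e-8).mean())
    ```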

  8. Current demographics suggest future energy supplies will be inadequate to slow human population growth.

    PubMed

    DeLong, John P; Burger, Oskar; Hamilton, Marcus J

    2010-10-05

    Influential demographic projections suggest that the global human population will stabilize at about 9-10 billion people by mid-century. These projections rest on two fundamental assumptions. The first is that the energy needed to fuel development and the associated decline in fertility will keep pace with energy demand far into the future. The second is that the demographic transition is irreversible such that once countries start down the path to lower fertility they cannot reverse to higher fertility. Both of these assumptions are problematic and may have an effect on population projections. Here we examine these assumptions explicitly. Specifically, given the theoretical and empirical relation between energy-use and population growth rates, we ask how the availability of energy is likely to affect population growth through 2050. Using a cross-country data set, we show that human population growth rates are negatively related to per-capita energy consumption, with zero growth occurring at ∼13 kW, suggesting that the global human population will stop growing only if individuals have access to this amount of power. Further, we find that current projected future energy supply rates are far below the supply needed to fuel a global demographic transition to zero growth, suggesting that the predicted leveling-off of the global population by mid-century is unlikely to occur, in the absence of a transition to an alternative energy source. Direct consideration of the energetic constraints underlying the demographic transition results in a qualitatively different population projection than produced when the energetic constraints are ignored. We suggest that energetic constraints be incorporated into future population projections.

  9. Estimate of the critical exponents from the field-theoretical renormalization group: mathematical meaning of the 'Standard Values'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogorelov, A. A.; Suslov, I. M.

    2008-06-15

    New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly imply the smoothness of the coefficient functions. This assumption is open for discussion in view of the existence of the oscillating contribution to the coefficient functions. The appropriate interpretation of this contribution is necessary both for the estimation of the systematic errors of the standard values and for a further increase in accuracy.

  10. Charting the future course of rural health and remote health in Australia: Why we need theory.

    PubMed

    Bourke, Lisa; Humphreys, John S; Wakerman, John; Taylor, Judy

    2010-04-01

    This paper argues that rural and remote health is in need of theoretical development. Based on the authors' discussions, reflections and critical analyses of literature, this paper proposes key reasons why rural and remote health warrants the development of theoretical frameworks. The paper cites five reasons why theory is needed: (i) theory provides an approach for how a topic is studied; (ii) theory articulates key assumptions in knowledge development; (iii) theory systematises knowledge, enabling it to be transferable; (iv) theory provides predictability; and (v) theory enables comprehensive understanding. This paper concludes with a call for theoretical development in both rural and remote health to expand its knowledge and be more relevant to improving health care for rural Australians.

  11. Harvesting Atlantic Cod under Climate Variability

    NASA Astrophysics Data System (ADS)

    Oremus, K. L.

    2016-12-01

    Previous literature links the growth of a fishery to climate variability. This study uses an age-structured bioeconomic model to compare optimal harvest in the Gulf of Maine Atlantic cod fishery under a variable climate versus a static climate. The optimal harvest path depends on the relationship between fishery growth and the interest rate, with higher interest rates dictating greater harvests now at the cost of long-term stock sustainability. Given the time horizon of a single generation of fishermen under assumptions of a static climate, the model finds that the economically optimal management strategy is to harvest the entire stock in the short term and allow the fishery to collapse. However, if the biological growth of the fishery is assumed to vary with climate conditions, such as the North Atlantic Oscillation, there will always be pulses of high growth in the stock. During some of these high-growth years, the growth of the stock and its economic yield can exceed the growth rate of the economy even under high interest rates. This implies that it is not economically optimal to exhaust the New England cod fishery if NAO is included in the biological growth function. This finding may have theoretical implications for the management of other renewable yet exhaustible resources whose growth rates are subject to climate variability.

  12. Negotiating School Conflicts to Prevent Student Delinquency.

    ERIC Educational Resources Information Center

    De Cecco, John P.; Roberts, John K.

    One of 52 theoretical papers on school crime and its relation to poverty, this chapter presents a model of negotiation as a means to resolve school conflict. The assumption is that school conflict is inevitable, but student delinquency is not. Delinquent behavior results from the way that the school deals with conflict. Students resort to…

  13. Quantitative Differences in Retest Effects across Different Methods Used to Construct Alternate Test Forms

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2013-01-01

    Allowing respondents to retake a cognitive ability test has been shown to increase their test scores. Several theoretical models have been proposed to explain this effect, which make distinct assumptions regarding the measurement invariance of psychometric tests across test administration sessions with regard to narrower cognitive abilities and general…

  14. Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items

    ERIC Educational Resources Information Center

    Aybek, Eren Can; Demirtasli, R. Nukhet

    2017-01-01

    This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. Besides that, it aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…

  15. Analyzing Data from a Pretest-Posttest Control Group Design: The Importance of Statistical Assumptions

    ERIC Educational Resources Information Center

    Zientek, Linda; Nimon, Kim; Hammack-Brown, Bryn

    2016-01-01

    Purpose: Among the gold standards in human resource development (HRD) research are studies that test theoretically developed hypotheses and use experimental designs. A somewhat typical experimental design would involve collecting pretest and posttest data on individuals assigned to a control or experimental group. Data from such a design that…

  16. Reflective Pedagogy: The Integration of Methodology and Subject-Matter Content in a Graduate-Level Course

    ERIC Educational Resources Information Center

    Jakeman, Rick C.; Henderson, Markesha M.; Howard, Lionel C.

    2017-01-01

    This article presents a critical reflection on how we, instructors of a graduate-level course in higher education administration, sought to integrate theoretical and subject-matter content and research methodology. Our reflection, guided by autoethnography and teacher reflection, challenged both our assumptions about curriculum design and our…

  17. Complexity, Methodology and Method: Crafting a Critical Process of Research

    ERIC Educational Resources Information Center

    Alhadeff-Jones, Michel

    2013-01-01

    This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…

  18. Daddy, I Know What the Story Means--Now, I Just Need Help with the Words.

    ERIC Educational Resources Information Center

    Bintz, William

    1998-01-01

    Describes an instance of literacy learning involving the author and his two daughters at a local bookstore. Discusses how this literacy event challenged the author to consider alternative assumptions about reading, learning to read, and the relationship between reading and literacy. Offers lingering questions about what theoretical assumptions…

  19. On Knowing: Art and Visual Culture.

    ERIC Educational Resources Information Center

    Duncum, Paul, Ed.; Bracey, Ted, Ed.

    The question of whether or not art can be distinguished from all that is called visual culture has become central to art theoretical discussion over recent decades. This collection of essays and responses addresses this question with the specific aim of making sense of an epistemology of art, with the assumption that nothing less than a persuasive…

  20. What Are We Looking For?--Pro Critical Realism in Text Interpretation

    ERIC Educational Resources Information Center

    Siljander, Pauli

    2011-01-01

    A visible role in the theoretical discourses on education has been played in the last couple of decades by the constructivist epistemologies, which have questioned the basic assumptions of realist epistemologies. The increased popularity of interpretative approaches especially has put the realist epistemologies on the defensive. Basing itself on…

  1. Unequal Ecological Exchange and Environmental Degradation: A Theoretical Proposition and Cross-National Study of Deforestation, 1990-2000

    ERIC Educational Resources Information Center

    Jorgenson, Andrew K.

    2006-01-01

    Political-economic sociologists have long investigated the dynamics and consequences of international trade. With few exceptions, this area of inquiry ignores the possible connections between trade and environmental degradation. In contrast, environmental sociologists have made several assumptions about the environmental impacts of international…

  2. Impact of Handwriting Training on Fluency, Spelling and Text Quality among Third Graders

    ERIC Educational Resources Information Center

    Hurschler Lichtsteiner, Sibylle; Wicki, Werner; Falmann, Péter

    2018-01-01

    As recent studies and theoretical assumptions suggest that the quality of texts composed by children and adolescents is affected by their transcription skills, this experimental field trial aims at investigating the impact of combined handwriting/spelling training on fluency, spelling and text quality among normally developing 3rd graders…

  3. Poisson sampling - The adjusted and unadjusted estimator revisited

    Treesearch

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u", is shown to be incorrect. Some well-known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
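
    A hedged simulation sketch of the comparison: under Poisson sampling each unit enters the sample independently with its own inclusion probability, the unadjusted estimator is the usual Horvitz-Thompson sum, and the adjusted estimator rescales it by expected over realized sample size (a ratio-of-means form). The population, inclusion probabilities, and sizes below are illustrative assumptions, not the authors' data.

        # Hedged sketch: compare unadjusted and adjusted estimators of a population total
        # under Poisson sampling (independent Bernoulli inclusion of each unit).
        import numpy as np

        rng = np.random.default_rng(2)
        y = rng.lognormal(mean=3.0, sigma=0.8, size=2000)   # unit values (e.g., tree volumes)
        p = np.clip(y / y.sum() * 200, 0.01, 0.9)           # inclusion probs, roughly size-proportional
        expected_n = p.sum()

        unadjusted, adjusted = [], []
        for _ in range(5000):
            s = rng.random(y.size) < p                      # one Poisson sample
            ht = np.sum(y[s] / p[s])                        # unadjusted (Horvitz-Thompson) estimate
            unadjusted.append(ht)
            n = s.sum()
            if n > 0:
                adjusted.append(ht * expected_n / n)        # sample-size adjusted estimate

        true_total = y.sum()
        for name, est in [("unadjusted", unadjusted), ("adjusted", adjusted)]:
            est = np.array(est)
            print(name, "bias %.1f  std %.1f" % (est.mean() - true_total, est.std()))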

  4. Compensatory dynamics are rare in natural ecological communities.

    Treesearch

    J.E. Houlahan; D.J. Currie; K. Cottenie; G.S. Cumming; S.K.M. Ernest; C.S. Findlay; S.D. Fuhlendorf; R.D. Stevens; T.J. Willis; I.P. Woiwod; S.M. Wondzell

    2007-01-01

    Hubbell recently presented a theoretical framework, neutral models, for explaining large-scale patterns of community structure. This theory rests on the foundation of zero-sum ecological communities, that is, the assumption that the number of individuals in a community stays constant over time. If community abundances stay relatively constant (i.e. approximating the...

  5. The Significance of Motivation in Student-Centred Learning: A Reflective Case Study

    ERIC Educational Resources Information Center

    Maclellan, Effie

    2008-01-01

    The theoretical underpinnings of student-centred learning suggest motivation to be an integral component. However, lack of clarification of what is involved in motivation in education often results in unchallenged assumptions that fail to recognise that what motivates some students may alienate others. This case study, using socio-cognitive…

  6. Cautionary Tales on Interrupting Children's Play: A Study from Sweden

    ERIC Educational Resources Information Center

    Weldemariam, Kassahun Tigistu

    2014-01-01

    Play is a natural and significant aspect of children's learning and development. Adults can be important to children's play, as they act as "play agents." Their involvement significantly influences the quality of the play activities in which children engage. The author briefly reviews the theoretical assumptions about adults' role in…

  7. Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Chan, Wai

    2005-01-01

    Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…

  8. The Effective Elementary School Principal: Theoretical Bases, Research Findings and Practical Implications.

    ERIC Educational Resources Information Center

    Burnett, I. Emett, Jr.; Pankake, Anita M.

    Although much of the current school reform movement relies on the basic assumption of effective elementary school administration, insufficient effort has been made to synthesize key concepts found in organizational theory and management studies with relevant effective schools research findings. This paper attempts such a synthesis to help develop…

  9. The State as a Work of Art: Statecraft for the 21st Century.

    ERIC Educational Resources Information Center

    Caldwell, Lynton K.

    1996-01-01

    Maintains that in the future the state will have to move beyond the politics of particular advantage (individual and group rights) to politics serving the general advantage (environmental concerns, economic development). Argues that current politics are dangerously outmoded in everything from their theoretical assumptions to data collecting. (MJP)

  10. Pedandragogy: A Way Forward to Self-Engaged Learning

    ERIC Educational Resources Information Center

    Samaroo, Selwyn; Cooper, Eleanor; Green, Tim

    2013-01-01

    A debate that has engaged the attention of educators and scores of intellectuals is the longstanding issue of pedagogy versus andragogy. The nature of the debate, given the interdisciplinary theoretical assumptions that underpin the issue, has had a polarizing effect on these scholars; as a result, there has been the emergence of competing…

  11. Extending the Challenge-Hindrance Model of Occupational Stress: The Role of Appraisal

    ERIC Educational Resources Information Center

    Webster, Jennica R.; Beehr, Terry A.; Love, Kevin

    2011-01-01

    Interest regarding the challenge-hindrance occupational stress model has increased in recent years, however its theoretical foundation has not been tested. Drawing from the transactional theory of stress, this study tests the assumptions made in past research (1) that workload and responsibility are appraised as challenges and role ambiguity and…

  12. The Theoretical Basis of Experience-Based Career Education.

    ERIC Educational Resources Information Center

    Jenks, C. Lynn

    This study analyzes the extent to which the assumptions and procedures of the Experience-Based Career Education model (EBCE) as developed by the Far West Laboratory (FWL) are supported by empirical data and by recognized scholars in educational theory. The analysis is presented as relevant to the more general problem: the limited availability of…

  13. Out on a Limb: The Efficacy of Teacher Induction in Secondary Schools

    ERIC Educational Resources Information Center

    Shockley, Robert; Watlington, Eliah; Felsher, Rivka

    2013-01-01

    This article reports the results of a qualitative meta-analysis study of the research and literature on the efficacy of teacher induction on the retention of high-quality secondary school teachers and challenges current assumptions about the efficacy of induction despite the proliferation of induction programs nationwide. A theoretical model for…

  14. Mental Retardation: Definition, Classification, and Systems of Supports. 10th Edition.

    ERIC Educational Resources Information Center

    Luckasson, Ruth; Borthwick-Duffy, Sharon; Buntinx, Wil H. E.; Coulter, David L.; Craig, Ellis M.; Reeve, Alya; Schalock, Robert L.; Snell, Martha E.; Spitalnik, Deborah M.; Spreat, Scott; Tasse, Marc J.

    This manual, the 10th edition of a regularly published definition and classification work on mental retardation, presents five key assumptions upon which the definition of mental retardation is based and a theoretical model of five essential dimensions that explain mental retardation and how to use the companion system. These dimensions include…

  15. The Extended Parallel Process Model: Illuminating the Gaps in Research

    ERIC Educational Resources Information Center

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  16. Tending to Change: Toward a Situated Model of Affinity Spaces

    ERIC Educational Resources Information Center

    Bommarito, Dan

    2014-01-01

    The concept of affinity spaces, a theoretical construct used to analyze literate activity from a spatial perspective, has gained popularity among scholars of literacy studies and, particularly, video-game studies. This article seeks to expand current notions of affinity spaces by identifying key assumptions that have limited researchers'…

  17. Taxometric Analyses of Specific Language Impairment in 3- And 4-Year-Old Children.

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.

    2004-01-01

    Specific language impairment (SLI), like many diagnostic labels for complex behavioral conditions, is often assumed to define a category of children who differ not only in degree but also in kind from children developing language normally. Although this assumption has important implications for theoretical models and clinical approaches, its…

  18. Critical Comments on the General Model of Instructional Communication

    ERIC Educational Resources Information Center

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  19. Revisiting a Progressive Pedagogy. The Developmental-Interaction Approach. SUNY Series, Early Childhood Education: Inquiries and Insights.

    ERIC Educational Resources Information Center

    Nager, Nancy, Ed.; Shapiro, Edna K., Ed.

    This book reviews the history of the developmental-interactive approach, a formulation rooted in developmental psychology and educational practice, progressively informing educational thinking since the early 20th century. The book describes and analyzes key assumptions and assesses the compatibility of new theoretical approaches, focuses on…

  20. The Role of Theory in Teacher Education: Reconsidered from a Student Teacher Perspective

    ERIC Educational Resources Information Center

    Sjølie, Ela

    2014-01-01

    With the persistent criticism of teacher education as a backdrop, this article explores the common perception that teacher education is too theoretical. This article takes the view that the student teachers' assumptions regarding the concept of theory affect how they engage with theory during initial teacher education. Using a qualitative…

  1. Organizational Cynicism, School Culture, and Academic Achievement: The Study of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Karadag, Engin; Kilicoglu, Gökhan; Yilmaz, Derya

    2014-01-01

    The purpose of this study is to test, using structural equation modeling, constructed theoretical models in which primary school teachers' perceptions of organizational cynicism affect school culture and academic achievement. With the assumption that there is a cause-effect relationship among three main variables, the study was constructed with…

  2. The Didactic Principles and Their Applications in the Didactic Activity

    ERIC Educational Resources Information Center

    Marius-Costel, Esi

    2010-01-01

    The evaluation and reevaluation of the fundamental didactic principles suppose the acceptance at the level of an instructive-educative activity of a new educational paradigm. Thus, its understanding implies an assumption at a conceptual-theoretical level of some approaches where the didactic aspects find their usefulness by relating to value…

  3. Level I and Level II Abilities: Some Theoretical Reinterpretations

    ERIC Educational Resources Information Center

    Jarman, Ronald F.

    1978-01-01

    Some of the major assumptions and premises of Arthur Jensen's theory of Level I and Level II cognitive abilities are examined using a model of cognitive abilities recently proposed by Das, Kirby & Jarman (1975) and known as simultaneous and successive syntheses. Four areas are discussed: quantity versus type of information processing, internal…

  4. Inventing a Discipline: Rhetoric Scholarship in Honor of Richard E. Young.

    ERIC Educational Resources Information Center

    Goggin, Maureen Daly, Ed.

    Heeding the call of noted rhetoric scholar Richard E. Young to engage in serious, scholarly investigations of the assumptions that underlie established practices and habits about writing, the contributors to this critical volume study a diverse array of disciplinary issues, situate their work in a wide matrix of theoretical perspectives, and…

  5. A Comparison of Classification Approaches for Cyberbullying and Traditional Bullying Using Data from Six European Countries

    ERIC Educational Resources Information Center

    Schultze-Krumbholz, Anja; Göbel, Kristin; Scheithauer, Herbert; Brighi, Antonella; Guarini, Annalisa; Tsorbatzoudis, Haralambos; Barkoukis, Vassilis; Pyzalski, Jacek; Plichta, Piotr; Del Rey, Rosario; Casas, José A.; Thompson, Fran; Smith, Peter K.

    2015-01-01

    In recently published studies on cyberbullying, students are frequently categorized into distinct (cyber)bully and (cyber)victim clusters based on theoretical assumptions and arbitrary cut-off scores adapted from traditional bullying research. The present study identified involvement classes empirically using latent class analysis (LCA), to…

  6. Non-Formal Education: A Major Educational Force in the Postmodern Era

    ERIC Educational Resources Information Center

    Romi, Shlomo; Schmida, Mirjam

    2009-01-01

    This study aims to describe the current position of non-formal education (NFE) as a major educational force in the postmodern world, and to analyze its philosophical and theoretical assumptions. Far from being "supplementary education" or "extracurricular activities", NFE has developed into a worldwide educational industry. However, it has yet to…

  7. Writing and Reading: The Transactional Theory. Technical Report No. 416.

    ERIC Educational Resources Information Center

    Rosenblatt, Louise M.

    Because any reading or writing research project or teaching method rests on some kind of epistemological assumptions and some models of reading and writing processes, a coherent theoretical approach to the interrelationships of the reading and writing processes is needed. In light of the post-Einsteinian scientific paradigm and Peircean semiotics,…

  8. The Politics of Language. Lektos: Interdisciplinary Working Papers in Language Sciences, Vol. 3, No. 2.

    ERIC Educational Resources Information Center

    St. Clair, Robert N.

    The areas of language planning and the language of oppression are discussed within the theoretical framework of existential sociolinguistics. This tradition is contrasted with the contemporary models of positivism with its assumptions about constancy and quantification. The proposed model brings in social history, intent, consciousness, and other…

  9. Students' Representations of Scientific Practice during a Science Internship: Reflections from an Activity-Theoretic Perspective

    ERIC Educational Resources Information Center

    Hsu, Pei-Ling; van Eijck, Michiel; Roth, Wolff-Michael

    2010-01-01

    Working at scientists' elbows is one suggestion that educators make to improve science education, because such "authentic experiences" provide students with various types of science knowledge. However, there is an ongoing debate in the literature about the assumption that authentic science activities can enhance students' understandings…

  10. Determining the size of a complete disturbance landscape: multi-scale, continental analysis of forest change

    Treesearch

    Brian Buma; Jennifer K Costanza; Kurt Riitters

    2017-01-01

    The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact...

  11. The Impact of Cultural Assumptions about Technology on Choctaw Heritage Preservation and Sharing

    ERIC Educational Resources Information Center

    Dolezal, Jake A.

    2013-01-01

    Neither the effects of information and communication technology (ICT) on culture nor the cultural roles of ICT are widely understood, particularly among marginalized ethno-cultures and indigenous people. One theoretical lens that has received attention outside of Native American studies is the theory of Information Technology Cultures, or "IT…

  12. Statistical characterization of spatial patterns of rainfall cells in extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Bacchi, Baldassare; Ranzi, Roberto; Borga, Marco

    1996-11-01

    The assumption of a particular type of distribution of rainfall cells in space is needed for the formulation of several space-time rainfall models. In this study, weather radar-derived rain rate maps are employed to evaluate different types of spatial organization of rainfall cells in storms through the use of distance functions and second-moment measures. In particular, the spatial point patterns of the local maxima of rainfall intensity are compared to a completely spatially random (CSR) point process by applying an objective distance measure. For all the analyzed radar maps the CSR assumption is rejected, indicating that at the resolution of the observation considered, rainfall cells are clustered. Therefore a theoretical framework for evaluating and fitting alternative models to the CSR is needed. This paper shows how the "reduced second-moment measure" of the point pattern can be employed to estimate the parameters of a Neyman-Scott model and to evaluate the degree of adequacy to the experimental data. Some limitations of this theoretical framework, as well as its effectiveness in comparison with the use of scaling functions, are discussed.
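
    The second-moment machinery can be sketched directly: estimate Ripley's K (the reduced second-moment measure) for a point pattern and compare it with the CSR expectation K(r) = pi*r^2; clustering shows up as K(r) exceeding pi*r^2 at small r. The snippet below uses a synthetic clustered pattern and a naive estimator without edge correction, so it is an illustration rather than the paper's procedure.

        # Minimal sketch: naive estimate of Ripley's K for a clustered 2-D pattern,
        # compared with the CSR expectation pi * r^2. No edge correction is applied;
        # the points are a synthetic stand-in for radar-derived rain-cell maxima.
        import numpy as np

        rng = np.random.default_rng(3)
        side, n_parents, kids = 100.0, 15, 8
        parents = rng.uniform(0, side, size=(n_parents, 2))
        pts = np.vstack([p + rng.normal(0, 3.0, size=(kids, 2)) for p in parents])
        pts = pts[(pts >= 0).all(axis=1) & (pts <= side).all(axis=1)]

        lam = len(pts) / side**2                      # intensity (points per unit area)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)

        for r in (2.0, 5.0, 10.0):
            k_hat = (d < r).sum() / (len(pts) * lam)  # naive K estimate
            print(f"r={r:4.1f}  K_hat={k_hat:7.1f}  CSR pi*r^2={np.pi * r * r:7.1f}")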

  13. Direct observation limits on antimatter gravitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischler, Mark; Lykken, Joe; Roberts, Tom

    2008-06-01

    The proposed Antihydrogen Gravity experiment at Fermilab (P981) will directly measure the gravitational attraction g between antihydrogen and the Earth, with an accuracy of 1% or better. The following key question has been asked by the PAC: Is a possible 1% difference between g for antihydrogen and g for ordinary matter already ruled out by other evidence? This memo presents the key points of existing evidence, to answer whether such a difference is ruled out (a) on the basis of direct observational evidence; and/or (b) on the basis of indirect evidence, combined with reasoning based on strongly held theoretical assumptions. The bottom line is that there are no direct observations or measurements of gravitational asymmetry which address the antimatter sector. There is evidence which by indirect reasoning can be taken to rule out such a difference, but the analysis needed to draw that conclusion rests on models and assumptions which are in question for other reasons and are thus worth testing. There is no compelling evidence or theoretical reason to rule out such a difference at the 1% level.

  14. A new theoretical framework for modeling respiratory protection based on the beta distribution.

    PubMed

    Klausner, Ziv; Fattal, Eyal

    2014-08-01

    The problem of modeling respiratory protection is well known and has been dealt with extensively in the literature. Often the efficiency of respiratory protection is quantified in terms of penetration, defined as the proportion of an ambient contaminant concentration that penetrates the respiratory protection equipment. Typically, the penetration modeling framework in the literature is based on the assumption that penetration measurements follow the lognormal distribution. However, the analysis in this study leads to the conclusion that the lognormal assumption is not always valid, making it less adequate for analyzing respiratory protection measurements. This work presents a formulation of the problem from first principles, leading to a stochastic differential equation whose solution is the probability density function of the beta distribution. The data of respiratory protection experiments were reexamined, and indeed the beta distribution was found to provide the data a better fit than the lognormal. We conclude with a suggestion for a new theoretical framework for modeling respiratory protection. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
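
    A minimal sketch of the model comparison implied above: fit beta and lognormal distributions to penetration values in (0, 1) and compare them by AIC. The data are synthetic placeholders, not the respiratory-protection measurements analyzed in the paper.

        # Hedged sketch: fit beta and lognormal distributions to penetration fractions
        # and compare them by AIC. The sample below is synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        penetration = rng.beta(a=2.0, b=30.0, size=300)   # synthetic penetration fractions

        a, b, loc, scale = stats.beta.fit(penetration, floc=0, fscale=1)
        ll_beta = stats.beta.logpdf(penetration, a, b, loc, scale).sum()

        shape, loc_ln, scale_ln = stats.lognorm.fit(penetration, floc=0)
        ll_logn = stats.lognorm.logpdf(penetration, shape, loc_ln, scale_ln).sum()

        aic = lambda ll, k: 2 * k - 2 * ll
        print("AIC beta     :", round(aic(ll_beta, 2), 1))
        print("AIC lognormal:", round(aic(ll_logn, 2), 1))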

  15. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
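
    A simplified sketch of the first step described above: quantify pairwise nonlinear dependence with mutual information and cluster on an MI-based dissimilarity. Hierarchical clustering is used here only as a convenient stand-in for the paper's Dirichlet-process mixture step, and the data, MI estimator, and distance transform are all assumptions made for the example.

        # Simplified sketch: pairwise mutual information between variables, then clustering
        # on an MI-based dissimilarity (a stand-in for the DP-mixture step in the paper).
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(5)
        n = 400
        base = rng.normal(size=(n, 2))
        X = np.column_stack([
            base[:, 0], np.sin(3 * base[:, 0]),   # nonlinearly related pair
            base[:, 1], base[:, 1] ** 2,          # another related pair
            rng.normal(size=n),                   # noise variable
        ])

        p = X.shape[1]
        mi = np.zeros((p, p))
        for j in range(p):
            mi[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
        mi = (mi + mi.T) / 2                      # symmetrize the MI estimates
        np.fill_diagonal(mi, 0.0)

        dist = 1.0 - mi / mi.max()                # crude MI-based dissimilarity
        Z = linkage(dist[np.triu_indices(p, 1)], method="average")
        labels = fcluster(Z, t=3, criterion="maxclust")
        print("cluster labels per variable:", labels)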

  16. Impedance measurement of non-locally reactive samples and the influence of the assumption of local reaction.

    PubMed

    Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R

    2013-05-01

    In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.

  17. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement between the results arises from the basic assumptions that both codes use in their calculations. Both codes assume a "free" electron model for Compton interactions. This assumption underestimates the simulated results and undermines direct comparison between predicted and experimental spectra.

  18. Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.

    2012-06-21

    We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for second-order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.

  19. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  20. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

    We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field of view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel, single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, which includes realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV coupled with position sensitivity down to a few hundred eV should be achievable for a fully optimized device.

  1. Shock wave structure in rarefied polyatomic gases with large relaxation time for the dynamic pressure

    NASA Astrophysics Data System (ADS)

    Taniguchi, Shigeru; Arima, Takashi; Ruggeri, Tommaso; Sugiyama, Masaru

    2018-05-01

    The shock wave structure in rarefied polyatomic gases is analyzed based on extended thermodynamics (ET). In particular, the case with large relaxation time for the dynamic pressure, which corresponds to large bulk viscosity, is considered by adopting the simplest version of extended thermodynamics with only 6 independent fields (ET6): the mass density, the velocity, the temperature and the dynamic pressure. Recently, the validity of the theoretical predictions by ET was confirmed by the numerical analysis based on the kinetic theory in [S. Kosuge and K. Aoki, Phys. Rev. Fluids 3, 023401 (2018)]. It was shown that numerical results using the polyatomic version of the ellipsoidal statistical model agree with the theoretical predictions by ET for small or moderately large Mach numbers. In the present paper, first, we compare the theoretical predictions by ET6 with the ones by kinetic theory for large Mach number under the same assumptions, that is, the gas is polytropic and the bulk viscosity is proportional to the temperature. Second, the shock wave structure for large Mach number in a non-polytropic gas is analyzed with particular interest in the effect of the temperature dependence of specific heat and the bulk viscosity on the shock wave structure. Through the analysis of the case of a rarefied carbon dioxide (CO2) gas, it is shown that these temperature dependences play important roles in the precise analysis of the structure for strong shock waves.

  2. Investigating the Cosmic Web with Topological Data Analysis

    NASA Astrophysics Data System (ADS)

    Cisewski-Kehe, Jessi; Wu, Mike; Fasy, Brittany; Hellwing, Wojciech; Lovell, Mark; Rinaldo, Alessandro; Wasserman, Larry

    2018-01-01

    Data exhibiting complicated spatial structures are common in many areas of science (e.g. cosmology, biology), but can be difficult to analyze. Persistent homology is a popular approach within the area of Topological Data Analysis that offers a new way to represent, visualize, and interpret complex data by extracting topological features, which can be used to infer properties of the underlying structures. In particular, TDA may be useful for analyzing the large-scale structure (LSS) of the Universe, which is an intricate and spatially complex web of matter. In order to understand the physics of the Universe, theoretical and computational cosmologists develop large-scale simulations that allow for visualizing and analyzing the LSS under varying physical assumptions. Each point in the 3D data set represents a galaxy or a cluster of galaxies, and topological summaries ("persistence diagrams") can be obtained summarizing holes of different order in the data (e.g. connected components, loops, voids). The topological summaries are interesting and informative descriptors of the Universe on their own, but hypothesis tests using the topological summaries would provide a way to make more rigorous comparisons of LSS under different theoretical models. For example, the received cosmological model includes cold dark matter (CDM); however, while the case is strong for CDM, there are some observational inconsistencies with this theory. Another possibility is warm dark matter (WDM). It is of interest to see if a CDM Universe and a WDM Universe produce LSS that is topologically distinct. We present several possible test statistics for two-sample hypothesis tests using the topological summaries, carry out a simulation study to investigate the suitability of the proposed test statistics using simulated data from a variation of the Voronoi foam model, and finally we apply the proposed inference framework to WDM vs. CDM cosmological simulation data.
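
    One simple way to build such a two-sample test is to reduce each simulation box to a scalar topological summary (for example, total persistence of the loops) and run a permutation test on the group difference. The sketch below does exactly that on synthetic summary values; computing real persistence diagrams would require a TDA library and the simulation data, so the inputs here are placeholders.

        # Hedged sketch: permutation test on scalar topological summaries per simulation box.
        # The summary values below are synthetic placeholders, not real CDM/WDM results.
        import numpy as np

        rng = np.random.default_rng(6)
        cdm_summaries = rng.normal(10.0, 1.5, size=30)   # hypothetical per-box summaries (CDM runs)
        wdm_summaries = rng.normal(11.0, 1.5, size=30)   # hypothetical per-box summaries (WDM runs)

        observed = wdm_summaries.mean() - cdm_summaries.mean()
        pooled = np.concatenate([cdm_summaries, wdm_summaries])
        n_cdm = len(cdm_summaries)

        count = 0
        for _ in range(10000):
            perm = rng.permutation(pooled)
            stat = perm[n_cdm:].mean() - perm[:n_cdm].mean()
            if abs(stat) >= abs(observed):
                count += 1
        print("permutation p-value:", count / 10000)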

  3. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  4. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2011-01-01

    The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…

  5. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  6. Genes, language, and the nature of scientific explanations: the case of Williams syndrome.

    PubMed

    Musolino, Julien; Landau, Barbara

    2012-01-01

    In this article, we discuss two experiments of nature and their implications for the sciences of the mind. The first, Williams syndrome, bears on one of cognitive science's holy grails: the possibility of unravelling the causal chain between genes and cognition. We sketch the outline of a general framework to study the relationship between genes and cognition, focusing as our case study on the development of language in individuals with Williams syndrome. Our approach emphasizes the role of three key ingredients: the need to specify a clear level of analysis, the need to provide a theoretical account of the relevant cognitive structure at that level, and the importance of the (typical) developmental process itself. The promise offered by the case of Williams syndrome has also given rise to two strongly conflicting theoretical approaches, modularity and neuroconstructivism, themselves offshoots of a perennial debate between nativism and empiricism. We apply our framework to explore the tension created by these two conflicting perspectives. To this end, we discuss a second experiment of nature, which allows us to compare the two competing perspectives in what comes close to a controlled experimental setting. From this comparison, we conclude that the "meaningful debate assumption", a widespread assumption suggesting that neuroconstructivism and modularity address the same questions and represent genuine theoretical alternatives, rests on a fallacy.

  7. Method of Moments Applied to the Analysis of Precision Spectra from the Neutron Time-of-Flight Diagnostics at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hatarik, Robert; Caggiano, J. A.; Callahan, D.; Casey, D.; Clark, D.; Doeppner, T.; Eckart, M.; Field, J.; Frenje, J.; Gatu Johnson, M.; Grim, G.; Hartouni, E.; Hurricane, O.; Kilkenny, J.; Knauer, J.; Ma, T.; Mannion, O.; Munro, D.; Sayre, D.; Spears, B.

    2015-11-01

    The method of moments was introduced by Pearson as a process for estimating the population distributions from which a set of "random variables" are measured. These moments are compared with a parameterization of the distributions, or of the same quantities generated by simulations of the process. Most diagnostic processes extract scalar parameters depending on the moments of spectra derived from analytic solutions to the fusion rate, necessarily based on simplifying assumptions of the confined plasma. The precision of the TOF spectra and the nature of the implosions at the NIF require the inclusion of factors beyond the traditional analysis and require the addition of higher-order moments to describe the data. This talk will present a diagnostic process for extracting the moments of the neutron energy spectrum for a comparison with theoretical considerations as well as simulations of the implosions. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
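
    The raw ingredients of a moments-based analysis are just the low-order sample moments of the measured spectrum. The sketch below computes them for a synthetic Gaussian-plus-tail stand-in; the peak energy, widths, and counts are illustrative assumptions, not NIF nTOF data.

        # Minimal sketch: low-order sample moments of a synthetic neutron energy spectrum.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        energies = np.concatenate([
            rng.normal(14.03, 0.12, size=50000),   # primary DT peak (illustrative numbers, MeV)
            rng.normal(13.5, 0.5, size=2000),      # crude down-scattered tail (illustrative)
        ])

        print("mean     :", energies.mean())
        print("variance :", energies.var(ddof=1))
        print("skewness :", stats.skew(energies))
        print("kurtosis :", stats.kurtosis(energies))   # excess kurtosis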

  8. Coefficient of performance for a low-dissipation Carnot-like refrigerator with nonadiabatic dissipation

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Wu, Feifei; Ma, Yongli; He, Jizhou; Wang, Jianhui; Hernández, A. Calvo; Roco, J. M. M.

    2013-12-01

    We study the coefficient of performance (COP) and its bounds for a Carnot-like refrigerator working between two heat reservoirs at constant temperatures Th and Tc, under two optimization criteria χ and Ω. In view of the fact that an “adiabatic” process usually takes finite time and is nonisentropic, the nonadiabatic dissipation and the finite time required for the adiabatic processes are taken into account by assuming low dissipation. For given optimization criteria, we find that the lower and upper bounds of the COP are the same as the corresponding ones obtained from the previous idealized models where any adiabatic process is undergone instantaneously with constant entropy. To describe some particular models with very fast adiabatic transitions, we also consider the influence of the nonadiabatic dissipation on the bounds of the COP, under the assumption that the irreversible entropy production in the adiabatic process is constant and independent of time. Our theoretical predictions match the observed COPs of real refrigerators more closely than the ones derived in the previous models, providing a strong argument in favor of our approach.
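
    For reference, the reversible (Carnot) limit that bounds the COP in this setting, together with the chi figure of merit as it is commonly written in the low-dissipation literature (the exact definitions of chi and Omega used in the paper should be taken from the paper itself; the form of chi below is an assumption of this note):

        \varepsilon_C = \frac{T_c}{T_h - T_c},
        \qquad
        \chi = \frac{\varepsilon \, Q_c}{t_{\mathrm{cycle}}},

    where epsilon is the achieved COP, Q_c the heat extracted from the cold reservoir per cycle, and t_cycle the total cycle time.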

  9. Optimizing the beam pattern of a forward-viewing ring-annular ultrasound array for intravascular imaging.

    PubMed

    Wang, Yao; Stephens, Douglas N; O'Donnell, Matthew

    2002-12-01

    Intravascular ultrasound (IVUS) imaging systems using circumferential arrays mounted on cardiac catheter tips fire beams orthogonal to the principal axis of the catheter. The system produces high resolution cross-sectional images but must be guided by conventional angioscopy. A real-time forward-viewing array, integrated into the same catheter, could greatly reduce radiation exposure by decreasing angiographic guidance. Unfortunately, the mounting requirement of a catheter guide wire prohibits a full-disk imaging aperture. Given only an annulus of array elements, prior theoretical investigations have only considered a circular ring of point transceivers and focusing strategies using all elements in the highly dense array, both impractical assumptions. In this paper, we consider a practical array geometry and signal processing architecture for a forward-viewing IVUS system. Our specific design uses a total of 210 transceiver firings with synthetic reconstruction for a given 3-D image frame. Simulation results demonstrate this design can achieve side-lobes under -40 dB for on-axis situations and under -30 dB for steering to the edge of a 80 degrees cone.

  10. On Medical Progress and Health Care Demand: A Ces Perspective Using the Grossman Model of Health Status.

    PubMed

    Batinti, Alberto

    2015-12-01

    I propose an application of the pure-consumption version of the Grossman model of health care demand, where utility depends on consumption and health status and health status on medical care and health technology. I derive the conditions under which an improvement in health care technology leads to an increase/decrease in health care consumption. In particular, I show how the direction of the effect depends on the relationship between the constant elasticity of substitution parameters of the utility and health production functions. I find that, under the constancy assumption, the ratio of the two elasticity of substitution parameters determines the direction of the effect of a technological change on health care demand. On the other hand, the technology share parameter in the health production function contributes to the size but not to the direction of the technological effect. I finally explore how the ratio of the elasticity of substitution parameters works in measurement and practice and discuss how future research may use the theoretical insight provided here. Copyright © 2014 John Wiley & Sons, Ltd.
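
    Written out, a generic constant-elasticity-of-substitution (CES) specification consistent with the verbal description above is (notation here is illustrative, not necessarily the paper's):

        u(c, h) = \left[\alpha\, c^{\rho} + (1-\alpha)\, h^{\rho}\right]^{1/\rho},
        \qquad
        h = \left[\beta\, m^{\gamma} + (1-\beta)\, \theta^{\gamma}\right]^{1/\gamma},
        \qquad
        \sigma_u = \frac{1}{1-\rho}, \quad \sigma_h = \frac{1}{1-\gamma},

    where c is consumption, h health status, m medical care, theta the level of health technology, and sigma_u, sigma_h the two elasticities of substitution whose ratio, per the abstract, governs the direction of the technology effect.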

  11. Principles for circadian orchestration of metabolic pathways.

    PubMed

    Thurley, Kevin; Herbst, Christopher; Wesener, Felix; Koller, Barbara; Wallach, Thomas; Maier, Bert; Kramer, Achim; Westermark, Pål O

    2017-02-14

    Circadian rhythms govern multiple aspects of animal metabolism. Transcriptome-, proteome- and metabolome-wide measurements have revealed widespread circadian rhythms in metabolism governed by a cellular genetic oscillator, the circadian core clock. However, it remains unclear if and under which conditions transcriptional rhythms cause rhythms in particular metabolites and metabolic fluxes. Here, we analyzed the circadian orchestration of metabolic pathways by direct measurement of enzyme activities, analysis of transcriptome data, and developing a theoretical method called circadian response analysis. Contrary to a common assumption, we found that pronounced rhythms in metabolic pathways are often favored by separation rather than alignment in the times of peak activity of key enzymes. This property holds true for a set of metabolic pathway motifs (e.g., linear chains and branching points) and also under the conditions of fast kinetics typical for metabolic reactions. By circadian response analysis of pathway motifs, we determined exact timing separation constraints on rhythmic enzyme activities that allow for substantial rhythms in pathway flux and metabolite concentrations. Direct measurements of circadian enzyme activities in mouse skeletal muscle confirmed that such timing separation occurs in vivo.

  12. Principles for circadian orchestration of metabolic pathways

    PubMed Central

    Thurley, Kevin; Herbst, Christopher; Wesener, Felix; Koller, Barbara; Wallach, Thomas; Maier, Bert; Kramer, Achim

    2017-01-01

    Circadian rhythms govern multiple aspects of animal metabolism. Transcriptome-, proteome- and metabolome-wide measurements have revealed widespread circadian rhythms in metabolism governed by a cellular genetic oscillator, the circadian core clock. However, it remains unclear if and under which conditions transcriptional rhythms cause rhythms in particular metabolites and metabolic fluxes. Here, we analyzed the circadian orchestration of metabolic pathways by direct measurement of enzyme activities, analysis of transcriptome data, and developing a theoretical method called circadian response analysis. Contrary to a common assumption, we found that pronounced rhythms in metabolic pathways are often favored by separation rather than alignment in the times of peak activity of key enzymes. This property holds true for a set of metabolic pathway motifs (e.g., linear chains and branching points) and also under the conditions of fast kinetics typical for metabolic reactions. By circadian response analysis of pathway motifs, we determined exact timing separation constraints on rhythmic enzyme activities that allow for substantial rhythms in pathway flux and metabolite concentrations. Direct measurements of circadian enzyme activities in mouse skeletal muscle confirmed that such timing separation occurs in vivo. PMID:28159888

  13. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
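
    A highly simplified Monte Carlo sketch of the single- versus paired-release trade-off is given below. Treatment fish experience background then dam survival; control fish (paired design) experience only background survival; detection is taken as perfect so survival reduces to binomial proportions. All parameter values are illustrative assumptions, not those of the study.

        # Hedged sketch: accuracy/precision trade-off between single- and paired-release
        # estimates of dam passage survival, under perfect detection and simple binomial draws.
        import numpy as np

        rng = np.random.default_rng(8)
        s_background, s_dam = 0.90, 0.80
        n_release, n_sims = 300, 5000

        single, paired = [], []
        for _ in range(n_sims):
            # single release: all fish released above the dam; estimate confounds dam
            # passage with background mortality
            treat_alive = rng.binomial(n_release, s_background * s_dam)
            single.append(treat_alive / n_release)

            # paired release: effort split between a treatment group (above dam) and a
            # control group (below dam); ratio of proportions isolates dam survival
            half = n_release // 2
            treat_alive_half = rng.binomial(half, s_background * s_dam)
            ctrl_alive = rng.binomial(half, s_background)
            if ctrl_alive > 0:
                paired.append((treat_alive_half / half) / (ctrl_alive / half))

        def rmse(est, truth):
            est = np.array(est)
            return np.sqrt(np.mean((est - truth) ** 2))

        print("single release RMSE vs dam survival:", round(rmse(single, s_dam), 3))
        print("paired release RMSE vs dam survival:", round(rmse(paired, s_dam), 3))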

  14. Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.

    DTIC Science & Technology

    1984-03-01

    Research Report CCS 481, Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates, by P. Brockett and B. Golany, Center for... March 1984. ...applying these tests to the U.S. dollar to Japanese yen foreign exchange rates. Conclusions and discussion are given in Section VI.

  15. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research.

    PubMed

    Monroe, Todd; Carter, Michael

    2012-09-01

    Cognitive scales are used frequently in geriatric research and practice. These instruments are constructed with underlying assumptions that are a part of their validation process. A common measurement scale used in older adults is the Folstein Mini Mental State Exam (MMSE). The MMSE was designed to screen for cognitive impairment and is used often in geriatric research. This paper has three aims. Aim one was to explore four potential threats to validity in the use of the MMSE: (1) administering the exam without meeting the underlying assumptions, (2) not reporting that the underlying assumptions were assessed prior to test administration, (3) use of variable and inconsistent cut-off scores for the determination of presence of cognitive impairment, and (4) failure to adjust the scores based on the demographic characteristics of the tested subject. Aim two was to conduct a literature search to determine if the assumptions of (1) education level assessment, (2) sensory assessment, and (3) language fluency were being met and clearly reported in published research using the MMSE. Aim three was to provide recommendations to minimize threats to validity in research studies that use cognitive scales, such as the MMSE. We found inconsistencies in published work in reporting whether or not subjects meet the assumptions that underlie a reliable and valid MMSE score. These inconsistencies can pose threats to the reliability of exam results. Fourteen of the 50 studies reviewed reported inclusion of all three of these assumptions. Inconsistencies in reporting the inclusion of the underlying assumptions for a reliable score could mean that subjects were not appropriate to be tested by use of the MMSE or that an appropriate test administration of the MMSE was not clearly reported. Thus, the research literature could have threats to both validity and reliability based on misuse of or improperly reported use of the MMSE. Six recommendations are provided to minimize these threats in future research.

  16. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation is a key component of rockfall hazard assessment, because it allows the probability of rockfall runout to be extrapolated either from partial data or on purely theoretical grounds. Propagation can be assumed to be frictional, in which case the average behaviour is described by an energy line representing the loss of kinetic energy along the path. The loss of energy can also be treated as a multiplicative process or as a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that the assumptions are relevant. The results are based either on theoretical considerations or on fits to observations. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
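
    One practical way to use the candidate distributions named above is to fit each to observed runout distances and compare the fits. The sketch below does this with synthetic distances and scipy, purely as an illustration; the data, sample size and fitting choices are assumptions, not the authors' procedure.

```python
# Illustrative sketch (not the authors' code): fit the candidate stop-point
# distributions named in the abstract -- Gaussian, inverse-Gaussian, log-normal
# and negative exponential -- to a set of rockfall runout distances and compare
# them by AIC. The distances below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
runout = rng.lognormal(mean=4.0, sigma=0.4, size=200)   # hypothetical runout distances (m)

candidates = {
    "Gaussian":             stats.norm,
    "inverse-Gaussian":     stats.invgauss,
    "log-normal":           stats.lognorm,
    "negative exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(runout)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(runout, *params))  # log-likelihood at the fitted parameters
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:<22} AIC = {aic:.1f}")
```

    The lowest AIC flags the distribution most consistent with the sample, which is one way the theoretical forms above could be checked against mapped rockfall deposits.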

  17. Flood return level analysis of Peaks over Threshold series under changing climate

    NASA Astrophysics Data System (ADS)

    Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.

    2016-12-01

    Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis under the stationarity assumption has been challenged by changing environments. A method that accounts for nonstationarity has been extended to derive flood return levels for Peaks over Threshold (POT) series. For POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distributional assumption has at times been reported to be invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson assumption. Flood return levels were extrapolated in a nonstationary context for the POT series of the Weihe basin, China, under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed, and the difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on future flood estimation in the Weihe basin.
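
    As an illustration of the Poisson-versus-NB question, the sketch below checks annual exceedance counts for overdispersion and compares the two likelihoods. The counts are simulated stand-ins, not the Weihe basin data, and all parameter values are assumptions.

```python
# Illustrative sketch (not the study's model): test whether annual counts of POT
# exceedances are better described by a Poisson or a Negative Binomial (NB)
# distribution. The counts are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
counts = rng.negative_binomial(n=3, p=0.4, size=60)    # hypothetical exceedances per year

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}  (variance > mean suggests overdispersion)")

# Poisson log-likelihood (the MLE of the rate is the sample mean).
ll_pois = np.sum(stats.poisson.logpmf(counts, mean))

# NB parameters by the method of moments: var = mu + mu**2 / r.
r = mean**2 / (var - mean) if var > mean else np.inf
p = r / (r + mean)
ll_nb = np.sum(stats.nbinom.logpmf(counts, r, p)) if np.isfinite(r) else -np.inf

print(f"log-likelihood  Poisson = {ll_pois:.1f}   NB = {ll_nb:.1f}")
```

    When the sample variance clearly exceeds the mean, the NB likelihood typically dominates, which is exactly the situation in which the Poisson arrival-rate assumption breaks down.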

  18. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method of deriving values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarities and differences. The objective of the article is to elucidate the position of each model and its applications in health-state valuation.
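
    As a minimal illustration of the random-utility idea behind these models, the sketch below fits a Bradley-Terry-style logit to hypothetical pairwise choices between health states. The states, choice counts and fitting routine are invented for illustration and are not taken from the article.

```python
# Minimal sketch of a random-utility (probabilistic choice) model of the kind the
# article reviews: latent health-state values are estimated from pairwise choices
# with a Bradley-Terry / logit formulation. All data here are hypothetical.
import numpy as np
from scipy.optimize import minimize

states = ["mild", "moderate", "severe"]
# Each row: (index of chosen state, index of rejected state) from hypothetical respondents.
choices = np.array([(0, 1)] * 40 + [(1, 0)] * 10 + [(0, 2)] * 45 + [(2, 0)] * 5 +
                   [(1, 2)] * 35 + [(2, 1)] * 15)

def neg_log_likelihood(free_values):
    # Random utility: P(choose i over j) = exp(v_i) / (exp(v_i) + exp(v_j)).
    v = np.concatenate([[0.0], free_values])   # fix the first state at 0 for identifiability
    diff = v[choices[:, 0]] - v[choices[:, 1]]
    return np.sum(np.log1p(np.exp(-diff)))     # -sum of log sigmoid(diff)

result = minimize(neg_log_likelihood, x0=np.zeros(len(states) - 1), method="BFGS")
values = np.concatenate([[0.0], result.x])
for state, value in zip(states, values):
    print(f"{state:<9} latent value = {value:+.2f}")
```

    The fitted latent values reproduce the ordering implied by the choice frequencies; in actual health-state valuation the states would be described by attribute levels and the model extended accordingly.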

  19. Using directed information for influence discovery in interconnected dynamical systems

    NASA Astrophysics Data System (ADS)

    Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas

    2008-08-01

    Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in various applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems have multiple interacting variables and the key problem is the inference of the underlying structure of the systems (which variables are connected to which others) based on the output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that have a linear assumption on structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems as well as true biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
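
    The paper's DTI metric itself is not reproduced here. As a hedged illustration of the underlying idea (scoring directed influence between two observed trajectories with an information-theoretic quantity), the sketch below computes a plug-in estimate of lag-1 transfer entropy, a related directed measure, on binarized synthetic series.

```python
# Hedged sketch: not the paper's DTI estimator. Computes a related directed,
# information-theoretic quantity -- lag-1 transfer entropy on binary series --
# with a simple plug-in estimator, to illustrate how directed influence between
# two observed trajectories can be scored.
import numpy as np

def transfer_entropy(x, y):
    """Plug-in estimate of lag-1 transfer entropy from binary x to binary y (in bits)."""
    x, y = np.asarray(x), np.asarray(y)
    triples = np.stack([y[1:], y[:-1], x[:-1]], axis=1)   # (y_next, y_prev, x_prev)
    counts = np.zeros((2, 2, 2))
    for yn, yp, xp in triples:
        counts[yn, yp, xp] += 1
    p = counts / counts.sum()
    te = 0.0
    for yn in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                if p[yn, yp, xp] == 0:
                    continue
                p_cond_full = p[yn, yp, xp] / p[:, yp, xp].sum()   # P(y_next | y_prev, x_prev)
                p_cond_marg = p[yn, yp, :].sum() / p[:, yp, :].sum()  # P(y_next | y_prev)
                te += p[yn, yp, xp] * np.log2(p_cond_full / p_cond_marg)
    return te

# Synthetic example: y copies x with a one-step delay plus noise, so influence runs x -> y.
rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)
print(f"TE(x -> y) = {transfer_entropy(x, y):.3f} bits   TE(y -> x) = {transfer_entropy(y, x):.3f} bits")
```

    Because y copies x with a one-step delay, the x-to-y score is large while the reverse direction stays near zero; this asymmetry is what a directed measure is meant to capture, in contrast to undirected dependence measures.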

  20. On the merging rates of envelope-deprived components of binary systems which can give rise to supernova events

    NASA Astrophysics Data System (ADS)

    Tornambe, Amedeo

    1989-08-01

    Theoretical rates of mergers of envelope-deprived components of binary systems, which can give rise to supernova events, are described. The effects of various assumptions about the physical properties of the progenitor system and its evolutionary behavior through common-envelope phases are discussed. Four cases have been analyzed: CO-CO, He-CO and He-He double-degenerate mergers, and He star-CO dwarf mergers. It is found that, above a critical efficiency of the common-envelope action in shrinking the system, the rate of CO-CO mergers is not strongly sensitive to the efficiency. Below this critical value, no CO-CO systems will survive for times longer than a few Gyr. In contrast, He-CO dwarf systems will continue to merge at a reasonable rate up to 20 Gyr and beyond, even under extreme conditions.
