Sample records for assumptions underlying current

  1. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  2. THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS

    EPA Science Inventory

    Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...

  3. Self-sustained criterion with photoionization for positive dc corona plasmas between coaxial cylinders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuesheng, E-mail: yueshengzheng@fzu.edu.cn; Zhang, Bo, E-mail: shizbcn@tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn

    The positive dc corona plasmas between coaxial cylinders in air are investigated under the application of a self-sustained criterion with photoionization. A photon absorption function suitable for a cylindrical electrode, which characterizes the total photons within the ionization region, is proposed on the basis of the classic corona onset criteria. Based on the general fluid model with the self-sustained criterion, the role of photoionization in the ionization region is clarified. It is found that the surface electric field remains constant at relatively low corona currents, while it is slightly weakened as the corona current increases. Similar tendencies are found for different conductor radii and relative air densities. The small change in the surface electric field has a more significant effect on the electron density distribution and the ionization activity at high corona currents, compared with results obtained under the assumption of a constant surface field. The assumption of a constant surface electric field should therefore be corrected as the corona current increases when energetic electrons at a distance from the conductor surface are of concern.
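    For context, the classic Townsend-type self-sustained criterion that such corona onset formulations build on can be written as follows. This is the generic textbook form with a secondary-emission coefficient γ; the paper replaces the secondary process with a photon absorption function, which is not reproduced here.

```latex
\gamma \left[ \exp\!\left( \int_{r_0}^{r_i} \bigl(\alpha(r) - \eta(r)\bigr)\, \mathrm{d}r \right) - 1 \right] = 1
```

    Here α and η are the ionization and attachment coefficients, r_0 the conductor surface, and r_i the outer boundary of the ionization region.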

  4. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
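    The paper's specific likelihood ratio (built from the current best estimate of collision probability and the prior-based estimate) is not reproduced here; the following is only a minimal sketch of a generic Wald sequential probability ratio test with Wald's standard stopping thresholds, assuming the caller supplies log-likelihood-ratio increments.

```python
import math

def wald_sprt(log_lr_increments, alpha=0.01, beta=0.01):
    """Generic Wald SPRT: accumulate log-likelihood-ratio increments
    log[p(x | H1) / p(x | H0)] and stop when a threshold is crossed.

    alpha, beta are the desired type-I and type-II error rates.
    """
    upper = math.log((1 - beta) / alpha)   # cross upward  -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    total = 0.0
    for increment in log_lr_increments:
        total += increment
        if total >= upper:
            return "accept H1"
        if total <= lower:
            return "accept H0"
    return "continue sampling"

# hypothetical stream of increments favouring H1
print(wald_sprt([0.8, 1.1, 0.9, 1.3, 1.2, 0.7]))
```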

  5. Is herpes zoster vaccination likely to be cost-effective in Canada?

    PubMed

    Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L

    2014-05-30

    To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. We then transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized, and the results were tabulated for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Assumptions about costs, discount rates, vaccine efficacy and waning, and epidemiology drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at a threshold of $100,000 per QALY. Few studies found cost-effectiveness ratios above this threshold, and only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.
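    As a minimal sketch of the normalization step described above: the article used Bank of Canada exchange rates and a Consumer Price Index deflator; the rate and CPI figures below are placeholders chosen only to show the arithmetic.

```python
def to_2012_cad(amount, exchange_rate_to_cad, cpi_2012, cpi_source_year):
    """Convert a cost reported in a foreign currency and year into 2012
    Canadian dollars: exchange-rate conversion, then CPI inflation."""
    in_cad = amount * exchange_rate_to_cad
    return in_cad * (cpi_2012 / cpi_source_year)

# illustrative only: 10,000 USD from a 2008 study (rate and CPI values are hypothetical)
print(to_2012_cad(10_000, exchange_rate_to_cad=1.07,
                  cpi_2012=121.7, cpi_source_year=114.1))
```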

  6. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
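    A minimal sketch of the kind of arithmetic such an estimate rests on, with entirely hypothetical inputs rather than the paper's data.

```python
def required_psychiatrists(patients, hours_needed_per_patient, clinical_hours_per_fte):
    """Full-time-equivalent psychiatrists needed to supply a target number
    of service hours per patient per year (all inputs hypothetical)."""
    return patients * hours_needed_per_patient / clinical_hours_per_fte

def available_hours_per_patient(n_psychiatrists, clinical_hours_per_fte, patients):
    """Service hours available per patient per year for a given workforce."""
    return n_psychiatrists * clinical_hours_per_fte / patients

# illustrative numbers only, chosen to show how assumptions drive the answer
print(required_psychiatrists(4_000_000, 10, 1_500))           # ~26,667 FTEs
print(available_hours_per_patient(38_000, 1_500, 4_000_000))  # ~14 hours
```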

  7. (In)validity of the constant field and constant currents assumptions in theories of ion transport.

    PubMed Central

    Syganow, A; von Kitzing, E

    1999-01-01

    Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480
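    For reference (not taken from the abstract), the canonical result derived under the constant-field assumption is the Goldman-Hodgkin-Katz current equation for an ion S with valence z_S, permeability P_S, and membrane potential V_m:

```latex
I_S \;=\; P_S\, z_S^2\, \frac{F^2 V_m}{R T}\;
\frac{[S]_i \;-\; [S]_o \exp\!\left(-z_S F V_m / R T\right)}
     {1 \;-\; \exp\!\left(-z_S F V_m / R T\right)}
```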

  8. Reaction μ⁻ + ⁶Li → ³H + ³H + νμ and the axial current form factor in the timelike region

    NASA Astrophysics Data System (ADS)

    Mintz, S. L.

    1983-09-01

    The differential muon-capture rate dΓ/dE_T is obtained for the reaction μ⁻ + ⁶Li → ³H + ³H + νμ over the allowed range of E_T, the tritium energy, for two assumptions concerning the behavior of F_A, the axial current form factor, in the timelike region: analytic continuation from the spacelike region, and mirror behavior, F_A(q², timelike) = F_A(q², spacelike). The values of dΓ/dE_T under these two assumptions are found to vary substantially in the timelike region as a function of the mass M_A in the dipole fit to F_A. Values of dΓ/dE_T are given for M_A² = 2mπ², 4.95mπ², and 8mπ². NUCLEAR REACTIONS Muon capture ⁶Li(μ⁻, νμ)³H³H; Γ and dΓ/dE_T calculated for two assumptions concerning the axial current form factor behavior in the timelike region.
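    For context, the dipole fit mentioned above is conventionally written as follows (a standard parameterization; the paper's normalization may differ):

```latex
F_A(q^2) \;=\; \frac{F_A(0)}{\left(1 - q^2 / M_A^2\right)^2}
```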

  9. The Dynamics of the Law of Effect: A Comparison of Models

    ERIC Educational Resources Information Center

    Navakatikyan, Michael A.; Davison, Michael

    2010-01-01

    Dynamical models based on three steady-state equations for the law of effect were constructed under the assumption that behavior changes in proportion to the difference between current behavior and the equilibrium implied by current reinforcer rates. A comparison of dynamical models showed that a model based on Navakatikyan's (2007) two-component…
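    The proportional-adjustment assumption stated in the abstract corresponds to a first-order dynamical equation of the following form (generic notation, not necessarily the authors'):

```latex
\frac{\mathrm{d}B(t)}{\mathrm{d}t} \;=\; k \,\bigl[\, B_{\mathrm{eq}}\!\bigl(R(t)\bigr) \;-\; B(t) \,\bigr]
```

    where B is current behavior, B_eq is the equilibrium implied by a steady-state law-of-effect equation at the current reinforcer rates R, and k is a rate constant.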

  10. An Extension to Deng's Entropy in the Open World Assumption with an Application in Sensor Data Fusion.

    PubMed

    Tang, Yongchuan; Zhou, Deyun; Chan, Felix T S

    2018-06-11

    Quantification of the degree of uncertainty in the Dempster-Shafer evidence theory (DST) framework with belief entropy is still an open issue, and remains largely unexplored under the open world assumption. Currently, the existing uncertainty measures in the DST framework are limited to the closed world, where the frame of discernment (FOD) is assumed to be complete. To address this issue, this paper extends a belief entropy to the open world by simultaneously considering the uncertain information represented by the FOD and the nonzero mass function of the empty set. An extension to Deng's entropy in the open world assumption (EDEOW) is proposed as a generalization of Deng's entropy; it degenerates to the Deng entropy in the closed world whenever necessary. To test the reasonability and effectiveness of the extended belief entropy, an EDEOW-based information fusion approach is proposed and applied to sensor data fusion under uncertainty. The experimental results verify the usefulness and applicability of the extended measure as well as the modified sensor data fusion method. A few open issues remain in the current work: the necessary properties for a belief entropy in the open world assumption, whether a belief entropy exists that satisfies all the existing properties, and what the most appropriate fusion frame is for sensor data fusion under uncertainty.
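    As a point of reference, here is a minimal sketch of the closed-world Deng entropy that the proposed EDEOW measure generalizes; the open-world extension (handling a nonzero mass on the empty set and the FOD cardinality) is not reproduced, and the example mass function is hypothetical.

```python
from math import log2

def deng_entropy(mass):
    """Closed-world Deng entropy of a mass function.

    `mass` maps frozenset focal elements to mass values summing to 1:
    E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ).
    """
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in mass.items() if m > 0 and len(A) > 0)

# hypothetical mass function on the frame of discernment {a, b, c}
m = {frozenset({"a"}): 0.6, frozenset({"b", "c"}): 0.4}
print(deng_entropy(m))
```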

  11. Bayesian Power Prior Analysis and Its Application to Operational Risk and Rasch Model

    ERIC Educational Resources Information Center

    Zhang, Honglian

    2010-01-01

    When sample size is small, informative priors can be valuable in increasing the precision of estimates. Pooling historical data and current data with equal weights under the assumption that both of them are from the same population may be misleading when heterogeneity exists between historical data and current data. This is particularly true when…
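    For context, the power prior referred to in the title is conventionally written as below, where D_0 is the historical data, L the likelihood, π_0 the initial prior, and a_0 the discounting weight (a_0 = 1 pools the historical data fully, a_0 = 0 discards it):

```latex
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{\,a_0}\, \pi_0(\theta),
\qquad 0 \le a_0 \le 1
```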

  12. Projecting treatment opportunities for current Minnesota forest conditions.

    Treesearch

    W. Brad Smith; Pamela J. Jakes

    1981-01-01

    Reviews opportunities for treatment of timber stands in Minnesota for the decade of 1977-1986. Under the assumptions and management guides specified, 27% of Minnesota's commercial forest land would require timber harvest or some other form of treatment during the decade.

  13. Regression Analysis of a Disease Onset Distribution Using Diagnosis Data

    PubMed Central

    Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.

    2008-01-01

    Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832

  14. ASP-G: an ASP-based method for finding attractors in genetic regulatory networks

    PubMed Central

    Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine

    2014-01-01

    Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings help determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
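    ASP-G itself is declarative (Answer Set Programming). As a plain illustration of the underlying idea, the following sketch detects the attractor reached from a given start state under a synchronous update scheme, using a hypothetical two-gene network rather than anything from the paper.

```python
def find_attractor(update_fns, state):
    """Iterate a synchronous Boolean network from `state` until the
    trajectory revisits a state; return the states on that cycle."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = tuple(f(state) for f in update_fns)  # synchronous update
    return tuple(trajectory[seen[state]:])

# hypothetical 2-gene network: gene 0 copies gene 1, gene 1 negates gene 0
fns = (lambda s: s[1], lambda s: 1 - s[0])
print(find_attractor(fns, (0, 0)))  # a cyclic attractor of length 4
```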

  15. Commercial newsgathering from space

    NASA Astrophysics Data System (ADS)

    1987-05-01

    This memorandum does not examine the feasibility of a specific satellite system or business plan, but rather assesses whether current government policy is appropriate to accommodate both current activities and future developments in the use of satellite images to cover newsworthy events. The term mediasat refers to a concept of a satellite system and business organization that would routinely collect news and information for media use from space. Because the mediasat concept is, for the most part, undefined, the Office of Technology Assessment (OTA) was forced to make a series of assumptions regarding such fundamental issues as cost, markets, technical capability, and utility of a mediasat. Although these assumptions are critical to OTA's conclusions, they are only best guesses, based on the advice of experts in the media and in the field of remote sensing. With regard to specific issues, such as the economic viability of a mediasat or its effect on national security and foreign policy, altering these underlying assumptions could dramatically alter the conclusions reached.

  16. Gender in Science and Engineering Faculties: Demographic Inertia Revisited.

    PubMed

    Thomas, Nicole R; Poole, Daniel J; Herbers, Joan M

    2015-01-01

    The under-representation of women on faculties of science and engineering is ascribed in part to demographic inertia, the lag between retirement of current faculty and future hires. The assumption of demographic inertia implies that, given enough time, gender parity will be achieved. We examine that assumption with a semi-Markov model of the future faculty, using simulations to predict the demographic state at convergence. Our model shows that existing practices that produce gender gaps in recruitment, retention, and career progression preclude eventual gender parity. Further, we examine the sensitivity of the convergence state to current gender gaps to show that all sources of disparity across the entire faculty career must be erased to produce parity: we cannot blame demographic inertia.

  17. Pregnancy intentions-a complex construct and call for new measures.

    PubMed

    Mumford, Sunni L; Sapra, Katherine J; King, Rosalind B; Louis, Jean Fredo; Buck Louis, Germaine M

    2016-11-01

    To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Cross-sectional survey. Not applicable. Nationally representative sample of U.S. women aged 15-44 years. None. Prevalence of intended and unintended pregnancies as estimated by [1] a traditional constructed measure from the National Survey of Family Growth (NSFG), and [2] a constructed measure relaxing assumptions regarding birth control use, reasons for nonuse, and pregnancy timing. The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared with the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct 38%, 95% CI, 36, 41). Using the NSFG approach, only 92% of women who stopped birth control to become pregnant, and 0% of women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant, were classified as having intended pregnancies, compared with 100% using the new construct. Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health-care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices, so improved measures are needed. Published by Elsevier Inc.

  18. A review of selected inorganic surface water quality-monitoring practices: are we really measuring what we think, and if so, are we doing it right?

    USGS Publications Warehouse

    Horowitz, Arthur J.

    2013-01-01

    Successful environmental/water quality-monitoring programs usually require a balance between analytical capabilities, the collection and preservation of representative samples, and available financial/personnel resources. Due to current economic conditions, monitoring programs are under increasing pressure to do more with less. Hence, a review of current sampling and analytical methodologies, and some of the underlying assumptions that form the bases for these programs seems appropriate, to see if they are achieving their intended objectives within acceptable error limits and/or measurement uncertainty, in a cost-effective manner. That evaluation appears to indicate that several common sampling/processing/analytical procedures (e.g., dip (point) samples/measurements, nitrogen determinations, total recoverable analytical procedures) are generating biased or nonrepresentative data, and that some of the underlying assumptions relative to current programs, such as calendar-based sampling and stationarity are no longer defensible. The extensive use of statistical models as well as surrogates (e.g., turbidity) also needs to be re-examined because the hydrologic interrelationships that support their use tend to be dynamic rather than static. As a result, a number of monitoring programs may need redesigning, some sampling and analytical procedures may need to be updated, and model/surrogate interrelationships may require recalibration.

  19. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  20. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.

  1. Some Assumptions in the Assessment of Educational Disadvantage.

    ERIC Educational Resources Information Center

    Gutfreund, R.

    1979-01-01

    Analyzes the failure of three approaches currently used to explain educational under-achievement by working class children. Recommends study of distinctions between educational content and process, material and cultural insulation, and teacher-student-parent interactions. Strategy suggested is small group instruction emphasizing affective learning…

  2. Language Performance Assessment: Current Trends in Theory and Research

    ERIC Educational Resources Information Center

    El-Koumy, Abdel-Salam Abdel-Khalek

    2004-01-01

    The purpose of this paper is to review the theoretical and empirical literature relevant to language performance assessment. Following a definition of performance assessment, this paper considers: (1) theoretical assumptions underlying performance assessment; (2) purposes of performance assessment; (3) performance assessment procedures; (4) merits…

  3. Time and Education: Postmodern Eschatological Perspectives.

    ERIC Educational Resources Information Center

    Slattery, Patrick

    This paper discusses postmodern philosophical conceptions of time as they might inform educational theorizing, and it challenges the underlying assumptions about time in current educational reform literature, especially the 1994 Report of the National Commission on Time and Learning entitled "Prisoners of Time" (U.S. Department of Education,…

  4. 77 FR 54555 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... study is conducted under the grant. Description of Respondents: Business or other for-profit; farms... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the... it displays a currently valid OMB control number. Rural Business-Cooperative Service Title: Renewable...

  5. On the application of the Germano identity to subgrid-scale modeling

    NASA Technical Reports Server (NTRS)

    Ronchi, C.; Ypma, M.; Canuto, V. M.

    1992-01-01

    An identity proposed by Germano (1992) has been widely applied to several turbulent flows to dynamically compute rather than adjust the Smagorinsky coefficient. The assumptions under which the method has been used are discussed, and some conceptual difficulties in its current implementation are examined.

  6. Intelligence Testing 1928-1978: What Next?

    ERIC Educational Resources Information Center

    Vernon, Philip E.

    Attention is drawn to the ways in which current conceptions of intelligence and its measurement differ from those which were generally accepted in 1928. The following principles underlying intelligence testing were generally agreed upon in 1928: (1) the assumption of intelligence as a recognizable attribute, responsible for differences among…

  7. Explaining English Middle Sentences

    ERIC Educational Resources Information Center

    Park, Kabyong

    2009-01-01

    The current paper attempts to account for the formation of English middle sentences. Discussing a set of previous analyses on the construction under investigation we show, following the assumptions of Oosten(1986) and Iwata(1999), that English middle constructions should be divided into two types: generic middle constructions and non-generic…

  8. Climate change. Accelerating extinction risk from climate change.

    PubMed

    Urban, Mark C

    2015-05-01

    Current predictions of extinction risks from climate change vary widely depending on the specific assumptions and geographic and taxonomic focus of each study. I synthesized published studies in order to estimate a global mean extinction rate and determine which factors contribute the greatest uncertainty to climate change-induced extinction risks. Results suggest that extinction risks will accelerate with future global temperatures, threatening up to one in six species under current policies. Extinction risks were highest in South America, Australia, and New Zealand, and risks did not vary by taxonomic group. Realistic assumptions about extinction debt and dispersal capacity substantially increased extinction risks. We urgently need to adopt strategies that limit further climate change if we are to avoid an acceleration of global extinctions. Copyright © 2015, American Association for the Advancement of Science.

  9. Construct Validation of Content Standards for Teaching

    ERIC Educational Resources Information Center

    van der Schaaf, Marieke F.; Stokking, Karel M.

    2011-01-01

    Current international demands to strengthen the teaching profession have led to an increased development and use of professional content standards. The study aims to provide insight in the construct validity of content standards by researching experts' underlying assumptions and preferences when participating in a delphi method. In three rounds 21…

  10. Teacher Pension Plans in Canada: A Force to Be Reckoned With.

    ERIC Educational Resources Information Center

    Lawton, Stephen B.

    1999-01-01

    Summarizes the status of teacher pension plans in Canada's 10 provinces and considers their current role in renewing and downsizing educational systems in some provinces. Discusses pensions' use as economic instruments for provincial and national development and questions assumptions underlying the rhetoric celebrating their contribution to the…

  11. Assumptions Commonly Underlying Government Quality Assessment Practices

    ERIC Educational Resources Information Center

    Schmidtlein, Frank A.

    2004-01-01

    The current interest in governmental assessment and accountability practices appears to result from: (1) an emerging view of higher education as an "industry"; (2) concerns about efficient resource allocation; (3) a lack of trust between government and institutional officials; (4) a desire to reduce uncertainty in government/higher education…

  12. Lifelong Learning Imperative in Engineering: Sustaining American Competitiveness in the 21st Century

    ERIC Educational Resources Information Center

    Dutta, Debasish; Patil, Lalit; Porter, James B., Jr.

    2012-01-01

    The Lifelong Learning Imperative (LLI) project was initiated to assess current practices in lifelong learning for engineering professionals, reexamine the underlying assumptions behind those practices, and outline strategies for addressing unmet needs. The LLI project brought together leaders of U.S. industry, academia, government, and…

  13. Word Families and Frequency Bands in Vocabulary Tests: Challenging Conventions

    ERIC Educational Resources Information Center

    Kremmel, Benjamin

    2016-01-01

    Vocabulary test development often appears to be based on the design principles of previous tests, without questioning or empirically examining the assumptions underlying those principles. Given the current proliferation of vocabulary tests, it seems timely for the field of vocabulary testing to problematize some of those traditionalised…

  14. What's (Not) Wrong with Low-Income Marriages

    ERIC Educational Resources Information Center

    Trail, Thomas E.; Karney, Benjamin R.

    2012-01-01

    In the United States, low marriage rates and high divorce rates among the poor have led policymakers to target this group for skills- and values-based interventions. The current research evaluated the assumptions underlying these interventions; specifically, the authors examined whether low-income respondents held less traditional values toward…

  15. Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology

    ERIC Educational Resources Information Center

    McDonald, Jason K.; Yanchar, Stephen C.; Osguthorpe, Russell T.

    2005-01-01

    This article reports a theoretical examination of several parallels between contemporary instructional technology (as manifested in one of its most recent forms, online learning) and one of its direct predecessors, programmed instruction. We place particular focus on the underlying assumptions of the two movements. Our analysis suggests…

  16. First conclusions about results of GPR investigations in the Church of the Assumption of the Blessed Virgin Mary in Kłodzko, Poland

    NASA Astrophysics Data System (ADS)

    Chernov, Anatolii; Dziubacki, Dariusz; Cogoni, Martina; Bądescu, Alexandru

    2018-03-01

    The article presents results of a ground penetrating radar (GPR) investigation carried out in the Church of the Assumption of the Blessed Virgin Mary in Kłodzko, Poland, dating from the 14th to 16th centuries. Due to the 20th century wars, the current state of knowledge about the history of the church is still poor. Under the floor of the Catholic temple, unknown structures might exist. To verify the presence of underground structures such as crypts and tombs, a GPR survey was carried out in chapels and aisles with 500 and 800 MHz GPR shielded antennas. Numerous anomalies were detected. It was concluded that those under the chapels were caused by the presence of crypts beneath the floor.

  17. Pregnancy intentions – a complex construct and call for new measures

    PubMed Central

    Mumford, Sunni L.; Sapra, Katherine J.; King, Rosalind B.; Louis, Jean Fredo; Buck Louis, Germaine M.

    2016-01-01

    Objective To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Design Cross-sectional survey. Setting Not applicable. Patients Nationally representative sample of U.S. females aged 15–44 years. Intervention(s) None. Main Outcome Measure(s) The prevalence of intended and unintended pregnancies as estimated by 1) a traditional constructed measure from the National Survey of Family Growth (NSFG), and 2) a constructed measure relaxing assumptions regarding birth control use, reasons for non-use, and pregnancy timing. Results The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared to the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct 38%, 95% CI 36, 41). Using the NSFG approach only 92% of women who stopped birth control to become pregnant and 0% of women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant were classified as having intended pregnancies, compared to 100% using the new construct. Conclusion Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices and improved measures are needed. PMID:27490044

  18. Accelerated modern human-induced species losses: Entering the sixth mass extinction.

    PubMed

    Ceballos, Gerardo; Ehrlich, Paul R; Barnosky, Anthony D; García, Andrés; Pringle, Robert M; Palmer, Todd M

    2015-06-01

    The oft-repeated claim that Earth's biota is entering a sixth "mass extinction" depends on clearly demonstrating that current extinction rates are far above the "background" rates prevailing between the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of an incipient mass extinction, the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing.
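    A minimal sketch of the background-rate arithmetic described above (2 E/MSY = 2 extinctions per 10,000 species per 100 years, i.e. per million species-years); the species and extinction counts below are hypothetical, not the paper's data.

```python
def expected_background_extinctions(n_species, n_years, rate_e_msy=2):
    """Expected extinctions under a background rate given in E/MSY
    (extinctions per million species-years)."""
    return n_species * n_years * rate_e_msy / 1_000_000

def years_to_accumulate(observed_extinctions, n_species, rate_e_msy=2):
    """Years the background rate would need to produce the observed count."""
    return observed_extinctions * 1_000_000 / (n_species * rate_e_msy)

# illustrative only: 40,000 assessed species, 300 recorded extinctions
print(expected_background_extinctions(40_000, 100))  # 8 expected per century
print(years_to_accumulate(300, 40_000))              # 3,750 years
```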

  19. Accelerated modern human–induced species losses: Entering the sixth mass extinction

    PubMed Central

    Ceballos, Gerardo; Ehrlich, Paul R.; Barnosky, Anthony D.; García, Andrés; Pringle, Robert M.; Palmer, Todd M.

    2015-01-01

    The oft-repeated claim that Earth’s biota is entering a sixth “mass extinction” depends on clearly demonstrating that current extinction rates are far above the “background” rates prevailing between the five previous mass extinctions. Earlier estimates of extinction rates have been criticized for using assumptions that might overestimate the severity of the extinction crisis. We assess, using extremely conservative assumptions, whether human activities are causing a mass extinction. First, we use a recent estimate of a background rate of 2 mammal extinctions per 10,000 species per 100 years (that is, 2 E/MSY), which is twice as high as widely used previous estimates. We then compare this rate with the current rate of mammal and vertebrate extinctions. The latter is conservatively low because listing a species as extinct requires meeting stringent criteria. Even under our assumptions, which would tend to minimize evidence of an incipient mass extinction, the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate. Under the 2 E/MSY background rate, the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear. These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing. PMID:26601195

  20. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  1. Osmotic Transport across Cell Membranes in Nondilute Solutions: A New Nondilute Solute Transport Equation

    PubMed Central

    Elmoazzen, Heidi Y.; Elliott, Janet A.W.; McGann, Locksley E.

    2009-01-01

    The fundamental physical mechanisms of water and solute transport across cell membranes have long been studied in the field of cell membrane biophysics. Cryobiology is a discipline that requires an understanding of osmotic transport across cell membranes under nondilute solution conditions, yet many of the currently-used transport formalisms make limiting dilute solution assumptions. While dilute solution assumptions are often appropriate under physiological conditions, they are rarely appropriate in cryobiology. The first objective of this article is to review commonly-used transport equations, and the explicit and implicit assumptions made when using the two-parameter and the Kedem-Katchalsky formalisms. The second objective of this article is to describe a set of transport equations that do not make the previous dilute solution or near-equilibrium assumptions. Specifically, a new nondilute solute transport equation is presented. Such nondilute equations are applicable to many fields including cryobiology where dilute solution conditions are not often met. An illustrative example is provided. Utilizing suitable transport equations that fit for two permeability coefficients, fits were as good as with the previous three-parameter model (which includes the reflection coefficient, σ). There is less unexpected concentration dependence with the nondilute transport equations, suggesting that some of the unexpected concentration dependence of permeability is due to the use of inappropriate transport equations. PMID:19348741
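    For context, one common statement of the two-parameter formalism that the abstract contrasts against is given below (a textbook form, not the paper's new nondilute transport equation; sign conventions vary between sources):

```latex
\frac{\mathrm{d}V_w}{\mathrm{d}t} \;=\; L_p A R T\,\bigl(M^{\,i} - M^{\,e}\bigr),
\qquad
\frac{\mathrm{d}N_s}{\mathrm{d}t} \;=\; P_s A\,\bigl(M_s^{\,e} - M_s^{\,i}\bigr)
```

    where V_w is intracellular water volume, N_s the moles of permeating solute, M osmolality (superscripts i and e for intra- and extracellular), L_p the hydraulic conductivity, P_s the solute permeability, and A the membrane area.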

  2. The optimal age of measles immunisation in low-income countries: a secondary analysis of the assumptions underlying the current policy

    PubMed Central

    Martins, Cesário L; Garly, May-Lill; Rodrigues, Amabelia; Benn, Christine S; Whittle, Hilton

    2012-01-01

    Objective The current policy of measles vaccination at 9 months of age was decided in the mid-1970s. The policy was not tested for impact on child survival but was based on studies of seroconversion after measles vaccination at different ages. The authors examined the empirical evidence for the six underlying assumptions. Design Secondary analysis. Data sources and methods These assumptions have not been research issues. Hence, the authors examined case reports to assess the empirical evidence for the original assumptions. The authors used existing reviews, and in December 2011, the authors made a PubMed search for relevant papers. The title and abstract of papers in English, French, Portuguese, Spanish, German and Scandinavian languages were assessed to ascertain whether the paper was potentially relevant. Based on cumulative measles incidence figures, the authors calculated how many measles cases had been prevented assuming everybody was vaccinated at a specific age, how many ‘vaccine failures’ would occur after the age of vaccination and how many cases would occur before the specific age of vaccination. In the combined analyses of several studies, the authors used the Mantel–Haenszel weighted RR stratifying for study or age groups to estimate common trends. Setting and participants African community studies of measles infection. Primary and secondary outcomes Consistency between assumptions and empirical evidence and the predicted effect on mortality. Results In retrospect, the major assumptions were based on false premises. First, in the single study examining this point, seronegative vaccinated children had considerable protection against measles infection. Second, in 18 community studies, vaccinated measles cases (‘vaccine failures’) had threefold lower case death than unvaccinated cases. Third, in 24 community studies, infants had twofold higher case death than older measles cases. Fourth, the only study examining the assumption that ‘vaccine failures’ lead to lack of confidence found the opposite because vaccinated children had milder measles infection. Fifth, a one-dose policy was recommended. However, the two randomised trials of early two-dose measles vaccination compared with one-dose vaccination found significantly reduced mortality until 3 years of age. Thus, current evidence suggests that the optimal age for a single dose of measles vaccine should have been 6 or 7 months resulting in fewer severe unvaccinated cases among infants but more mild ‘vaccine failures’ among older children. Furthermore, the two-dose trials indicate that measles vaccine reduces mortality from other causes than measles infection. Conclusions Many lives may have been lost by not determining the optimal age of measles vaccination. Since seroconversion continues to be the basis for policy, the current recommendation is to increase the age of measles vaccination to 12 months in countries with limited measles transmission. This policy may lead to an increase in child mortality. PMID:22815465
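    A minimal sketch of the Mantel–Haenszel weighted risk ratio the authors report using across studies or age groups; the strata counts below are hypothetical.

```python
def mantel_haenszel_rr(strata):
    """Mantel-Haenszel weighted risk ratio.

    Each stratum is (events_exposed, n_exposed, events_unexposed, n_unexposed);
    RR_MH = sum_i(a_i * n0_i / t_i) / sum_i(c_i * n1_i / t_i), t_i = n1_i + n0_i.
    """
    numerator = denominator = 0.0
    for a, n1, c, n0 in strata:
        t = n1 + n0
        numerator += a * n0 / t
        denominator += c * n1 / t
    return numerator / denominator

# hypothetical strata: (cases among vaccinated, N vaccinated,
#                       cases among unvaccinated, N unvaccinated)
print(mantel_haenszel_rr([(5, 200, 20, 180), (8, 150, 25, 160)]))
```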

  3. A global framework for future costs and benefits of river-flood protection in urban areas

    NASA Astrophysics Data System (ADS)

    Ward, Philip J.; Jongman, Brenden; Aerts, Jeroen C. J. H.; Bates, Paul D.; Botzen, Wouter J. W.; Diaz Loaiza, Andres; Hallegatte, Stephane; Kind, Jarl M.; Kwadijk, Jaap; Scussolini, Paolo; Winsemius, Hessel C.

    2017-09-01

    Floods cause billions of dollars of damage each year, and flood risks are expected to increase due to socio-economic development, subsidence, and climate change. Implementing additional flood risk management measures can limit losses, protecting people and livelihoods. Whilst several models have been developed to assess global-scale river-flood risk, methods for evaluating flood risk management investments globally are lacking. Here, we present a framework for assessing costs and benefits of structural flood protection measures in urban areas around the world. We demonstrate its use under different assumptions of current and future climate change and socio-economic development. Under these assumptions, investments in dykes may be economically attractive for reducing risk in large parts of the world, but not everywhere. In some regions, economically efficient investments could reduce future flood risk below today’s levels, in spite of climate change and economic growth. We also demonstrate the sensitivity of the results to different assumptions and parameters. The framework can be used to identify regions where river-flood protection investments should be prioritized, or where other risk-reducing strategies should be emphasized.
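    A minimal sketch of the kind of discounted cost-benefit comparison such a framework performs; the avoided-damage figure, protection cost, horizon, and discount rate below are placeholders, not values from the study.

```python
def present_value(annual_amount, years, rate):
    """Present value of a constant annual amount over `years` at discount `rate`."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(avoided_annual_damage, protection_cost, years=100, rate=0.05):
    """Discounted avoided flood damages divided by the up-front investment."""
    return present_value(avoided_annual_damage, years, rate) / protection_cost

# illustrative only: $20M/yr avoided damages, $250M dyke investment
print(benefit_cost_ratio(20e6, 250e6))  # > 1 suggests an attractive investment
```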

  4. The Role of Capital in Improving Productivity and Creating Jobs.

    ERIC Educational Resources Information Center

    Carnoy, Martin

    Causes of the significant decrease in productivity growth and dramatic increase in unemployment in the United States since the mid-1960's are examined in order to test the underlying assumption of current economic policies that increasing capital savings and investments will create fuller and more productive employment. Data on trends in…

  5. Nanoindentation size effects in wood

    Treesearch

    Joseph E. Jakes; Donald S. Stone; Charles R. Frihart

    2007-01-01

    The purpose of this work was to test some of the assumptions underlying methods currently employed to investigate nanoindentation properties of wood. We examined whether hardness and modulus depend on load. We employed a surface preparation technique that minimizes alterations of cell wall properties. Areas were determined using both (a) Oliver-Pharr method and (b) a...

  6. The Perception of Error in Production Plants of a Chemical Organisation

    ERIC Educational Resources Information Center

    Seifried, Jurgen; Hopfer, Eva

    2013-01-01

    There is considerable current interest in error-friendly corporate culture, one particular research question being how and under what conditions errors are learnt from in the workplace. This paper starts from the assumption that errors are inevitable and considers key factors which affect learning from errors in high responsibility organisations,…

  7. Decisions Under Uncertainty III: Rationality Issues, Sex Stereotypes, and Sex Role Appropriateness.

    ERIC Educational Resources Information Center

    Bonoma, Thomas V.

    The explanatory cornerstone of most currently viable social theories is a strict cost-gain assumption. The clearest formal explication of this view is contained in subjective expected utility models (SEU), in which individuals are assumed to scale their subjective likelihood estimates of decisional consequences and the personalistic worth or…

  8. Understanding the Relationship between Student Attitudes and Student Learning

    ERIC Educational Resources Information Center

    Cahill, Michael J.; McDaniel, Mark A.; Frey, Regina F.; Hynes, K. Mairin; Repice, Michelle; Zhao, Jiuqing; Trousil, Rebecca

    2018-01-01

    Student attitudes, defined as the extent to which one holds expertlike beliefs about and approaches to physics, are a major research topic in physics education research. An implicit but rarely tested assumption underlying much of this research is that student attitudes play a significant part in student learning and performance. The current study…

  9. When does power disparity help or hurt group performance?

    PubMed

    Tarakci, Murat; Greer, Lindred L; Groenen, Patrick J F

    2016-03-01

    Power differences are ubiquitous in social settings. However, the question of whether groups with higher or lower power disparity achieve better performance has thus far received conflicting answers. To address this issue, we identify 3 underlying assumptions in the literature that may have led to these divergent findings, including a myopic focus on static hierarchies, an assumption that those at the top of hierarchies are competent at group tasks, and an assumption that equality is not possible. We employ a multimethod set of studies to examine these assumptions and to understand when power disparity will help or harm group performance. First, our agent-based simulation analyses show that by unpacking these common implicit assumptions in power research, we can explain earlier disparate findings--power disparity benefits group performance when it is dynamically aligned with the power holder's task competence, and harms group performance when held constant and/or is not aligned with task competence. Second, our empirical findings in both a field study of fraud investigation groups and a multiround laboratory study corroborate the simulation results. We thereby contribute to research on power by highlighting a dynamic understanding of power in groups and explaining how current implicit assumptions may lead to opposing findings. (c) 2016 APA, all rights reserved.

  10. Is Animal Cruelty a "Red Flag" for Family Violence? Investigating Co-Occurring Violence toward Children, Partners, and Pets

    ERIC Educational Resources Information Center

    DeGue, Sarah; DiLillo, David

    2009-01-01

    Cross-reporting legislation, which permits child and animal welfare investigators to refer families with substantiated child maltreatment or animal cruelty for investigation by parallel agencies, has recently been adopted in several U.S. jurisdictions. The current study sheds light on the underlying assumption of these policies--that animal…

  11. Expanding the 5E Model.

    ERIC Educational Resources Information Center

    Eisenkraft, Arthur

    2003-01-01

    Amends the current 5E learning cycle and instructional model to a 7E model. Changes ensure that instructors do not omit crucial elements for learning from their lessons while under the incorrect assumption that they are meeting the requirements of the learning cycle. The proposed 7E model includes: (1) engage; (2) explore; (3) explain; (4) elicit;…

  12. On the Kubo-Greenwood model for electron conductivity

    NASA Astrophysics Data System (ADS)

    Dufty, James; Wrighton, Jeffrey; Luo, Kai; Trickey, S. B.

    2018-02-01

    Currently, the most common method to calculate transport properties for materials under extreme conditions is based on the phenomenological Kubo-Greenwood method. The results of an inquiry into the justification and context of that model are summarized here. Specifically, the basis for its connection to equilibrium DFT and the assumption of static ions are discussed briefly.
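    For reference, one commonly quoted form of the Kubo-Greenwood expression for the real part of the optical conductivity is reproduced below for context (the paper's own notation and derivation may differ); Ω is the cell volume, w_k the k-point weights, and f the occupation numbers:

```latex
\sigma_1(\omega) \;=\; \frac{2\pi e^2 \hbar^2}{3\, m_e^2\, \omega\, \Omega}
\sum_{\mathbf{k}} w_{\mathbf{k}} \sum_{i,j}
\bigl(f_{i\mathbf{k}} - f_{j\mathbf{k}}\bigr)\,
\bigl|\langle \psi_{j\mathbf{k}} | \nabla | \psi_{i\mathbf{k}} \rangle\bigr|^2\,
\delta\!\bigl(\varepsilon_{j\mathbf{k}} - \varepsilon_{i\mathbf{k}} - \hbar\omega\bigr)
```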

  13. The Impact of Information on AIDS Risk Judgments and Behavioral Change among Young Adults.

    ERIC Educational Resources Information Center

    Dunwoody, Sharon; Neuwirth, Kurt

    Participants in the debate on the media's role in the current AIDS (Acquired Immune Deficiency Syndrome) epidemic implicitly adopt a set of underlying assumptions about media processes and effects: information about AIDS proffered by the media has the capacity to influence estimates of risk, personal levels of concern, and extent of behavioral…

  14. Culture, Style and the Educative Process: Making Schools Work for Racially Diverse Students. Second Edition.

    ERIC Educational Resources Information Center

    Shade, Barbara J. Robinson, Ed.

    Many students of color are not performing to their maximum potential within the current school setting, and examinations of this problem suggest significant differences between student and teacher perceptions of how one becomes educated. The underlying assumptions of this book are that culture, through the mediation of cognitive style, determines…

  15. Indirect nontarget effects of host-specific biological control agents: Implications for biological control

    Treesearch

    Dean E. Pearson; Ragan M. Callaway

    2005-01-01

    Classical biological control of weeds currently operates under the assumption that biological control agents are safe (i.e., low risk) if they do not directly attack nontarget species. However, recent studies indicate that even highly host-specific biological control agents can impact nontarget species through indirect effects. This finding has profound...

  16. Outcomes of Quality Assurance: A Discussion of Knowledge, Methodology and Validity

    ERIC Educational Resources Information Center

    Stensaker, Bjorn

    2008-01-01

    A common characteristic in many quality assurance schemes around the world is their implicit and often narrowly formulated understanding of how organisational change is to take place as a result of the process. By identifying some of the underlying assumptions related to organisational change in current quality assurance schemes, the aim of this…

  17. Literacy and Sexuality: What's the Connection?

    ERIC Educational Resources Information Center

    Ashcraft, Catherine

    2009-01-01

    In this column, the author highlights how the current framing of teen sexuality obscures important connections between literacy and sexuality. She argues that we need to challenge two current assumptions: the assumption that teen sexuality is primarily about public health and the assumption that all efforts to address teen sexuality are…

  18. Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium

    NASA Astrophysics Data System (ADS)

    Yang, Dan

    2018-02-01

    Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. The Porter-Bojarski holography, which is a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence it has the potential of being applied to helioseismology. Here we present a proof-of-concept study, where we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" (i.e. artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.

  19. Very Special Natives: The Evolving Role of Teachers as Informants in Educational Ethnography.

    ERIC Educational Resources Information Center

    Florio, Susan

    Underlying the current use of ethnography in the study of teaching and learning is the assumption of an analogy between the school or classroom and culture. The claim of educational ethnography is that it discovers and describes the ways that members of the school community create and share meaning. Ethnographers aim to discover the operating…

  20. Jumping to Conclusions--The PISA Knee-Jerk: Some Remarks on the Current Economic-Educational Discourse

    ERIC Educational Resources Information Center

    Bittlingmayer, Uwe H.; Boutiuc, Alina Florentina; Heinemann, Lars; Kotthoff, Hans-Georg

    2016-01-01

    Ever since PISA studies have been constantly present in mainstream media, a close relation between academic achievement and economic outcome has been routinely presumed. This essay takes a closer look at some of the most common lines of argument, then goes on to outline a critique of the assumptions underlying the alleged causal relationship…

  1. Adaptive Management of Bull Trout Populations in the Lemhi Basin

    USGS Publications Warehouse

    Peterson, James T.; Tyre, Andrew J.; Converse, Sarah J.; Bogich, Tiffany L.; Miller, Damien; Post van der Burg, Max; Thomas, Carmen; Thompson, Ralph J.; Wood, Jeri; Brewer, Donna; Runge, Michael C.

    2011-01-01

    The bull trout Salvelinus confluentus, a stream-living salmonid distributed in drainages of the northwestern United States, is listed as threatened under the Endangered Species Act because of rangewide declines. One proposed recovery action is the reconnection of tributaries in the Lemhi Basin. Past water use policies in this core area disconnected headwater spawning sites from downstream habitat and have led to the loss of migratory life history forms. We developed an adaptive management framework to analyze which types of streams should be prioritized for reconnection under a proposed Habitat Conservation Plan. We developed a Stochastic Dynamic Program that identified optimal policies over time under four different assumptions about the nature of the migratory behavior and the effects of brook trout Salvelinus fontinalis on subpopulations of bull trout. In general, given the current state of the system and the uncertainties about the dynamics, the optimal policy would be to connect streams that are currently occupied by bull trout. We also estimated the value of information as the difference between absolute certainty about which of our four assumptions were correct, and a model averaged optimization assuming no knowledge. Overall there is little to be gained by learning about the dynamics of the system in its current state, although in other parts of the state space reducing uncertainties about the system would be very valuable. We also conducted a sensitivity analysis; the optimal decision at the current state does not change even when parameter values are changed up to 75% of the baseline values. Overall, the exercise demonstrates that it is possible to apply adaptive management principles to threatened and endangered species, but logistical and data availability constraints make detailed analyses difficult.
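
    As a rough illustration of the "value of information" idea used in the abstract above, the Python sketch below computes the expected value of perfect information for a made-up table of model-specific action values; the four models, three actions, payoffs, and equal model weights are all hypothetical and are not taken from the study.

    ```python
    import numpy as np

    # Hypothetical payoff table: rows = 4 competing models of the dynamics,
    # columns = candidate management actions (e.g. which stream type to
    # reconnect).  Values are invented expected performance scores.
    payoff = np.array([
        [0.70, 0.55, 0.40],   # model 1
        [0.65, 0.60, 0.45],   # model 2
        [0.50, 0.62, 0.58],   # model 3
        [0.48, 0.52, 0.66],   # model 4
    ])
    weights = np.array([0.25, 0.25, 0.25, 0.25])  # equal prior model weights

    # Value with perfect knowledge: pick the best action under each model,
    # then average over the model weights.
    value_certainty = np.sum(weights * payoff.max(axis=1))

    # Value under uncertainty: pick the single action that maximizes the
    # model-averaged payoff.
    value_model_avg = np.max(weights @ payoff)

    evpi = value_certainty - value_model_avg
    print(f"Best model-averaged action value: {value_model_avg:.3f}")
    print(f"Expected value of perfect information: {evpi:.3f}")
    ```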

  2. A comparison of coronal X-ray structures of active regions with magnetic fields computed from photospheric observations

    NASA Technical Reports Server (NTRS)

    Poletto, G.; Vaiana, G. S.; Zombeck, M. V.; Krieger, A. S.; Timothy, A. F.

    1975-01-01

    The appearances of several X-ray active regions observed on March 7, 1970 and June 15, 1973 are compared with the corresponding coronal magnetic-field topology. Coronal fields have been computed from measurements of the longitudinal component of the underlying magnetic fields, based on the current-free hypothesis. An overall correspondence between X-ray structures and calculated field lines is established, and the magnetic counterparts of different X-ray features are also examined. A correspondence between enhanced X-ray emission and the location of compact closed field lines is suggested. Representative magnetic-field values calculated under the assumption of current-free fields are given for heights up to 200 sec.
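
    The current-free hypothesis mentioned above corresponds to a potential field, which can be extrapolated from the photospheric vertical component by damping each Fourier mode with height. The sketch below (Python, periodic boundary conditions, arbitrary units, synthetic bipolar boundary data) is only a minimal illustration of that idea, not the computation used in the paper.

    ```python
    import numpy as np

    def potential_field_bz(bz0, heights, dx=1.0):
        """Extrapolate the vertical component of a current-free (potential)
        field above a periodic photospheric patch.

        bz0     : 2D array, measured B_z on the photosphere (arbitrary units)
        heights : iterable of heights (same length unit as dx)
        dx      : grid spacing

        In Fourier space a potential field decays as exp(-|k| z), so each
        mode of the boundary data is simply damped with height.  Horizontal
        components would follow from -i kx/|k| and -i ky/|k| times the
        transformed B_z (not shown here).
        """
        ny, nx = bz0.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
        bz_hat = np.fft.fft2(bz0)
        return {z: np.fft.ifft2(bz_hat * np.exp(-kmag * z)).real for z in heights}

    # Toy boundary data: a bipolar "active region"
    y, x = np.mgrid[0:128, 0:128]
    bz_surface = (np.exp(-((x - 50) ** 2 + (y - 64) ** 2) / 80.0)
                  - np.exp(-((x - 78) ** 2 + (y - 64) ** 2) / 80.0))
    fields = potential_field_bz(bz_surface, heights=[0.0, 5.0, 20.0])
    print({z: float(np.abs(b).max()) for z, b in fields.items()})
    ```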

  3. Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI.

    PubMed

    Park, Chunjae; Lee, Byung Il; Kwon, Oh In

    2007-06-07

    Magnetic resonance current density imaging (MRCDI) provides a current density image by measuring the induced magnetic flux density within the subject with a magnetic resonance imaging (MRI) scanner. Magnetic resonance electrical impedance tomography (MREIT) has been focused on extracting some useful information of the current density and conductivity distribution in the subject Omega using measured B(z), one component of the magnetic flux density B. In this paper, we analyze the map Tau from current density vector field J to one component of magnetic flux density B(z) without any assumption on the conductivity. The map Tau provides an orthogonal decomposition J = J(P) + J(N) of the current J where J(N) belongs to the null space of the map Tau. We explicitly describe the projected current density J(P) from measured B(z). Based on the decomposition, we prove that B(z) data due to one injection current guarantee a unique determination of the isotropic conductivity under assumptions that the current is two-dimensional and the conductivity value on the surface is known. For a two-dimensional dominating current case, the projected current density J(P) provides a good approximation of the true current J without accumulating noise effects. Numerical simulations show that J(P) from measured B(z) is quite similar to the target J. Biological tissue phantom experiments compare J(P) with the reconstructed J via the reconstructed isotropic conductivity using the harmonic B(z) algorithm.
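
    As a highly simplified illustration of why one component B(z) can determine a two-dimensional current, the Python sketch below applies Ampere's law under the idealized assumption that nothing varies along z; it is not the projected-current operator or the harmonic B(z) algorithm of the paper, and the grid, units, and field values are invented.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

    def current_from_bz_2d(bz, dx, dy):
        """Recover a two-dimensional current density from B_z alone.

        Idealized illustration: if J has no z component and nothing varies
        with z, Ampere's law reduces to
            J_x =  (1/mu0) dB_z/dy,   J_y = -(1/mu0) dB_z/dx.
        This is only a toy version of the projection described in the paper,
        which works on a bounded body with boundary conditions.
        """
        dbz_dy, dbz_dx = np.gradient(bz, dy, dx)  # axis 0 = y, axis 1 = x
        jx = dbz_dy / MU0
        jy = -dbz_dx / MU0
        return jx, jy

    # Synthetic test: B_z of a smooth blob (tesla on a 1 mm grid)
    y, x = np.mgrid[0:64, 0:64] * 1e-3
    bz = 1e-6 * np.exp(-((x - 0.032) ** 2 + (y - 0.032) ** 2) / (2 * 0.008 ** 2))
    jx, jy = current_from_bz_2d(bz, dx=1e-3, dy=1e-3)
    print(f"max |J| ~ {np.hypot(jx, jy).max():.3e} A/m^2")
    ```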

  4. Identifying intervals of temporally invariant field-aligned currents from Swarm: Assessing the validity of single-spacecraft methods

    NASA Astrophysics Data System (ADS)

    Forsyth, C.; Rae, I. J.; Mann, I. R.; Pakhotin, I. P.

    2017-03-01

    Field-aligned currents (FACs) are a fundamental component of the coupled solar wind-magnetosphere-ionosphere system. By assuming that FACs can be approximated by stationary infinite current sheets that do not change over the spacecraft crossing time, single-spacecraft magnetic field measurements can be used to estimate the currents flowing in space. By combining data from multiple spacecraft on similar orbits, these stationarity assumptions can be tested. In this technical report, we present a new technique that combines cross correlation and linear fitting of multiple spacecraft measurements to determine the reliability of the FAC estimates. We show that this technique can identify those intervals in which the currents estimated from single-spacecraft techniques are both well correlated and have similar amplitudes, thus meeting the spatial and temporal stationarity requirements. Using data from the European Space Agency's Swarm mission from 2014 to 2015, we show that larger-scale currents (>450 km) are well correlated and have a one-to-one fit up to 50% of the time, whereas small-scale (<50 km) currents show similar amplitudes only 1% of the time despite there being a good correlation 18% of the time. It is thus imperative to examine both the correlation and amplitude of the calculated FACs in order to assess both the validity of the underlying assumptions and hence ultimately the reliability of such single-spacecraft FAC estimates.
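
    A minimal sketch of the kind of correlation-plus-linear-fit consistency check described above is given below in Python; the acceptance thresholds, synthetic time series, and units are assumptions made for illustration, not the criteria used in the report.

    ```python
    import numpy as np

    def fac_stationarity_check(fac_a, fac_b, r_min=0.8, slope_band=(0.8, 1.25)):
        """Assess whether two single-spacecraft FAC estimates from closely
        spaced orbits are mutually consistent (hypothetical thresholds).

        Returns (correlation, slope, accepted).
        """
        r = np.corrcoef(fac_a, fac_b)[0, 1]      # cross correlation at zero lag
        slope, _ = np.polyfit(fac_a, fac_b, 1)   # linear fit: b ~ slope * a + c
        accepted = (r >= r_min) and (slope_band[0] <= slope <= slope_band[1])
        return r, slope, accepted

    # Toy example: the second series is a noisy, slightly scaled copy of the first
    rng = np.random.default_rng(0)
    fac_a = rng.normal(0.0, 1.0, 500).cumsum() * 0.05           # synthetic FAC (uA/m^2)
    fac_b = 0.95 * fac_a + rng.normal(0.0, 0.02, fac_a.size)
    print(fac_stationarity_check(fac_a, fac_b))
    ```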

  5. Administration and Regulation of a Military Retirement System Funded by Private Sector Investments

    DTIC Science & Technology

    1990-03-01

    private sector, as opposed to the current method of investing the funds within the Government, between 1985 and 1989, under assumptions of administrative and regulatory constraints; the timeframe was selected because in 1985 the Government began setting aside funds for future military retirement costs versus the pay-as-you-go method in previous years. The study had three objectives: (1) identify administrative factors that result from modifying the current MRS to an MRS funded by private sector investments; (2) identify regulatory constraints that

  6. Resistance-surface-based wildlife conservation connectivity modeling: Summary of efforts in the United States and guide for practitioners

    Treesearch

    Alisa A. Wade; Kevin S. McKelvey; Michael K. Schwartz

    2015-01-01

    Resistance-surface-based connectivity modeling has become a widespread tool for conservation planning. The current ease with which connectivity models can be created, however, masks the numerous untested assumptions underlying both the rules that produce the resistance surface and the algorithms used to locate low-cost paths across the target landscape. Here we present...

  7. User's Guide to Computing High School Graduation Rates. Volume 1. Technical Report: Review of Current and Proposed Graduation Indicators. NCES 2006-604

    ERIC Educational Resources Information Center

    Seastrom, Marilyn M.; Chapman, Chris; Stillwell, Robert; McGrath, Daniel; Peltola, Pia; Dinkes, Rachel; Xu, Zeyu

    2006-01-01

    The first volume of this report examines the existing measures of high school completion and the newly proposed proxy measures. This includes a description of the computational formulas, the data required for each indicator, the assumptions underlying each formula, the strengths and weaknesses of each indicator relative to a true cohort on-time…

  8. "Why Are We an Ignored Group?" Mainstream Educational Experiences and Current Life Satisfaction of Adults on the Autism Spectrum from an Online Survey

    ERIC Educational Resources Information Center

    Parsons, Sarah

    2015-01-01

    Adults on the autism spectrum are significantly under-represented in research on educational interventions and support, such that little is known about their views and experiences of schooling and how this prepared them for adult life. In addition, "good outcomes" in adult life are often judged according to normative assumptions and tend…

  9. Air Force Training: Further Analysis and Planning Needed to Improve Effectiveness

    DTIC Science & Technology

    2016-09-01

    training, and (3) established virtual training plans that include desirable characteristics of a comprehensive strategy. GAO reviewed Air Force training...requirements may not reflect current and emerging training needs, because the Air Force has not comprehensively reassessed the assumptions underlying them...include all desirable characteristics of a comprehensive strategy, such as a risk-based investment strategy or a time line for addressing training needs

  10. Respondent-Driven Sampling: An Assessment of Current Methodology.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2010-08-01

    Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; (3) that when a substantial fraction of the target population is sampled the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.
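
    For context, one of the standard RDS estimators whose assumptions are examined above is the inverse-degree-weighted (RDS-II) mean; a minimal Python sketch with invented outcome and degree data follows.

    ```python
    import numpy as np

    def rds_ii_mean(y, degree):
        """Volz-Heckathorn (RDS-II) estimator of a population mean:
        respondents are weighted by the inverse of their reported network
        degree to correct for the higher inclusion probability of
        well-connected people.  Shown only as a sketch of one of the
        'current estimators' discussed in the paper.
        """
        y = np.asarray(y, dtype=float)
        w = 1.0 / np.asarray(degree, dtype=float)
        return np.sum(w * y) / np.sum(w)

    # Hypothetical data: binary outcome and self-reported degrees
    y = [1, 0, 1, 1, 0, 1, 0, 0]
    degree = [20, 5, 50, 10, 4, 40, 8, 6]
    print(f"Naive sample mean : {np.mean(y):.3f}")
    print(f"RDS-II estimate   : {rds_ii_mean(y, degree):.3f}")
    ```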

  11. Defining the safe current limit for opening ID photon shutter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.

    The NSLS-II storage ring is protected from possible damage from insertion device (ID) synchrotron radiation by a dedicated active interlock system (AIS). It monitors electron beam position and angle and triggers a beam drop if the beam orbit exceeds the boundaries of the pre-calculated active interlock envelope (AIE). The beamlines (BL) and beamline frontends (FE) are designed under the assumption that the electron beam is interlocked within the AIE. For historic reasons the AIS engages the ID active interlock (AI-ID) at any non-zero beam current whenever the ID photon shutter (IDPS) is opened. Such an arrangement creates major inconveniences for BL commissioning. Apparently there is some IDPS safe current limit (SCL) under which the IDPS can be opened without interlocking the e-beam. The goal of this paper is to find such a limit.

  12. Niche syndromes, species extinction risks, and management under climate change.

    PubMed

    Sax, Dov F; Early, Regan; Bellemare, Jesse

    2013-09-01

    The current distributions of species are often assumed to correspond with the total set of environmental conditions under which species can persist. When this assumption is incorrect, extinction risk estimated from species distribution models can be misleading. The degree to which species can tolerate or even thrive under conditions found beyond their current distributions alters extinction risks, time lags in realizing those risks, and the usefulness of alternative management strategies. To inform these issues, we propose a conceptual framework within which empirical data could be used to generate hypotheses regarding the realized, fundamental, and 'tolerance' niche of species. Although these niche components have rarely been characterized over geographic scales, we suggest that this could be done for many plant species by comparing native, naturalized, and horticultural distributions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. On the Genealogy of Asexual Diploids

    NASA Astrophysics Data System (ADS)

    Lam, Fumei; Langley, Charles H.; Song, Yun S.

    Given molecular genetic data from diploid individuals that, at present, reproduce mostly or exclusively asexually without recombination, an important problem in evolutionary biology is detecting evidence of past sexual reproduction (i.e., meiosis and mating) and recombination (both meiotic and mitotic). However, currently there is a lack of computational tools for carrying out such a study. In this paper, we formulate a new problem of reconstructing diploid genealogies under the assumption of no sexual reproduction or recombination, with the ultimate goal being to devise genealogy-based tools for testing deviation from these assumptions. We first consider the infinite-sites model of mutation and develop linear-time algorithms to test the existence of an asexual diploid genealogy compatible with the infinite-sites model of mutation, and to construct one if it exists. Then, we relax the infinite-sites assumption and develop an integer linear programming formulation to reconstruct asexual diploid genealogies with the minimum number of homoplasy (back or recurrent mutation) events. We apply our algorithms on simulated data sets with sizes of biological interest.

  14. Porous gravity currents: Axisymmetric propagation in horizontally graded medium and a review of similarity solutions

    NASA Astrophysics Data System (ADS)

    Lauriola, I.; Felisa, G.; Petrolo, D.; Di Federico, V.; Longo, S.

    2018-05-01

    We present an investigation of the combined effect of fluid rheology and permeability variations on the propagation of porous gravity currents in axisymmetric geometry. The fluid is taken to be of power-law type with behaviour index n and the permeability to depend on the distance from the source as a power-law function with exponent β. The model represents the injection of a current of non-Newtonian fluid along a vertical bore hole in porous media with space-dependent properties. The injection is either instantaneous (α = 0) or continuous (α > 0). A self-similar solution describing the rate of propagation and the profile of the current is derived under the assumption of small aspect ratio between the current average thickness and length. The limitations on model parameters imposed by the model assumptions are discussed in depth, considering currents of increasing/decreasing velocity, thickness, and aspect ratio, and the sensitivity of the radius, thickness, and aspect ratio to model parameters. Several critical values of α and β discriminating between opposite tendencies are thus determined. Experimental validation is performed using shear-thinning suspensions and Newtonian mixtures in different regimes. A box filled with ballotini of different diameter is used to reproduce the current, with observations from the side and bottom. Most experimental results for the radius and profile of the current agree well with the self-similar solution except at the beginning of the process, due to the limitations of the 2-D assumption and to boundary effects near the injection zone. The results for this specific case corroborate a general model for currents with constant or time-varying volume of power-law fluids propagating in porous domains of plane or radial geometry, with uniform or varying permeability, and the possible effect of channelization. All results obtained in the present and previous papers for the key parameters governing the dynamics of power-law gravity currents are summarized and compared to infer the combinations of parameters leading to the fastest/slowest rate of propagation, and of variation of thickness and aspect ratio.

  15. Teaching for Tomorrow: An Exploratory Study of Prekindergarten Teachers' Underlying Assumptions about How Children Learn

    ERIC Educational Resources Information Center

    Flynn, Erin E.; Schachter, Rachel E.

    2017-01-01

    This study investigated eight prekindergarten teachers' underlying assumptions about how children learn, and how these assumptions were used to inform and enact instruction. By contextualizing teachers' knowledge and understanding as it is used in practice we were able to provide unique insight into the work of teaching. Participants focused on…

  16. Basic principles of respiratory function monitoring in ventilated newborns: A review.

    PubMed

    Schmalisch, Gerd

    2016-09-01

    Respiratory monitoring during mechanical ventilation provides a real-time picture of patient-ventilator interaction and is a prerequisite for lung-protective ventilation. Nowadays, measurements of airflow, tidal volume and applied pressures are standard in neonatal ventilators. The measurement of lung volume during mechanical ventilation by tracer gas washout techniques is still under development. The clinical use of capnography, although well established in adults, has not been embraced by neonatologists because of technical and methodological problems in very small infants. While the ventilatory parameters are well defined, the calculation of other physiological parameters is based upon specific assumptions which are difficult to verify. Incomplete knowledge of the theoretical background of these calculations and their limitations can lead to incorrect interpretations with clinical consequences. Therefore, the aim of this review was to describe the basic principles and the underlying assumptions of currently used methods for respiratory function monitoring in ventilated newborns and to highlight methodological limitations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. SOA formation by biogenic and carbonyl compounds: data evaluation and application.

    PubMed

    Ervens, Barbara; Kreidenweis, Sonia M

    2007-06-01

    The organic fraction of atmospheric aerosols affects the physical and chemical properties of the particles and their role in the climate system. Current models greatly underpredict secondary organic aerosol (SOA) mass. Based on a compilation of literature studies that address SOA formation, we discuss different parameters that affect the SOA formation efficiency of biogenic compounds (alpha-pinene, isoprene) and aliphatic aldehydes (glyoxal, hexanal, octanal, hexadienal). Applying a simple model, we find that the estimated SOA mass after one week of aerosol processing under typical atmospheric conditions is increased by a few microg m(-3) (low NO(x) conditions). Acid-catalyzed reactions can create > 50% more SOA mass than processes under neutral conditions; however, other parameters such as the concentration ratio of organics/NO(x), relative humidity, and absorbing mass are more significant. The assumption of irreversible SOA formation not limited by equilibrium in the particle phase or by depletion of the precursor leads to unrealistically high SOA masses for some of the assumptions we made (surface vs volume controlled processes).

  18. Mathematical modelling of clostridial acetone-butanol-ethanol fermentation.

    PubMed

    Millat, Thomas; Winzer, Klaus

    2017-03-01

    Clostridial acetone-butanol-ethanol (ABE) fermentation features a remarkable shift in the cellular metabolic activity from acid formation, acidogenesis, to the production of industrially relevant solvents, solventogenesis. In recent decades, mathematical models have been employed to elucidate the complex interlinked regulation and conditions that determine these two distinct metabolic states and govern the transition between them. In this review, we discuss these models with a focus on the mechanisms controlling intra- and extracellular changes between acidogenesis and solventogenesis. In particular, we critically evaluate underlying model assumptions and predictions in the light of current experimental knowledge. Towards this end, we briefly introduce key ideas and assumptions applied in the discussed modelling approaches, but forgo a comprehensive mathematical presentation. We distinguish between structural and dynamical models, which will be discussed in their chronological order to illustrate how new biological information facilitates the 'evolution' of mathematical models. Mathematical models and their analysis have significantly contributed to our knowledge of ABE fermentation and the underlying regulatory network which spans all levels of biological organization. However, the ties between the different levels of cellular regulation are not well understood. Furthermore, contradictory experimental and theoretical results challenge our current notion of ABE metabolic network structure. Thus, clostridial ABE fermentation still poses theoretical as well as experimental challenges which are best approached in close collaboration between modellers and experimentalists.

  19. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
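
    For orientation, the simplest special case of a change-in-ratio estimator (two subclasses, two sampling occasions, equal encounter probabilities for the subclasses) is sketched below in Python with invented numbers; the paper's generalized estimators relax exactly these kinds of assumptions.

    ```python
    def cir_two_sample(p1, p2, removed_x, removed_total):
        """Classic two-sample change-in-ratio estimator of initial population
        size (the simple special case that assumes equal encounter
        probabilities for both subclasses).

        p1, p2        : subclass-x proportion before and after the removal
        removed_x     : number of subclass-x animals removed
        removed_total : total number of animals removed
        """
        if p1 == p2:
            raise ValueError("estimator undefined when the ratio does not change")
        return (removed_x - p2 * removed_total) / (p1 - p2)

    # Hypothetical example: the proportion of males drops from 0.40 to 0.30
    # after removing 300 males and 100 females.
    n1 = cir_two_sample(p1=0.40, p2=0.30, removed_x=300, removed_total=400)
    print(f"Estimated pre-removal population size: {n1:.0f}")  # -> 1800
    ```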

  20. Strategies for reforestation under uncertain future climates: guidelines for Alberta, Canada.

    PubMed

    Gray, Laura K; Hamann, Andreas

    2011-01-01

    Commercial forestry programs normally use locally collected seed for reforestation under the assumption that tree populations are optimally adapted to local environments. However, in western Canada this assumption is no longer valid because of climate trends that have occurred over the last several decades. The objective of this study is to show how we can arrive at reforestation recommendations with alternative species and genotypes that are viable under a majority of climate change scenarios. In a case study for commercially important tree species of Alberta, we use an ecosystem-based bioclimate envelope modeling approach for western North America to project habitat for locally adapted populations of tree species using multi-model climate projections for the 2020s, 2050s and 2080s. We find that genotypes of species that are adapted to drier climatic conditions will be the preferred planting stock over much of the boreal forest that is commercially managed. Interestingly, no alternative species that are currently not present in Alberta can be recommended with any confidence. Finally, we observe large uncertainties in projections of suitable habitat that make reforestation planning beyond the 2050s difficult for most species. More than 50,000 hectares of forests are commercially planted every year in Alberta. Choosing alternative planting stock, suitable for expected future climates, could therefore offer an effective climate change adaptation strategy at little additional cost. Habitat projections for locally adapted tree populations under observed climate change conform well to projections for the 2020s, which suggests that it is a safe strategy to change current reforestation practices and adapt to new climatic realities through assisted migration prescriptions.

  1. Gene network reconstruction from transcriptional dynamics under kinetic model uncertainty: a case for the second derivative

    PubMed Central

    Bickel, David R.; Montazeri, Zahra; Hsieh, Pei-Chun; Beatty, Mary; Lawit, Shai J.; Bate, Nicholas J.

    2009-01-01

    Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Contact: dbickel@uottawa.ca Supplementary information: http://www.davidbickel.com PMID:19218351
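
    The "case for the second derivative" can be pictured with a simple finite-difference calculation on a single expression time course; the Python sketch below uses an invented time series and is not the authors' R implementation.

    ```python
    import numpy as np

    def expression_derivatives(t, expr):
        """Finite-difference first and second derivatives of a gene's
        expression time course, the quantities whose use the paper argues
        for when the kinetic model is uncertain.  Purely illustrative.
        """
        d1 = np.gradient(expr, t)   # d(expression)/dt on a non-uniform grid
        d2 = np.gradient(d1, t)     # d^2(expression)/dt^2
        return d1, d2

    # Hypothetical time course (hours) for one transcript
    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
    expr = np.array([1.0, 1.2, 1.9, 3.5, 4.8, 5.0, 4.2])
    d1, d2 = expression_derivatives(t, expr)
    print(np.round(d1, 3))
    print(np.round(d2, 3))
    ```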

  2. Missing data in FFQs: making assumptions about item non-response.

    PubMed

    Lamb, Karen E; Olstad, Dana Lee; Nguyen, Cattram; Milte, Catherine; McNaughton, Sarah A

    2017-04-01

    FFQs are a popular method of capturing dietary information in epidemiological studies and may be used to derive dietary exposures such as nutrient intake or overall dietary patterns and diet quality. As FFQs can involve large numbers of questions, participants may fail to respond to all questions, leaving researchers to decide how to deal with missing data when deriving intake measures. The aim of the present commentary is to discuss the current practice for dealing with item non-response in FFQs and to propose a research agenda for reporting and handling missing data in FFQs. Single imputation techniques, such as zero imputation (assuming no consumption of the item) or mean imputation, are commonly used to deal with item non-response in FFQs. However, single imputation methods make strong assumptions about the missing data mechanism and do not reflect the uncertainty created by the missing data. This can lead to incorrect inference about associations between diet and health outcomes. Although the use of multiple imputation methods in epidemiology has increased, these have seldom been used in the field of nutritional epidemiology to address missing data in FFQs. We discuss methods for dealing with item non-response in FFQs, highlighting the assumptions made under each approach. Researchers analysing FFQs should ensure that missing data are handled appropriately and clearly report how missing data were treated in analyses. Simulation studies are required to enable systematic evaluation of the utility of various methods for handling item non-response in FFQs under different assumptions about the missing data mechanism.
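
    A small simulation makes the point about single imputation concrete: zero imputation biases the estimated mean intake downward, while mean imputation preserves the mean but shrinks the variance and hence understates uncertainty. The Python sketch below uses invented intake data and a missing-completely-at-random mechanism; it is not an analysis from the commentary, and in practice multiple imputation would be preferred.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulate "true" daily intakes (g/day) of one FFQ item for 1000 people
    true_intake = rng.gamma(shape=2.0, scale=15.0, size=1000)

    # Make 20% of responses missing completely at random
    missing = rng.random(true_intake.size) < 0.20
    observed = np.where(missing, np.nan, true_intake)

    zero_imputed = np.where(missing, 0.0, true_intake)
    mean_imputed = np.where(missing, np.nanmean(observed), true_intake)

    print(f"True mean intake     : {true_intake.mean():6.2f}")
    print(f"Zero-imputation mean : {zero_imputed.mean():6.2f}")   # biased low
    print(f"Mean-imputation mean : {mean_imputed.mean():6.2f}")
    print(f"Variance shrinks under mean imputation: "
          f"{true_intake.var():.1f} -> {mean_imputed.var():.1f}")
    ```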

  3. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    PubMed

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded in the analysis to avoid mis-specifying the model. Otherwise, using covariates that clearly violate the assumption would mean invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, wealth index, total number of children ever born and the child's birth order. The results further indicated that the predictive performance for random survival forests built using covariates including those that violate the PH assumption was higher than that for random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods in analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.

  4. Optimal management of non-Markovian biological populations

    USGS Publications Warehouse

    Williams, B.K.

    2007-01-01

    Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.

  5. The myths of coping with loss in undergraduate psychiatric nursing books.

    PubMed

    Holman, E Alison; Perisho, Jennifer; Edwards, Ada; Mlakar, Natalie

    2010-12-01

    Nurses often help patients cope with loss. Recent research has cast doubt on the validity of early theories about loss and grief commonly taught to nurses. We systematically examined the accuracy of information on coping with loss presented in 23 commonly used undergraduate psychiatric nursing books. All 23 books contained at least one unsupported assumption (myth) about loss and grief. In 78% of these books, authors described four or more myths and only one evidence-based finding about coping with loss. On balance most books provided details on the myths about grief and loss with minimal discussion of the current evidence. Authors of psychiatric nursing books continue to disseminate unsupported theories about grief responses without adequately acknowledging evidence challenging core assumptions underlying them. Copyright © 2010 Wiley Periodicals, Inc.

  6. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    ERIC Educational Resources Information Center

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  7. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  8. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    ERIC Educational Resources Information Center

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  9. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  10. Adaptive control: Myths and realities

    NASA Technical Reports Server (NTRS)

    Athans, M.; Valavani, L.

    1984-01-01

    It was found that all currently existing globally stable adaptive algorithms have three basic properties in common: positive realness of the error equation, square-integrability of the parameter adjustment law, and the need for sufficient excitation for asymptotic parameter convergence. Of the three, the first property is of primary importance since it satisfies a sufficient condition for stability of the overall system, which is a baseline design objective. The second property has been instrumental in the proof of asymptotic error convergence to zero, while the third addresses the issue of parameter convergence. Positive-real error dynamics can be generated only if the relative degree (excess of poles over zeroes) of the process to be controlled is known exactly; this, in turn, implies perfect modeling. This and other assumptions, such as absence of nonminimum phase plant zeros on which the mathematical arguments are based, do not necessarily reflect properties of real systems. As a result, it is natural to inquire what happens to the designs under less than ideal assumptions. The issues arising from violation of the exact modeling assumption, which is extremely restrictive in practice and impacts the most important system property, stability, are discussed.

  11. Strategies for Reforestation under Uncertain Future Climates: Guidelines for Alberta, Canada

    PubMed Central

    Gray, Laura K.; Hamann, Andreas

    2011-01-01

    Background Commercial forestry programs normally use locally collected seed for reforestation under the assumption that tree populations are optimally adapted to local environments. However, in western Canada this assumption is no longer valid because of climate trends that have occurred over the last several decades. The objective of this study is to show how we can arrive at reforestation recommendations with alternative species and genotypes that are viable under a majority of climate change scenarios. Methodology/Principal Findings In a case study for commercially important tree species of Alberta, we use an ecosystem-based bioclimate envelope modeling approach for western North America to project habitat for locally adapted populations of tree species using multi-model climate projections for the 2020s, 2050s and 2080s. We find that genotypes of species that are adapted to drier climatic conditions will be the preferred planting stock over much of the boreal forest that is commercially managed. Interestingly, no alternative species that are currently not present in Alberta can be recommended with any confidence. Finally, we observe large uncertainties in projections of suitable habitat that make reforestation planning beyond the 2050s difficult for most species. Conclusion/Significance More than 50,000 hectares of forests are commercially planted every year in Alberta. Choosing alternative planting stock, suitable for expected future climates, could therefore offer an effective climate change adaptation strategy at little additional cost. Habitat projections for locally adapted tree populations under observed climate change conform well to projections for the 2020s, which suggests that it is a safe strategy to change current reforestation practices and adapt to new climatic realities through assisted migration prescriptions. PMID:21853061

  12. Automated composite ellipsoid modelling for high frequency GTD analysis

    NASA Technical Reports Server (NTRS)

    Sze, K. Y.; Rojas, R. G.; Klevenow, F. T.; Scheick, J. T.

    1991-01-01

    The preliminary results of a scheme currently being developed to fit a composite ellipsoid to the fuselage of a helicopter in the vicinity of the antenna location are discussed under the assumption that the antenna is mounted on the fuselage. The parameters of the close-fit composite ellipsoid would then be utilized as inputs into NEWAIR3, a code programmed in FORTRAN 77 for high frequency Geometrical Theory of Diffraction (GTD) Analysis of the radiation of airborne antennas.

  13. Tap density equations of granular powders based on the rate process theory and the free volume concept.

    PubMed

    Hao, Tian

    2015-02-28

    The tap density of a granular powder is often linked to the flowability via the Carr index that measures how tightly a powder can be packed, under an assumption that more easily packed powders usually flow poorly. Understanding how particles are packed is important for revealing why a powder flows better than others. There are two types of empirical equations that were proposed to fit the experimental data of packing fractions vs. numbers of taps in the literature: the inverse logarithmic and the stretched exponential. Using the rate process theory and the free volume concept under the assumption that particles will obey similar thermodynamic laws during the tapping process if the "granular temperature" is defined in a different way, we obtain the tap density equations, and they are reducible to the two empirical equations currently widely used in the literature. Our equations could potentially fit experimental data better with an additional adjustable parameter. The tapping amplitude and frequency, the weight of the granular materials, and the environmental temperature are grouped into this parameter that weighs the pace of the packing process. The current results, in conjunction with our previous findings, may imply that both "dry" (granular) and "wet" (colloidal and polymeric) particle systems are governed by the same physical mechanisms in terms of the role of the free volume and how particles behave (a rate controlled process).
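
    For illustration, the two empirical forms referred to above (an inverse-logarithmic and a stretched-exponential relaxation) can be fitted to tap-density data with standard least squares; the Python sketch below uses synthetic data and generic parameter names, not the notation or data of the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_log(n, rho_f, d_rho, b, tau):
        """Inverse-logarithmic relaxation of packing fraction with tap number."""
        return rho_f - d_rho / (1.0 + b * np.log1p(n / tau))

    def stretched_exp(n, rho_f, d_rho, tau, beta):
        """Stretched-exponential (KWW-type) relaxation."""
        return rho_f - d_rho * np.exp(-(n / tau) ** beta)

    # Synthetic packing-fraction data versus number of taps
    rng = np.random.default_rng(1)
    n = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500, 1000], dtype=float)
    rho = stretched_exp(n, 0.65, 0.07, 60.0, 0.7) + rng.normal(0.0, 5e-4, n.size)

    p_log, _ = curve_fit(inverse_log, n, rho, p0=[0.65, 0.07, 1.0, 10.0],
                         bounds=(1e-6, [1.0, 1.0, 50.0, 5000.0]), max_nfev=20000)
    p_kww, _ = curve_fit(stretched_exp, n, rho, p0=[0.65, 0.07, 50.0, 0.7],
                         bounds=(1e-6, [1.0, 1.0, 5000.0, 2.0]), max_nfev=20000)
    print("inverse-log parameters  :", np.round(p_log, 4))
    print("stretched-exp parameters:", np.round(p_kww, 4))
    ```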

  14. Shared additive genetic influences on DSM-IV criteria for alcohol dependence in subjects of European ancestry.

    PubMed

    Palmer, Rohan H C; McGeary, John E; Heath, Andrew C; Keller, Matthew C; Brick, Leslie A; Knopik, Valerie S

    2015-12-01

    Genetic studies of alcohol dependence (AD) have identified several candidate loci and genes, but most observed effects are small and difficult to reproduce. A plausible explanation for inconsistent findings may be a violation of the assumption that genetic factors contributing to each of the seven DSM-IV criteria point to a single underlying dimension of risk. Given that recent twin studies suggest that the genetic architecture of AD is complex and probably involves multiple discrete genetic factors, the current study employed common single nucleotide polymorphisms in two multivariate genetic models to examine the assumption that the genetic risk underlying DSM-IV AD is unitary. AD symptoms and genome-wide single nucleotide polymorphism (SNP) data from 2596 individuals of European descent from the Study of Addiction: Genetics and Environment were analyzed using genomic-relatedness-matrix restricted maximum likelihood. DSM-IV AD symptom covariance was described using two multivariate genetic factor models. Common SNPs explained 30% (standard error=0.136, P=0.012) of the variance in AD diagnosis. Additive genetic effects varied across AD symptoms. The common pathway model approach suggested that symptoms could be described by a single latent variable that had a SNP heritability of 31% (0.130, P=0.008). Similarly, the exploratory genetic factor model approach suggested that the genetic variance/covariance across symptoms could be represented by a single genetic factor that accounted for at least 60% of the genetic variance in any one symptom. Additive genetic effects on DSM-IV alcohol dependence criteria overlap. The assumption of common genetic effects across alcohol dependence symptoms appears to be a valid assumption. © 2015 Society for the Study of Addiction.

  15. Validation of the underlying assumptions of the quality-adjusted life-years outcome: results from the ECHOUTCOME European project.

    PubMed

    Beresniak, Ariel; Medina-Lara, Antonieta; Auray, Jean Paul; De Wever, Alain; Praet, Jean-Claude; Tarricone, Rosanna; Torbica, Aleksandra; Dupont, Danielle; Lamure, Michel; Duru, Gerard

    2015-01-01

    Quality-adjusted life-years (QALYs) have been used since the 1980s as a standard health outcome measure for conducting cost-utility analyses, which are often inadequately labeled as 'cost-effectiveness analyses'. This synthetic outcome, which combines the quantity of life lived with its quality expressed as a preference score, is currently recommended as reference case by some health technology assessment (HTA) agencies. While critics of the QALY approach have expressed concerns about equity and ethical issues, surprisingly, very few have tested the basic methodological assumptions supporting the QALY equation so as to establish its scientific validity. The main objective of the ECHOUTCOME European project was to test the validity of the underlying assumptions of the QALY outcome and its relevance in health decision making. An experiment has been conducted with 1,361 subjects from Belgium, France, Italy, and the UK. The subjects were asked to express their preferences regarding various hypothetical health states derived from combining different health states with time durations in order to compare observed utility values of the couples (health state, time) and calculated utility values using the QALY formula. Observed and calculated utility values of the couples (health state, time) were significantly different, confirming that preferences expressed by the respondents were not consistent with the QALY theoretical assumptions. This European study contributes to establishing that the QALY multiplicative model is an invalid measure. This explains why costs/QALY estimates may vary greatly, leading to inconsistent recommendations relevant to providing access to innovative medicines and health technologies. HTA agencies should consider other more robust methodological approaches to guide reimbursement decisions.
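
    The multiplicative assumption being tested is simply quality weight times time; the Python sketch below shows how that model scores two hypothetical health profiles as exactly equivalent (5 QALYs each), which is the kind of calculated equivalence that the observed preferences in the experiment did not support. The numbers are illustrative only, not ECHOUTCOME data.

    ```python
    def qaly(utility, years, discount_rate=0.0):
        """QALYs under the standard multiplicative model: quality weight x time,
        optionally discounted per whole year lived."""
        return sum(utility / (1.0 + discount_rate) ** t for t in range(int(years)))

    # Two hypothetical profiles the multiplicative model treats as identical:
    # 10 years in a state valued 0.5 versus 5 years in full health (utility 1.0).
    print(qaly(utility=0.5, years=10))   # 5.0 QALYs
    print(qaly(utility=1.0, years=5))    # 5.0 QALYs
    ```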

  16. Some More Sensitive Measures of Sensitivity and Response Bias

    NASA Technical Reports Server (NTRS)

    Balakrishnan, J. D.

    1998-01-01

    In this article, the author proposes a new pair of sensitivity and response bias indices and compares them to other measures currently available, including d' and Beta of signal detection theory. Unlike d' and Beta, these new performance measures do not depend on specific distributional assumptions or assumptions about the transformation from stimulus information to a discrimination judgment. With simulated and empirical data, the new sensitivity index is shown to be more accurate than d' and 16 other indices when these measures are used to compare the sensitivity levels of 2 experimental conditions. Results from a perceptual discrimination experiment demonstrate the feasibility of the new distribution-free bias index and suggest that biases of the type defined within the signal detection theory framework (i.e., the placement of a decision criterion) do not exist, even under an asymmetric payoff manipulation.
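
    For reference, the distribution-dependent indices that the new measures are compared against are computed from hit and false-alarm rates as in the Python sketch below; the example rates are invented.

    ```python
    from scipy.stats import norm

    def sdt_indices(hit_rate, fa_rate):
        """Classical signal detection indices computed from hit and
        false-alarm rates (the distribution-dependent measures that the
        article compares against its new distribution-free indices)."""
        z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_h - z_f                    # sensitivity
        criterion = -0.5 * (z_h + z_f)         # response bias (c)
        beta = norm.pdf(z_h) / norm.pdf(z_f)   # likelihood-ratio bias
        return d_prime, criterion, beta

    print(sdt_indices(hit_rate=0.85, fa_rate=0.20))
    ```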

  17. Comparison of Methods for Characterizing Nonideal Solute Self-Association by Sedimentation Equilibrium

    PubMed Central

    Scott, David J.; Winzor, Donald J.

    2009-01-01

    We have examined in detail analytical solutions of expressions for sedimentation equilibrium in the analytical ultracentrifuge to describe self-association under nonideal conditions. We find that those containing the radial dependence of total solute concentration that incorporate the Adams-Fujita assumption for composition-dependence of activity coefficients reveal potential shortcomings for characterizing such systems. Similar deficiencies are shown in the use of the NONLIN software incorporating the same assumption about the interrelationship between activity coefficients for monomer and polymer species. These difficulties can be overcome by iterative analyses incorporating expressions for the composition-dependence of activity coefficients predicted by excluded volume considerations. A recommendation is therefore made for the replacement of current software packages by programs that incorporate rigorous statistical-mechanical allowance for thermodynamic nonideality in sedimentation equilibrium distributions reflecting solute self-association. PMID:19651047

  18. Discussion of examination of a cored hydraulic fracture in a deep gas well

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolte, K.G.

    Warpinski et al. document information found from a core through a formation after a hydraulic fracture treatment. As they indicate, the core provides the first detailed evaluation of an actual propped hydraulic fracture away from the well and at a significant depth, and this evaluation leads to findings that deviate substantially from the assumptions incorporated into current fracturing models. In this discussion, a defense of current fracture design assumptions is developed. The affirmation of current assumptions, for general industry applications, is based on an assessment of the global impact of the local complexity found in the core. The assessment leads to recommendations for the evolution of fracture design practice.

  19. Updated Intensity - Duration - Frequency Curves Under Different Future Climate Scenarios

    NASA Astrophysics Data System (ADS)

    Ragno, E.; AghaKouchak, A.

    2016-12-01

    Current infrastructure design procedures rely on the use of Intensity - Duration - Frequency (IDF) curves retrieved under the assumption of temporal stationarity, meaning that occurrences of extreme events are expected to be time invariant. However, numerous studies have observed more severe extreme events over time. Hence, the stationarity assumption for extreme analysis may not be appropriate in a warming climate. This issue raises concerns regarding the safety and resilience of the existing and future infrastructures. Here we employ historical and projected (RCP 8.5) CMIP5 runs to investigate IDF curves of 14 urban areas across the United States. We first statistically assess changes in precipitation extremes using an energy-based test for equal distributions. Then, through a Bayesian inference approach for stationary and non-stationary extreme value analysis, we provide updated IDF curves based on climatic model projections. This presentation summarizes the projected changes in statistics of extremes. We show that, based on CMIP5 simulations, extreme precipitation events in some urban areas can be 20% more severe in the future, even when projected annual mean precipitation is expected to remain similar to the ground-based climatology.
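
    A conventional stationary building block for IDF curves is a generalized extreme value (GEV) fit to annual maxima, from which return levels follow; the Python sketch below uses synthetic annual maxima and a stationary fit only, whereas the study applies a Bayesian non-stationary analysis to CMIP5 output.

    ```python
    from scipy.stats import genextreme

    # Hypothetical annual-maximum 1-hour precipitation totals (mm)
    annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=60, random_state=7)

    # Stationary GEV fit: the conventional IDF building block
    shape, loc, scale = genextreme.fit(annual_max)

    # Return level for a T-year event is the (1 - 1/T) quantile
    for T in (2, 10, 25, 50, 100):
        level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
        print(f"{T:>3}-year return level: {level:6.1f} mm")
    ```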

  20. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L. J.; Yu, K. X.; Zhang, M., E-mail: zhangming@hust.edu.cn

    2016-01-15

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  1. 7 CFR 1957.2 - Transfer with assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Rural Housing Trust 1987-1, and who are eligible for an FmHA or its successor agency under Public Law 103-354 § 502 loan will be given the same priority by FmHA or its successor agency under Public Law.... FmHA or its successor agency under Public Law 103-354 regulations governing transfers and assumptions...

  2. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
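
    For context, the PBR algorithm referred to above is usually written as N_min x 0.5 R_max x F_r (Wade 1998), and the recommended alternative is to test a candidate removal level inside a matrix projection model. The Python sketch below does both with invented, seabird-like parameter values; it is not the population model used in the paper.

    ```python
    import numpy as np

    def pbr(n_min, r_max, recovery_factor):
        """Potential Biological Removal (Wade 1998): N_min * 0.5 * R_max * F_r."""
        return n_min * 0.5 * r_max * recovery_factor

    def growth_rate(fecundity, s_juv, s_ad, extra_mortality=0.0):
        """Dominant eigenvalue of a toy two-stage (juvenile/adult) projection
        matrix, with additional anthropogenic mortality applied to both stages.
        Demographic values used below are invented, seabird-like numbers."""
        sj = s_juv * (1.0 - extra_mortality)
        sa = s_ad * (1.0 - extra_mortality)
        A = np.array([[0.0, fecundity],   # adults produce juveniles
                      [sj,  sa]])         # juveniles mature; adults survive
        return float(np.max(np.real(np.linalg.eigvals(A))))

    n = 50_000
    removals = pbr(n_min=n, r_max=0.10, recovery_factor=0.5)   # birds per year
    h = removals / n                                           # per-capita rate

    lam_without = growth_rate(0.35, 0.70, 0.92)
    lam_with = growth_rate(0.35, 0.70, 0.92, extra_mortality=h)
    print(f"PBR = {removals:.0f} birds/yr (per-capita rate {h:.4f})")
    print(f"lambda without removals: {lam_without:.3f}")
    print(f"lambda with removals   : {lam_with:.3f}")
    ```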

  3. Co-Dependency: An Examination of Underlying Assumptions.

    ERIC Educational Resources Information Center

    Myer, Rick A.; And Others

    1991-01-01

    Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…

  4. Ground-Based GPS Sensing of Azimuthal Variations in Precipitable Water Vapor

    NASA Technical Reports Server (NTRS)

    Kroger, P. M.; Bar-Sever, Y. E.

    1997-01-01

    Current models for troposphere delay employed by GPS software packages map the total zenith delay to the line-of-sight delay of the individual satellite-receiver link under the assumption of azimuthal homogeneity. This could be a poor approximation for many sites, in particular, those located at an ocean front or next to a mountain range. We have modified the GIPSY-OASIS II software package to include a simple non-symmetric mapping function (MacMillan, 1995) which introduces two new parameters.
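
    For reference, a two-parameter azimuthal gradient extension of the kind attributed to MacMillan (1995) is commonly written in the form below; the notation (m(e) for the symmetric mapping function, G_N and G_E for the two added gradient parameters) is generic and may differ from the exact parameterization implemented in GIPSY-OASIS II.

    ```latex
    % Hedged sketch of a two-parameter azimuthal gradient term; e is elevation,
    % \phi is azimuth, \Delta L_z is the zenith delay, m(e) the symmetric mapping
    % function, and G_N, G_E the two added gradient parameters.
    \Delta L(e,\phi) \;\approx\; m(e)\,\Delta L_z
      \;+\; m(e)\cot(e)\,\bigl[ G_N \cos\phi + G_E \sin\phi \bigr]
    ```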

  5. How Mean is the Mean?

    PubMed Central

    Speelman, Craig P.; McGann, Marek

    2013-01-01

    In this paper we voice concerns about the uncritical manner in which the mean is often used as a summary statistic in psychological research. We identify a number of implicit assumptions underlying the use of the mean and argue that the fragility of these assumptions should be more carefully considered. We examine some of the ways in which the potential violation of these assumptions can lead us into significant theoretical and methodological error. Illustrations of alternative models of research already extant within Psychology are used to explore methods of research that are less mean-dependent, and to suggest that a critical assessment of the assumptions underlying its use in research should play a more explicit role in the process of study design and review. PMID:23888147
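
    A minimal illustration of the kind of fragility at issue, using synthetic skewed data (a common shape for response times); the distribution and numbers are hypothetical, chosen only to show how the mean and median can diverge.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical skewed "response time" data: most values are small,
    # a few are very large, as is common in psychological measurements.
    data = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

    print(f"mean   = {data.mean():.2f}")      # pulled upward by the long tail
    print(f"median = {np.median(data):.2f}")  # closer to the typical observation
    ```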

  6. Why is it Doing That? - Assumptions about the FMS

    NASA Technical Reports Server (NTRS)

    Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.

  7. Inferred flows of electric currents in solar active regions

    NASA Technical Reports Server (NTRS)

    Ding, Y. J.; Hong, Q. F.; Hagyard, M. J.; Deloach, A. C.

    1985-01-01

    Techniques to identify sources of major current systems in active regions and their channels of flow are explored. Measured photospheric vector magnetic fields together with high resolution white light and H-alpha photographs provide the data base to derive the current systems in the photosphere and chromosphere of a solar active region. Simple mathematical constructions of active region fields and currents are used to interpret these data under the assumptions that the fields in the lower atmosphere (below 200 km) may not be force free but those in the chromosphere and higher are. The results obtained for the complex active region AR 2372 are: (1) Spots exhibiting significant spiral structure in the penumbral filaments were the source of vertical currents at the photospheric surface; (2) Magnetic neutral lines where the transverse magnetic field was strongly sheared were channels along which a strong current system flowed; (3) The inferred current systems produced a neutral sheet and oppositely-flowing currents in the area of the magnetic delta configuration that was the site of flaring.

  8. Projections of health care expenditures as a share of the GDP: actuarial and macroeconomic approaches.

    PubMed Central

    Warshawsky, M J

    1994-01-01

    STUDY QUESTION. Can the steady increases in health care expenditures as a share of GDP projected by widely cited actuarial models be rationalized by a macroeconomic model with sensible parameters and specification? DATA SOURCES. National Income and Product Accounts, and Social Security and Health Care Financing Administration are the data sources used for parameter estimates. STUDY DESIGN. Health care expenditures as a share of gross domestic product (GDP) are projected using two methodological approaches--actuarial and macroeconomic--and under various assumptions. The general equilibrium macroeconomic approach has the advantage of allowing an investigation of the causes of growth in the health care sector and its consequences for the overall economy. DATA COLLECTION METHODS. Simulations are used. PRINCIPAL FINDINGS. Both models unanimously project a continued increase in the ratio of health care expenditures to GDP. Under the most conservative assumptions, that is, robust economic growth, improved demographic trends, or a significant moderation in the rate of health care price inflation, the health care sector will consume more than a quarter of national output by 2065. Under other (perhaps more realistic) assumptions, including a continuation of current trends, both approaches predict that health care expenditures will comprise between a third and a half of national output. In the macroeconomic model, the increasing use of capital goods in the health care sector explains the observed rise in relative prices. Moreover, this "capital deepening" implies that a relatively modest fraction of the labor force is employed in health care and that the rest of the economy is increasingly starved for capital, resulting in a declining standard of living. PMID:8063567

  9. A Computational Framework for Analyzing Stochasticity in Gene Expression

    PubMed Central

    Sherman, Marc S.; Cohen, Barak A.

    2014-01-01

    Stochastic fluctuations in gene expression give rise to distributions of protein levels across cell populations. Despite a mounting number of theoretical models explaining stochasticity in protein expression, we lack a robust, efficient, assumption-free approach for inferring the molecular mechanisms that underlie the shape of protein distributions. Here we propose a method for inferring sets of biochemical rate constants that govern chromatin modification, transcription, translation, and RNA and protein degradation from stochasticity in protein expression. We asked whether the rates of these underlying processes can be estimated accurately from protein expression distributions, in the absence of any limiting assumptions. To do this, we (1) derived analytical solutions for the first four moments of the protein distribution, (2) found that these four moments completely capture the shape of protein distributions, and (3) developed an efficient algorithm for inferring gene expression rate constants from the moments of protein distributions. Using this algorithm we find that most protein distributions are consistent with a large number of different biochemical rate constant sets. Despite this degeneracy, the solution space of rate constants almost always informs on underlying mechanism. For example, we distinguish between regimes where transcriptional bursting occurs from regimes reflecting constitutive transcript production. Our method agrees with the current standard approach, and in the restrictive regime where the standard method operates, also identifies rate constants not previously obtainable. Even without making any assumptions we obtain estimates of individual biochemical rate constants, or meaningful ratios of rate constants, in 91% of tested cases. In some cases our method identified all of the underlying rate constants. The framework developed here will be a powerful tool for deducing the contributions of particular molecular mechanisms to specific patterns of gene expression. PMID:24811315
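
    As a toy illustration of working with the first four moments of a protein distribution, the sketch below computes sample moments from synthetic single-cell protein levels; the gamma-distributed data are a stand-in assumption, whereas the paper derives the moments analytically from a mechanistic model.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical single-cell protein levels (arbitrary units); the study itself
    # works with analytical moments of a mechanistic model, not with samples.
    rng = np.random.default_rng(1)
    protein = rng.gamma(shape=2.0, scale=50.0, size=5000)

    moments = {
        "mean": protein.mean(),
        "variance": protein.var(ddof=1),
        "skewness": stats.skew(protein),
        "kurtosis": stats.kurtosis(protein),  # excess kurtosis
    }
    for name, value in moments.items():
        print(f"{name:9s} {value:10.3f}")
    ```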

  10. Modeling the impact of novel male contraceptive methods on reductions in unintended pregnancies in Nigeria, South Africa, and the United States.

    PubMed

    Dorman, Emily; Perry, Brian; Polis, Chelsea B; Campo-Engelstein, Lisa; Shattuck, Dominick; Hamlin, Aaron; Aiken, Abigail; Trussell, James; Sokal, David

    2018-01-01

    We modeled the potential impact of novel male contraceptive methods on averting unintended pregnancies in the United States, South Africa, and Nigeria. We used an established methodology for calculating the number of couple-years of protection provided by a given contraceptive method mix. We compared a "current scenario" (reflecting current use of existing methods in each country) against "future scenarios" (reflecting whether a male oral pill or a reversible vas occlusion was introduced) in order to estimate the impact on unintended pregnancies averted. Where possible, we based our assumptions on acceptability data from studies on uptake of novel male contraceptive methods. Assuming that only 10% of interested men would take up a novel male method and that users would comprise both switchers (from existing methods) and brand-new users of contraception, the model estimated that introducing the male pill or reversible vas occlusion would decrease unintended pregnancies by 3.5% to 5.2% in the United States, by 3.2% to 5% in South Africa, and by 30.4% to 38% in Nigeria. Alternative model scenarios are presented assuming uptake as high as 15% and as low as 5% in each location. Model results were sensitive to assumptions regarding novel method uptake and proportion of switchers vs. new users. Even under conservative assumptions, the introduction of a male pill or temporary vas occlusion could meaningfully contribute to averting unintended pregnancies in a variety of contexts, especially in settings where current use of contraception is low. Novel male contraceptives could play a meaningful role in averting unintended pregnancies in a variety of contexts. The potential impact is especially great in settings where current use of contraception is low and if novel methods can attract new contraceptive users. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
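
    The couple-years-of-protection style bookkeeping described above can be illustrated with a deliberately simplified sketch; the function name and every number below are hypothetical and are not taken from the study, which used country-specific method mixes and uptake data.

    ```python
    # Minimal, hypothetical sketch of a couple-years-of-protection style
    # calculation. All numbers below are illustrative, NOT from the study.

    def pregnancies_averted(users, effectiveness, baseline_pregnancy_rate):
        """Pregnancies averted per year among `users` who would otherwise face
        `baseline_pregnancy_rate` pregnancies per woman-year without any method."""
        return users * baseline_pregnancy_rate * effectiveness

    current = pregnancies_averted(users=100_000, effectiveness=0.88,
                                  baseline_pregnancy_rate=0.85)
    with_male_pill = pregnancies_averted(users=110_000, effectiveness=0.90,
                                         baseline_pregnancy_rate=0.85)
    print(f"additional pregnancies averted: {with_male_pill - current:,.0f}")
    ```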

  11. Systematic Reviews of Animal Models: Methodology versus Epistemology

    PubMed Central

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions. PMID:23372426

  12. U(2)⁵ flavor symmetry and lepton universality violation in W → τν̄τ

    DOE PAGES

    Filipuzzi, Alberto; Portolés, Jorge; González-Alonso, Martín

    2012-06-26

    The seeming violation of universality in the τ lepton coupling to the W boson suggested by LEP-II data is studied using an effective field theory (EFT) approach. Within this framework we explore how this feature fits into the current constraints from electroweak precision observables using different assumptions about the flavor structure of New Physics, namely [U(2)×U(1)]⁵ and U(2)⁵. We show the importance of leptonic and semileptonic tau decay measurements, giving 3–4 TeV bounds on the New Physics effective scale at 90% C.L. We conclude under very general assumptions that it is not possible to accommodate this deviation from universality in the EFT framework, and thus such a signal could only be explained by the introduction of light degrees of freedom or New Physics strongly coupled at the electroweak scale.

  13. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.

  14. Experiment and simulation study on unidirectional carbon fiber composite component under dynamic 3 point bending loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Guowei; Sun, Qingping; Zeng, Danielle

    In the current work, unidirectional (UD) carbon fiber composite hat-section components with two different layups are studied under dynamic 3 point bending loading. The experiments are performed at various impact velocities, and the effects of impactor velocity and layup on acceleration histories are compared. A macro model is established with LS-Dyna for more detailed study. The simulation results show that delamination plays an important role during the dynamic 3 point bending test. Based on analysis of the high-speed camera footage, the sidewall of the hat-section shows significant buckling rather than failure. Without considering the delamination, the current material model cannot capture the post-failure phenomenon correctly. The sidewall delamination is modeled by the assumption of a larger failure strain together with slim parameters, and the simulation results for different impact velocities and layups match the experimental results reasonably well.

  15. Right-handed charged currents in the era of the Large Hadron Collider

    DOE PAGES

    Alioli, Simone; Cirigliano, Vincenzo; Dekens, Wouter Gerard; ...

    2017-05-16

    We discuss the phenomenology of right-handed charged currents in the framework of the Standard Model Effective Field Theory, in which they arise due to a single gauge-invariant dimension-six operator. We study the manifestations of the nine complex couplings of the W to right-handed quarks in collider physics, flavor physics, and low-energy precision measurements. We first obtain constraints on the couplings under the assumption that the right-handed operator is the dominant correction to the Standard Model at observable energies. We subsequently study the impact of degeneracies with other Beyond-the-Standard-Model effective interactions and identify observables, both at colliders and low-energy experiments, that would uniquely point to right-handed charged currents.

  16. High efficiency FET microwave detector design

    NASA Astrophysics Data System (ADS)

    Luglio, Juan; Ishii, Thomas Koryu

    1990-12-01

    The work is based on the assumption that very little microwave power would be consumed at the negatively biased gate of a microwave FET, yet significant detected signals would be obtained at the drain when a drain bias is applied. By analyzing a Taylor-series expansion of the drain-current equation in the vicinity of a fixed gate-bias voltage, the gate bias that maximizes the second derivative of the drain-current versus gate-voltage characteristic curve is identified; this bias yields the maximum detected drain current under a given fixed drain-bias voltage. Based on these findings, a high-efficiency microwave detector is designed, fabricated, and tested at 8.6 GHz, and it is shown that the audio power over absorbed microwave power ratio of the detector is 135 percent due to the positive gain.
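
    The detection argument summarized above can be made explicit with a short expansion; the notation below is generic (I_D for drain current, V_G0 for the gate bias, v cos ωt for the applied microwave voltage) and is an illustrative sketch rather than the paper's own derivation.

    ```latex
    % Square-law detection sketch: expand the drain current about the gate bias.
    I_D(V_{G0} + v\cos\omega t) \;\approx\; I_D(V_{G0})
      + \left.\frac{dI_D}{dV_G}\right|_{V_{G0}} v\cos\omega t
      + \frac{1}{2}\left.\frac{d^2 I_D}{dV_G^2}\right|_{V_{G0}} v^2\cos^2\omega t
    ```

    Time-averaging the cos² term leaves a rectified drain-current shift proportional to (v²/4)·d²I_D/dV_G², which is why biasing the gate where the second derivative of the I_D–V_G characteristic is largest maximizes the detected output.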

  17. Discharge current distribution in stratified soil under impulse discharge

    NASA Astrophysics Data System (ADS)

    Eniola Fajingbesi, Fawwaz; Shahida Midi, Nur; Elsheikh, Elsheikh M. A.; Hajar Yusoff, Siti

    2017-06-01

    The mobility of charged particles traversing a material defines its electrical properties. Soil (earth) has long been the universal ground, before and after the inception of active grounding systems for electrical appliances, owing to its semi-conductive properties. The soil can thus be modelled as a single material exhibiting semi-complex inductive-reactive impedance. Under impulse discharges such as lightning strikes, this property could result in electric potential fluctuations ranging from ground potential rise/fall to electromagnetic pulse coupling that could ultimately cause connected electrical appliances to fail. In this work we experimentally modelled the soil and lightning discharge using a point-to-plane electrode setup to observe the current distribution characteristics over a range of soil conductivities [mS/m]. The results presented from this research indicate a shift in conductivity of more than 5% before and after discharge, which is significant for consideration when dealing with grounding designs. The current distribution in the soil was also successfully observed and analysed from the experimental results, using the mean current magnitude in relation to electrode distance and location and the variation of current density with depth, all showing strong correlation with the theoretical assumptions of a semi-complex impedance material.

  18. Can organizations benefit from worksite health promotion?

    PubMed Central

    Leviton, L C

    1989-01-01

    A decision-analytic model was developed to project the future effects of selected worksite health promotion activities on employees' likelihood of chronic disease and injury and on employer costs due to illness. The model employed a conservative set of assumptions and a limited five-year time frame. Under these assumptions, hypertension control and seat belt campaigns prevent a substantial amount of illness, injury, and death. Sensitivity analysis indicates that these two programs pay for themselves and under some conditions show a modest savings to the employer. Under some conditions, smoking cessation programs pay for themselves, preventing a modest amount of illness and death. Cholesterol reduction by behavioral means does not pay for itself under these assumptions. These findings imply priorities in prevention for employer and employee alike. PMID:2499556

  19. Inference of the ring current ion composition by means of charge exchange decay

    NASA Technical Reports Server (NTRS)

    Smith, P. H.; Bewtra, N. K.; Hoffman, R. A.

    1978-01-01

    From the analysis of the measured ion fluxes during the several-day storm recovery period, and under the assumptions that ions besides hydrogen were present and that the decays were exponential in nature, it was possible to establish three separate lifetimes for the ions. These fitted decay lifetimes are in excellent agreement with the expected charge exchange decay lifetimes for H(+), O(+), and He(+) in the energy and L-value range of the data. This inference technique thus establishes the presence of measurable and appreciable quantities of oxygen and helium ions as well as protons in the storm-time ring current. Indications that He(+) may also be present under these same conditions were found.

  20. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  1. Searching for New Physics with b → s τ+τ- Processes

    NASA Astrophysics Data System (ADS)

    Capdevila, Bernat; Crivellin, Andreas; Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim

    2018-05-01

    In recent years, intriguing hints for the violation of lepton flavor universality (LFU) have been accumulated in semileptonic B decays, both in the charged-current transitions b → c ℓ-ν¯ℓ (i.e., RD, RD*, and RJ/ψ) and the neutral-current transitions b → s ℓ+ℓ- (i.e., RK and RK*). Hints for LFU violation in RD(*) and RJ/ψ point at large deviations from the standard model (SM) in processes involving tau leptons. Moreover, LHCb has reported deviations from the SM expectations in b → s μ+μ- processes as well as in the ratios RK and RK*, which together point at new physics (NP) affecting muons with a high significance. These hints for NP suggest the possibility of huge LFU-violating effects in b → s τ+τ- transitions. In this Letter, we predict the branching ratios of B → K τ+τ-, B → K* τ+τ-, and Bs → ϕ τ+τ-, taking into account NP effects in the Wilson coefficients C9(')ττ and C10(')ττ. Assuming a common NP explanation of RD, RD(*), and RJ/ψ, we show that a very large enhancement of b → s τ+τ- processes, of around 3 orders of magnitude compared to the SM, can be expected under fairly general assumptions. We find that the branching ratios of Bs → τ+τ-, Bs → ϕ τ+τ-, and B → K(*) τ+τ- under these assumptions are in the observable range for LHCb and Belle II.

  2. Performance management in healthcare: a critical analysis.

    PubMed

    Hewko, Sarah J; Cummings, Greta G

    2016-01-01

    Purpose - The purpose of this paper is to explore the underlying theoretical assumptions and implications of current micro-level performance management and evaluation (PME) practices, specifically within health-care organizations. PME encompasses all activities that are designed and conducted to align employee outputs with organizational goals. Design/methodology/approach - PME, in the context of healthcare, is analyzed through the lens of critical theory. Specifically, Habermas' theory of communicative action is used to highlight some of the questions that arise in looking critically at PME. To provide a richer definition of key theoretical concepts, the authors conducted a preliminary, exploratory hermeneutic semantic analysis of the key words "performance" and "management" and of the term "performance management". Findings - Analysis reveals that existing micro-level PME systems in health-care organizations have the potential to create a workforce that is compliant, dependent, technically oriented and passive, and to support health-care systems in which inequalities and power imbalances are perpetually reinforced. Practical implications - At a time when the health-care system is under increasing pressure to provide high-quality, affordable services with fewer resources, it may be wise to investigate new sector-specific ways of evaluating and managing performance. Originality/value - In this paper, written for health-care leaders and health human resource specialists, the theoretical assumptions and implications of current PME practices within health-care organizations are explored. It is hoped that readers will be inspired to support innovative PME practices within their organizations that encourage peak performance among health-care professionals.

  3. Differential equation methods for simulation of GFP kinetics in non-steady state experiments.

    PubMed

    Phair, Robert D

    2018-03-15

    Genetically encoded fluorescent proteins, combined with fluorescence microscopy, are widely used in cell biology to collect kinetic data on intracellular trafficking. Methods for extraction of quantitative information from these data are based on the mathematics of diffusion and tracer kinetics. Current methods, although useful and powerful, depend on the assumption that the cellular system being studied is in a steady state, that is, the assumption that all the molecular concentrations and fluxes are constant for the duration of the experiment. Here, we derive new tracer kinetic analytical methods for non-steady state biological systems by constructing mechanistic nonlinear differential equation models of the underlying cell biological processes and linking them to a separate set of differential equations governing the kinetics of the fluorescent tracer. Linking the two sets of equations is based on a new application of the fundamental tracer principle of indistinguishability and, unlike current methods, supports correct dependence of tracer kinetics on cellular dynamics. This approach thus provides a general mathematical framework for applications of GFP fluorescence microscopy (including photobleaching [FRAP, FLIP] and photoactivation) to frequently encountered experimental protocols involving physiological or pharmacological perturbations (e.g., growth factors, neurotransmitters, acute knockouts, inhibitors, hormones, cytokines, and metabolites) that initiate mechanistically informative intracellular transients. When a new steady state is achieved, these methods automatically reduce to classical steady state tracer kinetic analysis. © 2018 Phair. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
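
    As a concrete (and heavily simplified) illustration of linking system kinetics to tracer kinetics through shared rate constants, the sketch below integrates a one-pool model with a step change in synthesis alongside a photoactivated tracer that decays with the same rate constant; the pool structure, rate constant, and perturbation are assumptions made for illustration, not the models derived in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hedged sketch: a single pool with a step change in synthesis and first-order
    # degradation, plus a fluorescent tracer governed by the same rate constant
    # (indistinguishability). Not the paper's actual model; parameters hypothetical.

    k_deg = 0.1            # 1/min, degradation rate constant (hypothetical)

    def synthesis(t):
        # A step perturbation at t = 50 min drives the system out of steady state.
        return 1.0 if t < 50 else 2.0

    def rhs(t, y):
        total, tracer = y
        d_total = synthesis(t) - k_deg * total   # non-steady-state pool
        d_tracer = -k_deg * tracer               # photoactivated tracer decays
        return [d_total, d_tracer]

    sol = solve_ivp(rhs, t_span=(0, 200), y0=[10.0, 5.0], max_step=0.5)
    print(sol.y[:, -1])    # pool and tracer amounts at t = 200 min
    ```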

  4. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints

    PubMed Central

    2013-01-01

    Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729

  5. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints.

    PubMed

    Ren, Shaogang; Zeng, Bo; Qian, Xiaoning

    2013-01-01

    Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.
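
    The MOMA inner problem described above is a quadratic program: minimize the Euclidean distance between the knockout flux vector and the wild-type fluxes subject to steady-state stoichiometry and the knockout constraint. The toy sketch below, with a made-up 2×4 stoichiometric matrix and hypothetical wild-type fluxes, shows the shape of that inner problem; it is not the adaptive piecewise linearization algorithm the authors develop, and it is not run on iAF1260.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy MOMA inner problem: find the knockout flux vector v closest (in
    # Euclidean distance) to the wild-type fluxes v_wt, subject to steady state
    # S v = 0 and the knocked-out reaction fixed at zero flux.
    S = np.array([[1.0, -1.0,  0.0,  0.0],
                  [0.0,  1.0, -1.0, -1.0]])     # made-up stoichiometry
    v_wt = np.array([10.0, 10.0, 6.0, 4.0])     # hypothetical wild-type fluxes
    knocked_out = 3                              # index of the deleted reaction

    objective = lambda v: np.sum((v - v_wt) ** 2)
    constraints = [
        {"type": "eq", "fun": lambda v: S @ v},            # steady state
        {"type": "eq", "fun": lambda v: v[knocked_out]},   # knockout flux = 0
    ]
    res = minimize(objective, x0=v_wt, constraints=constraints)
    print(np.round(res.x, 3))
    ```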

  6. Automatic Spike Sorting Using Tuning Information

    PubMed Central

    Ventura, Valérie

    2011-01-01

    Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802

  7. Automatic spike sorting using tuning information.

    PubMed

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.
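
    The core idea, that tuning information sharpens spike classification, can be sketched as follows: when firing rates depend on a covariate, the prior probability that a spike belongs to neuron k can be taken proportional to neuron k's firing rate at the current covariate value and combined with the waveform likelihood. The tuning curves, waveform model, and all parameter values below are hypothetical illustrations, not the paper's EM implementation.

    ```python
    import numpy as np
    from scipy.stats import norm

    waveform_mu = np.array([0.0, 1.0])   # mean waveform feature per neuron (toy)
    waveform_sd = np.array([0.6, 0.6])

    def tuning_rates(direction):
        # Hypothetical cosine tuning curves (spikes/s) for the two neurons.
        return np.array([10 + 8 * np.cos(direction),
                         10 + 8 * np.cos(direction - np.pi)])

    def posterior(feature, direction):
        prior = tuning_rates(direction)
        prior = prior / prior.sum()                       # tuning-based prior
        like = norm.pdf(feature, loc=waveform_mu, scale=waveform_sd)
        post = prior * like                               # combine both sources
        return post / post.sum()

    # A spike with an ambiguous waveform is disambiguated by the tuning prior.
    print(posterior(feature=0.5, direction=0.0))
    print(posterior(feature=0.5, direction=np.pi))
    ```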

  8. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    PubMed Central

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  9. Feeding nine billion: the challenge to sustainable crop production.

    PubMed

    Gregory, Peter J; George, Timothy S

    2011-11-01

    In the recent past there was a widespread working assumption in many countries that problems of food production had been solved, and that food security was largely a matter of distribution and access to be achieved principally by open markets. The events of 2008 challenged these assumptions, and made public a much wider debate about the costs of current food production practices to the environment and whether these could be sustained. As in the past 50 years, it is anticipated that future increases in crop production will be achieved largely by increasing yields per unit area rather than by increasing the area of cropped land. However, as yields have increased, so the ratio of photosynthetic energy captured to energy expended in crop production has decreased. This poses a considerable challenge: how to increase yield while simultaneously reducing energy consumption (allied to greenhouse gas emissions) and utilizing resources such as water and phosphate more efficiently. Given the timeframe in which the increased production has to be realized, most of the increase will need to come from crop genotypes that are being bred now, together with known agronomic and management practices that are currently under-developed.

  10. Upper Limb Coordination in Individuals With Stroke: Poorly Defined and Poorly Quantified.

    PubMed

    Tomita, Yosuke; Rodrigues, Marcos R M; Levin, Mindy F

    2017-01-01

    The identification of deficits in interjoint coordination is important in order to better focus upper limb rehabilitative treatment after stroke. The majority of standardized clinical measures characterize endpoint performance, such as accuracy, speed, and smoothness, based on the assumption that endpoint performance reflects interjoint coordination, without measuring the underlying temporal and spatial sequences of joint recruitment directly. However, this assumption is questioned since improvements of endpoint performance can be achieved through different degrees of restitution or compensation of upper limb motor impairments based on the available kinematic redundancy of the system. Confusion about adequate measurement may stem from a lack a definition of interjoint coordination during reaching. We suggest an operational definition of interjoint coordination during reaching as a goal-oriented process in which joint degrees of freedom are organized in both spatial and temporal domains such that the endpoint reaches a desired location in a context-dependent manner. In this point-of-view article, we consider how current approaches to laboratory and clinical measures of coordination comply with our definition. We propose future study directions and specific research strategies to develop clinical measures of interjoint coordination with better construct and content validity than those currently in use.

  11. New method for determining central axial orientation of flux rope embedded within current sheet using multipoint measurements

    NASA Astrophysics Data System (ADS)

    Li, ZhaoYu; Chen, Tao; Yan, GuangQing

    2016-10-01

    A new method for determining the central axial orientation of a two-dimensional coherent magnetic flux rope (MFR) via multipoint analysis of the magnetic-field structure is developed. The method is devised under the following geometrical assumptions: (1) on its cross section, the structure is left-right symmetric; (2) the projected structure velocity is vertical to the line of symmetry. The two conditions can be naturally satisfied for cylindrical MFRs and are expected to be satisfied for MFRs that are flattened within current sheets. The model test demonstrates that, for determining the axial orientation of such structures, the new method is more efficient and reliable than traditional techniques such as minimum-variance analysis of the magnetic field, Grad-Shafranov (GS) reconstruction, and the more recent method based on the cylindrically symmetric assumption. A total of five flux transfer events observed by Cluster are studied using the proposed approach, and the application results indicate that the observed structures, regardless of their actual physical properties, fit the assumed geometrical model well. For these events, the inferred axial orientations are all in excellent agreement with those obtained using the multi-GS reconstruction technique.

  12. Simulation of the hybrid and steady state advanced operating modes in ITER

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.

    2007-09-01

    Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.

  13. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  14. Modeling soil CO2 production and transport with dynamic source and diffusion terms: testing the steady-state assumption using DETECT v1.0

    NASA Astrophysics Data System (ADS)

    Ryan, Edmund M.; Ogle, Kiona; Kropp, Heather; Samuels-Crow, Kimberly E.; Carrillo, Yolima; Pendall, Elise

    2018-05-01

    The flux of CO2 from the soil to the atmosphere (soil respiration, Rsoil) is a major component of the global carbon (C) cycle. Methods to measure and model Rsoil, or partition it into different components, often rely on the assumption that soil CO2 concentrations and fluxes are in steady state, implying that Rsoil is equal to the rate at which CO2 is produced by soil microbial and root respiration. Recent research, however, questions the validity of this assumption. Thus, the aim of this work was two-fold: (1) to describe a non-steady state (NSS) soil CO2 transport and production model, DETECT, and (2) to use this model to evaluate the environmental conditions under which Rsoil and CO2 production are likely in NSS. The backbone of DETECT is a non-homogeneous, partial differential equation (PDE) that describes production and transport of soil CO2, which we solve numerically at fine spatial and temporal resolution (e.g., 0.01 m increments down to 1 m, every 6 h). Production of soil CO2 is simulated for every depth and time increment as the sum of root respiration and microbial decomposition of soil organic matter. Both of these factors can be driven by current and antecedent soil water content and temperature, which can also vary by time and depth. We also analytically solved the ordinary differential equation (ODE) corresponding to the steady-state (SS) solution to the PDE model. We applied the DETECT NSS and SS models to the six-month growing season period representative of a native grassland in Wyoming. Simulation experiments were conducted with both model versions to evaluate factors that could affect departure from SS, such as (1) varying soil texture; (2) shifting the timing or frequency of precipitation; and (3) with and without the environmental antecedent drivers. For a coarse-textured soil, Rsoil from the SS model closely matched that of the NSS model. However, in a fine-textured (clay) soil, growing season Rsoil was ˜ 3 % higher under the assumption of NSS (versus SS). These differences were exaggerated in clay soil at daily time scales whereby Rsoil under the SS assumption deviated from NSS by up to 35 % on average in the 10 days following a major precipitation event. Incorporation of antecedent drivers increased the magnitude of Rsoil by 15 to 37 % for coarse- and fine-textured soils, respectively. However, the responses of Rsoil to the timing of precipitation and antecedent drivers did not differ between SS and NSS assumptions. In summary, the assumption of SS conditions can be violated depending on soil type and soil moisture status, as affected by precipitation inputs. The DETECT model provides a framework for accommodating NSS conditions to better predict Rsoil and associated soil carbon cycling processes.
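
    To make the non-steady-state idea concrete, the sketch below advances a 1-D diffusion-production equation for soil CO2 one explicit time step at a time and estimates the surface flux from the near-surface gradient; the grid, diffusivity, production profile, and boundary conditions are illustrative assumptions, not the DETECT v1.0 configuration.

    ```python
    import numpy as np

    # Hedged sketch of an explicit finite-difference step of a 1-D soil-CO2
    # diffusion-production equation, dC/dt = D d2C/dz2 + S(z), on a uniform grid.
    nz, dz, dt = 100, 0.01, 10.0          # 1 m profile, 0.01 m cells, 10 s step
    D = 2.0e-6                            # effective diffusivity (m^2 s^-1)
    S = np.linspace(5e-6, 1e-7, nz)       # CO2 production (mol m^-3 s^-1)
    C = np.full(nz, 0.015)                # CO2 concentration (mol m^-3)

    def step(C):
        new = C.copy()
        lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dz**2
        new[1:-1] = C[1:-1] + dt * (D * lap + S[1:-1])
        new[0] = 0.015                    # fixed atmospheric value at the surface
        new[-1] = new[-2]                 # no-flux condition at the bottom
        return new

    for _ in range(360):                  # integrate one hour forward in time
        C = step(C)

    # Surface efflux (a stand-in for Rsoil) from the near-surface gradient:
    print(D * (C[1] - C[0]) / dz)
    ```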

  15. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  16. Hunter-gatherers have less famine than agriculturalists.

    PubMed

    Berbesque, J Colette; Marlowe, Frank W; Shaw, Peter; Thompson, Peter

    2014-01-01

    The idea that hunter-gatherer societies experience more frequent famine than societies with other modes of subsistence is pervasive in the literature on human evolution. This idea underpins, for example, the 'thrifty genotype hypothesis'. This hypothesis proposes that our hunter-gatherer ancestors were adapted to frequent famines, and that these once adaptive 'thrifty genotypes' are now responsible for the current obesity epidemic. The suggestion that hunter-gatherers are more prone to famine also underlies the widespread assumption that these societies live in marginal habitats. Despite the ubiquity of references to 'feast and famine' in the literature describing our hunter-gatherer ancestors, it has rarely been tested whether hunter-gatherers suffer from more famine than other societies. Here, we analyse famine frequency and severity in a large cross-cultural database, in order to explore relationships between subsistence and famine risk. This is the first study to report that, if we control for habitat quality, hunter-gatherers actually had significantly less--not more--famine than other subsistence modes. This finding challenges some of the assumptions underlying models of the evolution of the human diet, as well as our understanding of the recent epidemic of obesity and type 2 diabetes mellitus.

  17. Do all inhibitions act alike? A study of go/no-go and stop-signal paradigms

    PubMed Central

    Takács, Ádám

    2017-01-01

    Response inhibition is frequently measured by the Go/no-go and Stop-signal tasks. These two are often used indiscriminately under the assumption that both measure similar inhibitory control abilities. However, accumulating evidence shows differences in both tasks' modulations, raising the question of whether they tap into equivalent cognitive mechanisms. In the current study, a comparison of the performance in both tasks took place under the influence of negative stimuli, following the assumption that "controlled inhibition", as measured by Stop-signal, but not "automatic inhibition", as measured by Go/no-go, will be affected. 54 young adults performed a task in which negative pictures, neutral pictures or no-pictures preceded go trials, no-go trials, and stop-trials. While the exposure to negative pictures impaired performance on go trials and improved the inhibitory capacity in the Stop-signal task, the inhibitory performance in the Go/no-go task was generally unaffected. The results support the conceptualization of different mechanisms operated by both tasks, thus emphasizing the necessity to thoroughly fathom both inhibitory processes and identify their corresponding cognitive measures. Implications regarding the usage of cognitive tasks for strengthening inhibitory capacity among individuals struggling with inhibitory impairments are discussed. PMID:29065184

  18. A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Colp, S. Mitchell

    2018-01-01

    Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…

  19. 10 CFR 436.17 - Establishing energy or water cost data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... escalation rate assumptions under § 436.14. When energy costs begin to accrue at a later time, subtract the... assumptions under § 436.14. When water costs begin to accrue at a later time, subtract the present value of... Methodology and Procedures for Life Cycle Cost Analyses § 436.17 Establishing energy or water cost data. (a...

  20. Political Assumptions Underlying Pedagogies of National Education: The Case of Student Teachers Teaching 'British Values' in England

    ERIC Educational Resources Information Center

    Sant, Edda; Hanley, Chris

    2018-01-01

    Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…

  1. The effect of terrain slope on firefighter safety zone effectiveness

    Treesearch

    Bret Butler; J. Forthofer; K. Shannon; D. Jimenez; D. Frankman

    2010-01-01

    The current safety zone guidelines used in the US were developed based on the assumption that the fire and safety zone were located on flat terrain. The minimum safe distance for a firefighter to be from a flame was calculated as that corresponding to a radiant incident energy flux level of 7.0 kW m⁻². Current firefighter safety guidelines are based on the assumption...

  2. High Altitude Venus Operations Concept Trajectory Design, Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Ozoroski, Thomas A.; Van Norman, John W.; Arney, Dale C.; Dec, John A.; Jones, Christopher A.; Zumwalt, Carlie H.

    2015-01-01

    A trajectory design and analysis that describes aerocapture, entry, descent, and inflation of manned and unmanned High Altitude Venus Operation Concept (HAVOC) lighter-than-air missions is presented. Mission motivation, concept of operations, and notional entry vehicle designs are presented. The initial trajectory design space is analyzed and discussed before investigating specific trajectories that are deemed representative of a feasible Venus mission. Under the project assumptions, while the high-mass crewed mission will require further research into aerodynamic decelerator technology, it was determined that the unmanned robotic mission is feasible using current technology.

  3. Economic consequences for Medicaid of human immunodeficiency virus infection

    PubMed Central

    Baily, Mary Ann; Bilheimer, Linda; Wooldridge, Judith; Langwell, Kathryn; Greenberg, Warren

    1990-01-01

    Medicaid is currently a major source of financing for health care for those with acquired immunodeficiency syndrome (AIDS) and, to a lesser extent, for those with other manifestations of human immunodeficiency virus (HIV) infection. It is likely to become even more important in the future. This article focuses on the structure of Medicaid in the context of the HIV epidemic, covering epidemiological issues, eligibility, service coverage and use, and reimbursement. A simple methodology for estimating HIV-related Medicaid costs under alternative assumptions about the future is also explained. PMID:10113503

  4. Intelligence/Electronic Warfare (IEW) Direction-Finding and Fix Estimation Analysis Report. Volume 2. Trailblazer

    DTIC Science & Technology

    1985-12-20

    [OCR residue from the DTIC report documentation page; the abstract itself is not recoverable. Legible keywords: Fix Estimation, Statistical Assumptions, Error Budget, Unmodeled Errors, Coding. The snippet ends with "...the report examines the underlying", truncated.]

  5. TOPICAL REVIEW: The stability for the Cauchy problem for elliptic equations

    NASA Astrophysics Data System (ADS)

    Alessandrini, Giovanni; Rondi, Luca; Rosset, Edi; Vessella, Sergio

    2009-12-01

    We discuss the ill-posed Cauchy problem for elliptic equations, which is pervasive in inverse boundary value problems modeled by elliptic equations. We provide essentially optimal stability results, in wide generality and under substantially minimal assumptions. As a general scheme in our arguments, we show that all such stability results can be derived by the use of a single building brick, the three-spheres inequality. Due to the current absence of research funding from the Italian Ministry of University and Research, this work has been completed without any financial support.
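    For readers unfamiliar with the "building brick" mentioned above, a generic statement of the three-spheres inequality for a solution u of a second-order elliptic equation is sketched below; the notation and constants are generic placeholders, not those of the review.

```latex
% Three-spheres inequality (generic form): for concentric balls of radii
% r_1 < r_2 < r_3 and a solution u of the elliptic equation in B_{r_3},
\[
  \|u\|_{L^{2}(B_{r_{2}})} \;\le\; C\,
  \|u\|_{L^{2}(B_{r_{1}})}^{\,\alpha}\,
  \|u\|_{L^{2}(B_{r_{3}})}^{\,1-\alpha},
  \qquad 0<\alpha<1,
\]
% where C and alpha depend only on r_1, r_2, r_3 and the ellipticity constants.
```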

  6. The crux of the method: assumptions in ordinary least squares and logistic regression.

    PubMed

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
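    A minimal sketch of the contrast drawn above, assuming simulated data and the statsmodels package: fitting a linear probability model by ordinary least squares alongside a logistic regression on the same binary outcome makes the violated OLS assumptions (unbounded fitted probabilities, non-normal heteroscedastic errors) easy to inspect. All variable names and parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))      # true logistic relationship
y = rng.binomial(1, p)                          # binary dependent variable

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()        # linear probability model: can predict outside [0, 1]
logit = sm.Logit(y, X).fit()    # models the log-odds; respects the 0-1 range

print("OLS fitted range:  ", ols.fittedvalues.min(), ols.fittedvalues.max())
print("Logit fitted range:", logit.predict(X).min(), logit.predict(X).max())
```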

  7. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  8. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2011-01-01

    The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…

  9. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  10. Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.

    DTIC Science & Technology

    1984-03-01

    Research Report COs 481 EMPIRICAL TESTS OF THE ASSUMPTIO:IS UNDERLYING MODELS FOR FOREIGN EXCHANGE RATES by P. Brockett B. Golany 00 00 CENTER FOR...Research Report CCS 481 EMPIRICAL TESTS OF THE ASSUMPTIONS UNDERLYING MODELS FOR FOREIGN EXCHANGE RATES by P. Brockett B. Golany March 1984...applying these tests to the U.S. dollar to Japanese Yen foreign exchange rates . Conclusions and discussion is given in section VI. 1The previous authors

  11. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research.

    PubMed

    Monroe, Todd; Carter, Michael

    2012-09-01

    Cognitive scales are used frequently in geriatric research and practice. These instruments are constructed with underlying assumptions that are a part of their validation process. A common measurement scale used in older adults is the Folstein Mini Mental State Exam (MMSE). The MMSE was designed to screen for cognitive impairment and is used often in geriatric research. This paper has three aims. The first aim is to explore four potential threats to validity in the use of the MMSE: (1) administering the exam without meeting the underlying assumptions, (2) not reporting that the underlying assumptions were assessed prior to test administration, (3) use of variable and inconsistent cut-off scores for the determination of the presence of cognitive impairment, and (4) failure to adjust the scores based on the demographic characteristics of the tested subject. The second aim is to conduct a literature search to determine whether the assumptions of (1) education level assessment, (2) sensory assessment, and (3) language fluency were being met and clearly reported in published research using the MMSE. The third aim is to provide recommendations to minimize threats to validity in research studies that use cognitive scales, such as the MMSE. We found inconsistencies in published work in reporting whether or not subjects meet the assumptions that underlie a reliable and valid MMSE score. These inconsistencies can pose threats to the reliability of exam results. Fourteen of the 50 studies reviewed reported inclusion of all three of these assumptions. Inconsistencies in reporting the inclusion of the underlying assumptions for a reliable score could mean that subjects were not appropriate to be tested with the MMSE or that an appropriate test administration of the MMSE was not clearly reported. Thus, the research literature could have threats to both validity and reliability based on misuse or improperly reported use of the MMSE. Six recommendations are provided to minimize these threats in future research.

  12. Flood return level analysis of Peaks over Threshold series under changing climate

    NASA Astrophysics Data System (ADS)

    Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.

    2016-12-01

    Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis with the stationarity assumption has been challenged by changing environments. A method that takes into consideration the nonstationarity context has been extended to derive flood return levels for Peaks over Threshold (POT) series. With application to POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson distribution assumption. Flood return levels were extrapolated in nonstationarity context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can be different with an assumption of Poisson and NB distribution, respectively. The difference is found to be related to the threshold value of POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
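    As background for the Poisson-versus-Negative-Binomial question raised above, the sketch below shows the classical stationary POT return level under the Poisson-arrival assumption that the paper questions: exceedances over a threshold are fitted with a Generalized Pareto distribution and combined with the mean annual exceedance rate. The threshold, rate, and synthetic data are illustrative assumptions, not values from the Weihe basin study.

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(peak_values, threshold, events_per_year, return_period):
    """T-year return level from a Peaks-over-Threshold sample under the classical
    stationary GPD model with Poisson arrivals (lambda = events_per_year)."""
    excesses = np.asarray(peak_values) - threshold
    shape, _, scale = genpareto.fit(excesses, floc=0.0)   # fit GPD to the excesses
    lam_T = events_per_year * return_period
    if abs(shape) < 1e-6:                                 # exponential-tail limit
        return threshold + scale * np.log(lam_T)
    return threshold + (scale / shape) * (lam_T ** shape - 1.0)

# Illustrative use with synthetic peaks over a 500 m^3/s threshold:
rng = np.random.default_rng(1)
peaks = 500.0 + genpareto.rvs(0.1, scale=120.0, size=90, random_state=rng)
print(pot_return_level(peaks, 500.0, events_per_year=3.0, return_period=100))
```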

  13. The Farley-Buneman Instability in the Solar Chromosphere

    NASA Astrophysics Data System (ADS)

    Madsen, Chad A.; Dimant, Yakov S.; Oppenheim, Meers M.; Fontenla, Juan M.

    2012-10-01

    Strong currents drive the Farley-Buneman Instability (FBI) in the E-region ionosphere creating turbulence and heating. The solar chromosphere is a similar weakly ionized region with strong local Pedersen currents, and the FBI may play a role in sustaining the thin layer of enhanced temperature observed there. The plasma of the solar chromosphere requires a new theory of the FBI accounting for the presence of multiple ion species, higher temperatures and collisions between ionized metals and neutral hydrogen. This paper discusses the assumptions underlying the derivation of the multi-species FBI dispersion relation. It presents the predicted critical electron drift velocity needed to trigger the instability. Finally, this work argues that observed chromospheric neutral flow speeds are sufficiently large to trigger the multi-species FBI.
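    As context for the multi-species generalization described above, the classic single-ion-species Farley-Buneman threshold condition used in E-region work (a simplification relative to the chromospheric case treated in the paper) can be written as follows; the notation is the standard textbook form, not the paper's multi-species dispersion relation.

```latex
% Classic E-region form: instability requires the relative electron-ion drift
% to exceed the ion-acoustic speed, enhanced by the collisional factor psi.
\[
  V_d \;>\; C_s\,(1+\psi),
  \qquad
  \psi \;=\; \frac{\nu_e \nu_i}{\Omega_e \Omega_i},
  \qquad
  C_s \;=\; \sqrt{\frac{k_B (T_e + T_i)}{m_i}},
\]
% where nu_{e,i} are the electron/ion-neutral collision frequencies and
% Omega_{e,i} the corresponding gyrofrequencies.
```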

  14. Analytic drain current model for III-V cylindrical nanowire transistors

    NASA Astrophysics Data System (ADS)

    Marin, E. G.; Ruiz, F. G.; Schmidt, V.; Godoy, A.; Riel, H.; Gámiz, F.

    2015-07-01

    An analytical model is proposed to determine the drain current of III-V cylindrical nanowires (NWs). The model uses the gradual channel approximation and takes into account the complete analytical solution of the Poisson and Schrödinger equations for the Γ-valley and for an arbitrary number of subbands. Fermi-Dirac statistics are used to describe the 1D electron gas in the NWs, and the resulting recursive Fermi-Dirac integral of order -1/2 is successfully integrated under reasonable assumptions. The model has been validated against numerical simulations, showing excellent agreement for different semiconductor materials, diameters up to 40 nm, gate overdrive biases up to 0.7 V, and densities of interface states up to 10^13 eV^-1 cm^-2.
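    A minimal numerical sketch (not the authors' closed-form treatment) of the complete Fermi-Dirac integral of order -1/2 that appears in descriptions of a 1D electron gas, using standard quadrature; the normalization convention and cutoff are assumptions of this illustration.

```python
import math
from scipy.integrate import quad

def fermi_dirac_m12(eta):
    """Complete Fermi-Dirac integral of order -1/2, common normalization:
    F_{-1/2}(eta) = (1/Gamma(1/2)) * int_0^inf e^(-1/2) / (1 + exp(e - eta)) de."""
    integrand = lambda e: e ** (-0.5) / (1.0 + math.exp(e - eta))
    upper = max(eta, 0.0) + 40.0          # the integrand tail beyond this is negligible
    value, _ = quad(integrand, 0.0, upper)
    return value / math.sqrt(math.pi)     # Gamma(1/2) = sqrt(pi)

# Non-degenerate limit check: F_{-1/2}(eta) ~ exp(eta) for strongly negative eta.
for eta in (-5.0, 0.0, 5.0):
    print(eta, fermi_dirac_m12(eta))
```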

  15. Assumptions of Statistical Tests: What Lies Beneath.

    PubMed

    Jupiter, Daniel C

    We have discussed many statistical tests and tools in this series of commentaries, and while we have mentioned the underlying assumptions of the tests, we have not explored them in detail. We stop to look at some of the assumptions of the t-test and linear regression, justify and explain them, mention what can go wrong when the assumptions are not met, and suggest some solutions in this case. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
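    As a companion to that discussion, here is a minimal sketch (illustrative data, standard SciPy tests) of checking two of the assumptions mentioned, normality and equal variances, before running a two-sample t-test, and of the Welch variant that drops the equal-variance assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=40)
group_b = rng.normal(loc=11.0, scale=2.0, size=40)

# Normality of each group (Shapiro-Wilk): small p-values signal non-normality.
print("Shapiro A:", stats.shapiro(group_a).pvalue)
print("Shapiro B:", stats.shapiro(group_b).pvalue)

# Homogeneity of variance (Levene): small p-values signal unequal variances.
print("Levene:   ", stats.levene(group_a, group_b).pvalue)

# If the variance assumption is doubtful, Welch's t-test does not require it.
print("Welch t:  ", stats.ttest_ind(group_a, group_b, equal_var=False))
```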

  16. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.

  17. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, which allows a discussion of the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  18. Bartnik’s splitting conjecture and Lorentzian Busemann function

    NASA Astrophysics Data System (ADS)

    Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof

    2018-05-01

    In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using some additional assumptions such as ‘S-ray condition’ and ‘level set condition’. It is known that the ‘S-ray condition’ yields the ‘level set condition’. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the ‘level set condition’. In addition, we have shown several properties of the cosmological space-time, under the presence of the ‘level set condition’. Finally we have provided a proof of the conjecture under a different assumption on the cosmological space-time. But we first prove some results without the timelike convergence condition which help us to state our proofs.

  19. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases from ~10% for the diffusive assumption to ~30% when full environment forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protect pelagic fish and provide significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative to managing highly mobile pelagic stocks.
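    The simplest of the four scenarios above, purely diffusive movement, can be illustrated with a toy simulation; see the sketch below. It is not the interconnected individual-based/population/fleet model of the paper: a hypothetical 1D coastline, step size, and MPA placement are assumed, and the output is just the fraction of time individuals spend inside the protected interval.

```python
import numpy as np

def fraction_protected(mpa_fraction, n_fish=200, n_steps=2000, step_sd=0.01, seed=0):
    """Toy 1D diffusive-movement sketch (scenario (a), purely illustrative): fish
    random-walk on a periodic coastline of unit length; the MPA occupies the interval
    [0, mpa_fraction). Returns the mean fraction of time steps spent inside the MPA."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n_fish)                       # initial positions
    inside = 0
    for _ in range(n_steps):
        x = (x + rng.normal(0.0, step_sd, size=n_fish)) % 1.0    # diffusive step
        inside += np.count_nonzero(x < mpa_fraction)
    return inside / (n_fish * n_steps)

for size in (0.1, 0.3):
    print(f"MPA covering {size:.0%} of habitat -> time protected ~ {fraction_protected(size):.2f}")
```

Under pure diffusion the protected fraction of time simply tracks the MPA size, which is why more realistic movement assumptions (aggregation, environmental forcing, advection by currents) change the optimal design.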

  20. Medical cost analysis: application to colorectal cancer data from the SEER Medicare database.

    PubMed

    Bang, Heejung

    2005-10-01

    Incompleteness is a key feature of most survival data. Numerous well-established statistical methodologies and algorithms exist for analyzing life or failure time data. However, induced censorship invalidates the use of those standard analytic tools for some survival-type data such as medical costs. In this paper, some valid methods currently available for analyzing censored medical cost data are reviewed. Some cautionary findings under different assumptions are illustrated through an application to medical costs from colorectal cancer patients. Cost analysis should be suitably planned and carefully interpreted under various meaningful scenarios even with judiciously selected statistical methods. This approach would be greatly helpful to policy makers who seek to prioritize health care expenditures and to assess the elements of resource use.
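    One family of valid approaches alluded to above weights complete (uncensored) cases by the inverse probability of remaining uncensored. The sketch below is a generic, simplified illustration of that idea with simulated data and naive tie handling; it is not the specific estimator evaluated in the paper.

```python
import numpy as np

def censoring_survival(follow_up, death, eval_times):
    """Kaplan-Meier estimate of the censoring survival K(t) = P(C > t).
    death == 1 means the cost history is complete (not censored). Sketch only."""
    order = np.argsort(follow_up)
    t, d = follow_up[order], death[order]
    n, surv, steps = len(t), 1.0, []
    for i, (ti, di) in enumerate(zip(t, d)):
        if di == 0:                          # a censoring event at time ti
            surv *= 1.0 - 1.0 / (n - i)      # n - i subjects still at risk
        steps.append((ti, surv))
    return np.array([min((s for u, s in steps if u <= et), default=1.0) for et in eval_times])

def ipcw_mean_cost(cost, follow_up, death):
    """Inverse-probability-of-censoring-weighted mean cost:
    (1/n) * sum_i death_i * cost_i / K_hat(T_i)."""
    K = censoring_survival(follow_up, death, follow_up)
    return np.mean(death * cost / np.clip(K, 1e-8, None))

# Illustrative synthetic data: complete-case averaging is biased by censoring.
rng = np.random.default_rng(3)
T = rng.exponential(3.0, size=500)           # death times (years)
C = rng.exponential(4.0, size=500)           # censoring times (years)
follow_up = np.minimum(T, C)
death = (T <= C).astype(float)
cost = 10_000 * follow_up                    # costs accrue while under observation
print("naive complete-case mean:", cost[death == 1].mean())
print("IPCW-weighted mean:      ", ipcw_mean_cost(cost, follow_up, death))
```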

  1. The future SwissFEL facility - challenges from a radiation protection point of view

    NASA Astrophysics Data System (ADS)

    Strabel, Claudia; Fuchs, Albert; Galev, Roman; Hohmann, Eike; Lüscher, Roland; Musto, Elisa; Mayer, Sabine

    2017-09-01

    The Swiss Free Electron Laser is a new large-scale facility currently under construction at the Paul Scherrer Institute. Accessible areas surrounding the 720 m long accelerator tunnel, together with the pulsed time structure of the primary beam, lead to new challenges in ensuring that the radiation level in these areas remains in compliance with the legal constraints. For this purpose, an online survey system will be installed that monitors the ambient dose rate arising from neutrons inside the accelerator tunnel and is suitably calibrated to indicate the total dose rate outside the tunnel. The present study provides a conceptual overview of this system, its underlying assumptions, and the measurements performed so far to validate its concept.

  2. Searching for New Physics with b→sτ^{+}τ^{-} Processes.

    PubMed

    Capdevila, Bernat; Crivellin, Andreas; Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim

    2018-05-04

    In recent years, intriguing hints for the violation of lepton flavor universality (LFU) have been accumulated in semileptonic B decays, both in the charged-current transitions b→cℓ^{-}ν[over ¯]_{ℓ} (i.e., R_{D}, R_{D^{*}}, and R_{J/ψ}) and the neutral-current transitions b→sℓ^{+}ℓ^{-} (i.e., R_{K} and R_{K^{*}}). Hints for LFU violation in R_{D^{(*)}} and R_{J/ψ} point at large deviations from the standard model (SM) in processes involving tau leptons. Moreover, LHCb has reported deviations from the SM expectations in b→sμ^{+}μ^{-} processes as well as in the ratios R_{K} and R_{K^{*}}, which together point at new physics (NP) affecting muons with a high significance. These hints for NP suggest the possibility of huge LFU-violating effects in b→sτ^{+}τ^{-} transitions. In this Letter, we predict the branching ratios of B→Kτ^{+}τ^{-}, B→K^{*}τ^{+}τ^{-}, and B_{s}→ϕτ^{+}τ^{-}, taking into account NP effects in the Wilson coefficients C_{9(^{'})}^{ττ} and C_{10(^{'})}^{ττ}. Assuming a common NP explanation of R_{D}, R_{D^{(*)}}, and R_{J/ψ}, we show that a very large enhancement of b→sτ^{+}τ^{-} processes, of around 3 orders of magnitude compared to the SM, can be expected under fairly general assumptions. We find that the branching ratios of B_{s}→τ^{+}τ^{-}, B_{s}→ϕτ^{+}τ^{-}, and B→K^{(*)}τ^{+}τ^{-} under these assumptions are in the observable range for LHCb and Belle II.

  3. Brownian motion with adaptive drift for remaining useful life prediction: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
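    The core mechanism described above, a drift coefficient treated as a hidden state and updated by Kalman filtering as degradation increments arrive, can be sketched in a few lines. This is a generic scalar random-walk-drift filter under assumed noise values, not the authors' exact state space model or its corrected gain.

```python
import numpy as np

def track_drift(increments, dt=1.0, q=1e-4, r=0.05, b0=0.0, p0=1.0):
    """Scalar Kalman filter for an adaptive drift coefficient (illustrative sketch).
    State:        b_k = b_{k-1} + w_k,     w_k ~ N(0, q)
    Measurement:  dx_k = b_k * dt + v_k,   v_k ~ N(0, r)   (degradation increment)"""
    b, p, history = b0, p0, []
    for dx in increments:
        # predict
        b_pred, p_pred = b, p + q
        # update with the newly observed degradation increment
        k_gain = p_pred * dt / (dt * p_pred * dt + r)
        b = b_pred + k_gain * (dx - b_pred * dt)
        p = (1.0 - k_gain * dt) * p_pred
        history.append(b)
    return np.array(history)

# Synthetic degradation increments with a slowly drifting true coefficient:
rng = np.random.default_rng(7)
true_b = 0.5 + 0.001 * np.arange(200)
increments = true_b * 1.0 + rng.normal(0.0, 0.2, size=200)
print(track_drift(increments)[-5:])   # estimates track the drifting true value (~0.7)
```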

  4. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods.

    PubMed

    Ali, Shehzad; Ronaldson, Sarah

    2012-09-01

    The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods, including the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes to evaluate current practices, develop new programmes and predict demand for services and products. Ordinal preference elicitation methods including discrete choice experiments and ranking methods are therefore commonly used in health economics and health service research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, lower degree of abstract reasoning, reduced measurement error, ease of administration and ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, or cannot comprehend the ordinal task or interpret attributes and levels, or use 'irrational' choice behaviour or refuse to trade off certain attributes. CURRENT USE AND GROWING AREAS: Ordinal methods are commonly used to evaluate preference for attributes of health services, products, practices, interventions, policies and, more recently, to estimate utility weights. AREAS FOR ON-GOING RESEARCH: There is growing research on developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results and comparing ordinal methods against each other and with cardinal measures.

  5. The future of future-oriented cognition in non-humans: theory and the empirical case of the great apes.

    PubMed

    Osvath, Mathias; Martin-Ordas, Gema

    2014-11-05

    One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not 'real' future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes 'truly' future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  6. Sensitivity of secondary production and export flux to choice of trophic transfer formulation in marine ecosystem models

    NASA Astrophysics Data System (ADS)

    Anderson, Thomas R.; Hessen, Dag O.; Mitra, Aditee; Mayor, Daniel J.; Yool, Andrew

    2013-09-01

    The performance of four contemporary formulations describing trophic transfer, which have strongly contrasting assumptions as regards the way that consumer growth is calculated as a function of food C:N ratio and in the fate of non-limiting substrates, was compared in two settings: a simple steady-state ecosystem model and a 3D biogeochemical general circulation model. Considerable variation was seen in predictions for primary production, transfer to higher trophic levels and export to the ocean interior. The physiological basis of the various assumptions underpinning the chosen formulations is open to question. Assumptions include Liebig-style limitation of growth, strict homeostasis in zooplankton biomass, and whether excess C and N are released by voiding in faecal pellets or via respiration/excretion post-absorption by the gut. Deciding upon the most appropriate means of formulating trophic transfer is not straightforward because, despite advances in ecological stoichiometry, the physiological mechanisms underlying these phenomena remain incompletely understood. Nevertheless, worrying inconsistencies are evident in the way in which fundamental transfer processes are justified and parameterised in the current generation of marine ecosystem models, manifested in the resulting simulations of ocean biogeochemistry. Our work highlights the need for modellers to revisit and appraise the equations and parameter values used to describe trophic transfer in marine ecosystem models.

  7. The future of future-oriented cognition in non-humans: theory and the empirical case of the great apes

    PubMed Central

    Osvath, Mathias; Martin-Ordas, Gema

    2014-01-01

    One of the most contested areas in the field of animal cognition is non-human future-oriented cognition. We critically examine key underlying assumptions in the debate, which is mainly preoccupied with certain dichotomous positions, the most prevalent being whether or not ‘real’ future orientation is uniquely human. We argue that future orientation is a theoretical construct threatening to lead research astray. Cognitive operations occur in the present moment and can be influenced only by prior causation and the environment, at the same time that most appear directed towards future outcomes. Regarding the current debate, future orientation becomes a question of where on various continua cognition becomes ‘truly’ future-oriented. We question both the assumption that episodic cognition is the most important process in future-oriented cognition and the assumption that future-oriented cognition is uniquely human. We review the studies on future-oriented cognition in the great apes to find little doubt that our closest relatives possess such ability. We conclude by urging that future-oriented cognition not be viewed as expression of some select set of skills. Instead, research into future-oriented cognition should be approached more like research into social and physical cognition. PMID:25267827

  8. Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu

    2001-07-01

    A single-ion hit system combined with the JAERI heavy-ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique that measures the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas of the diode. The data containing the damage effect of these irradiations were analyzed by least-squares fitting with a Weibull distribution function. Changes of the scale and shape parameters as functions of the width of the irradiation areas led us to the assumption that charge collection in the diode has a micron-level lateral extent larger than the 1 μm spatial resolution of the microbeam. Numerical simulations of these measurements were made with a simplified two-dimensional Monte Carlo model based on this assumption. The calculated data reproducing the pulse-height reductions from single-ion irradiations were analyzed using the same function as for the measurements. The result of this analysis, which shows the same tendency in the change of the parameters as the measurements, seems to support our assumption.
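    The kind of scale/shape parameter extraction described above can be sketched with a least-squares Weibull fit. The functional form, synthetic data, and parameter values below are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_decay(n_hits, scale, shape):
    """Illustrative Weibull-type form for a normalized transient-current pulse
    height after n_hits single-ion hits (assumed functional form)."""
    return np.exp(-(n_hits / scale) ** shape)

# Synthetic pulse-height degradation data (illustrative only):
n_hits = np.linspace(100, 5000, 25)
rng = np.random.default_rng(5)
observed = weibull_decay(n_hits, scale=2500.0, shape=1.8) + rng.normal(0, 0.02, n_hits.size)

(scale_fit, shape_fit), _ = curve_fit(weibull_decay, n_hits, observed, p0=[2000.0, 1.0])
print(f"scale parameter ~ {scale_fit:.0f} hits, shape parameter ~ {shape_fit:.2f}")
```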

  9. Impact of actuarial assumptions on pension costs: A simulation analysis

    NASA Astrophysics Data System (ADS)

    Yusof, Shaira; Ibrahim, Rose Irnawaty

    2013-04-01

    This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Simulation analyses are used to examine the impact of actuarial assumptions on pension costs. There are two actuarial assumptions will be considered in this study which are mortality rates and interest rates. To calculate pension costs, Accrued Benefit Cost Method, constant amount (CA) modification, constant percentage of salary (CS) modification are used in the study. The mortality assumptions and the implied mortality experience of the plan can potentially have a significant impact on pension costs. While for interest rate assumptions, it is inversely related to the pension costs. Results of the study have important implications for analyst of pension costs.
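    The inverse relationship between the interest-rate assumption and pension cost noted above can be illustrated with a toy present-value calculation. This is not the Accrued Benefit Cost Method of the study; the benefit level, timing, and a crude flat survival probability standing in for a mortality table are all illustrative assumptions.

```python
def pv_annual_pension(benefit, retire_in_years, payout_years, discount_rate,
                      annual_survival=0.99):
    """Toy present value of a level pension benefit paid annually in retirement."""
    pv = 0.0
    for k in range(payout_years):
        t = retire_in_years + k
        pv += benefit * (annual_survival ** t) / ((1.0 + discount_rate) ** t)
    return pv

# A higher interest-rate assumption lowers the present value of the obligation:
for rate in (0.03, 0.05, 0.07):
    print(f"discount rate {rate:.0%}: PV ~ {pv_annual_pension(30_000, 25, 20, rate):,.0f}")
```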

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K.T.; Conrado, C.L.; Robison, W.L.

    A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.

  11. Does muscle creatine phosphokinase have access to the total pool of phosphocreatine plus creatine?

    PubMed

    Hochachka, P W; Mossey, M K

    1998-03-01

    Two fundamental assumptions underlie currently accepted dogma on creatine phosphokinase (CPK) function in phosphagen-containing cells: 1) CPK always operates near equilibrium and 2) CPK has access to, and reacts with, the entire pool of phosphocreatine (PCr) and creatine (Cr). We tested the latter assumption in fish fast-twitch or white muscle (WM) by introducing [14C]Cr into the WM pool in vivo. To avoid complications arising from working with muscles formed from a mixture of fast and slow fibers, it was advantageous to work with fish WM because it is uniformly fast twitch and is anatomically separated from other fiber types. According to current theory, at steady state after [14C]Cr administration, the specific activities of PCr and Cr should be the same under essentially all conditions. In contrast, we found that, in various metabolic states between rest and recovery from exercise, the specific activity of PCr greatly exceeds that of Cr. The data imply that a significant fraction of Cr is not free to rapidly exchange with exogenously added [14C]Cr. Releasing of this unlabeled or "cold" Cr on acid extraction accounts for lowered specific activities. This unexpected and provocative result is not consistent with traditional models of phosphagen function.

  12. Surface Crystallization of Cloud Droplets: Implications for Climate Change and Ozone Depletion

    NASA Technical Reports Server (NTRS)

    Tabazadeh, A.; Djikaev, Y. S.; Reiss, H.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    The process of supercooled liquid water crystallization into ice is still not well understood. Current experimental data on homogeneous freezing rates of ice nucleation in supercooled water droplets show considerable scatter. For example, at -33 C, the reported freezing nucleation rates vary by as much as 5 orders of magnitude, which is well outside the range of measurement uncertainties. Until now, experimental data on the freezing of supercooled water have been analyzed under the assumption that nucleation of ice took place in the interior volume of a water droplet. Here, the same data are reanalyzed assuming that the nucleation occurred "pseudoheterogeneously" at the air (or oil)-liquid water interface of the droplet. Our analysis suggests that the scatter in the nucleation data can be explained by two main factors. First, the current assumption that nucleation occurs solely inside the volume of a water droplet is incorrect. Second, because the nucleation process most likely occurs on the surface, the rates of nuclei formation could differ vastly when oil or air interfaces are involved. Our results suggest that ice freezing in clouds may initiate on droplet surfaces and that such a process can allow low amounts of liquid water (approx. 0.002 g per cubic meter) to remain supercooled down to -40 C, as observed in the atmosphere.

  13. Evolutionary origin and early biogeography of otophysan fishes (Ostariophysi: Teleostei).

    PubMed

    Chen, Wei-Jen; Lavoué, Sébastien; Mayden, Richard L

    2013-08-01

    The biogeography of the mega-diverse, freshwater, and globally distributed Otophysi has received considerable attention. This attraction largely stems from assumptions as to their ancient origin, the clade being almost exclusively freshwater, and their suitability as to explanations of trans-oceanic distributions. Despite multiple hypotheses explaining present-day distributions, problems remain, precluding more parsimonious explanations. Underlying previous hypotheses are alternative phylogenies for Otophysi, uncertainties as to temporal diversification and assumptions integral to various explanations. We reexamine the origin and early diversification of this clade based on a comprehensive time-calibrated, molecular-based phylogenetic analysis and event-based approaches for ancestral range inference of lineages. Our results do not corroborate current phylogenetic classifications of otophysans. We demonstrate Siluriformes are never sister to Gymnotiformes and Characiformes are most likely nonmonophyletic. Divergence time estimates specify a split between Cypriniformes and Characiphysi with the fragmentation of Pangea. The early diversification of characiphysans either predated, or was contemporary with, the separation of Africa and South America, and involved a combination of within- and between-continental divergence events for these lineages. The intercontinental diversification of siluroids and characoids postdated major intercontinental tectonic fragmentations (<90 Mya). Post-tectonic drift dispersal events are hypothesized to account for their current distribution patterns. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  14. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  15. Fundamental Assumptions and Aims Underlying the Principles and Policies of Federal Financial Aid to Students. Research Report.

    ERIC Educational Resources Information Center

    Johnstone, D. Bruce

    As background to the National Dialogue on Student Financial Aid, this essay discusses the fundamental assumptions and aims that underlie the principles and policies of federal financial aid to students. These eight assumptions and aims are explored: (1) higher education is the province of states, and not of the federal government; (2) the costs of…

  16. Neural correlates of fixation duration in natural reading: Evidence from fixation-related fMRI.

    PubMed

    Henderson, John M; Choi, Wonil; Luke, Steven G; Desai, Rutvik H

    2015-10-01

    A key assumption of current theories of natural reading is that fixation duration reflects underlying attentional, language, and cognitive processes associated with text comprehension. The neurocognitive correlates of this relationship are currently unknown. To investigate this relationship, we compared neural activation associated with fixation duration in passage reading and a pseudo-reading control condition. The results showed that fixation duration was associated with activation in oculomotor and language areas during text reading. Fixation duration during pseudo-reading, on the other hand, showed greater involvement of frontal control regions, suggesting flexibility and task dependency of the eye movement network. Consistent with current models, these results provide support for the hypothesis that fixation duration in reading reflects attentional engagement and language processing. The results also demonstrate that fixation-related fMRI provides a method for investigating the neurocognitive bases of natural reading. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Current demographics suggest future energy supplies will be inadequate to slow human population growth.

    PubMed

    DeLong, John P; Burger, Oskar; Hamilton, Marcus J

    2010-10-05

    Influential demographic projections suggest that the global human population will stabilize at about 9-10 billion people by mid-century. These projections rest on two fundamental assumptions. The first is that the energy needed to fuel development and the associated decline in fertility will keep pace with energy demand far into the future. The second is that the demographic transition is irreversible such that once countries start down the path to lower fertility they cannot reverse to higher fertility. Both of these assumptions are problematic and may have an effect on population projections. Here we examine these assumptions explicitly. Specifically, given the theoretical and empirical relation between energy-use and population growth rates, we ask how the availability of energy is likely to affect population growth through 2050. Using a cross-country data set, we show that human population growth rates are negatively related to per-capita energy consumption, with zero growth occurring at ∼13 kW, suggesting that the global human population will stop growing only if individuals have access to this amount of power. Further, we find that current projected future energy supply rates are far below the supply needed to fuel a global demographic transition to zero growth, suggesting that the predicted leveling-off of the global population by mid-century is unlikely to occur, in the absence of a transition to an alternative energy source. Direct consideration of the energetic constraints underlying the demographic transition results in a qualitatively different population projection than produced when the energetic constraints are ignored. We suggest that energetic constraints be incorporated into future population projections.

  18. Regularity Results for a Class of Functionals with Non-Standard Growth

    NASA Astrophysics Data System (ADS)

    Acerbi, Emilio; Mingione, Giuseppe

    We consider the integral functional ∫ f(x, Du) dx under non-standard growth assumptions that we call p(x) type: namely, we assume that f(x, z) behaves like |z|^{p(x)}, a relevant model case being the functional ∫ |Du|^{p(x)} dx. Under sharp assumptions on the continuous function p(x) > 1 we prove regularity of minimizers. Energies exhibiting this growth appear in several models from mathematical physics.

  19. Scoping review of response shift methods: current reporting practices and recommendations.

    PubMed

    Sajobi, Tolulope T; Brahmbatt, Ronak; Lix, Lisa M; Zumbo, Bruno D; Sawatzky, Richard

    2018-05-01

    Response shift (RS) has been defined as a change in the meaning of an individual's self-evaluation of his/her health status and quality of life. Several statistical model- and design-based methods have been developed to test for RS in longitudinal data. We reviewed the uptake of these methods in the patient-reported outcomes (PRO) literature. CINAHL, EMBASE, Medline, ProQuest, PsycINFO, and Web of Science were searched to identify English-language articles about RS published until 2016. Data on year and country of publication, PRO measure adopted, RS detection method, type of RS detected, and testing of underlying model assumptions were extracted from the included articles. Of the 1032 articles identified, 101 (9.8%) articles were included in the study. While 54.5% of the articles reported on the Then-test, 30.7% of the articles reported on Oort's or Schmitt's structural equation modeling (SEM) procedure. Newer RS detection methods, such as relative importance analysis and random forest regression, have been used less frequently. Less than 25% reported on testing the assumptions underlying the adopted RS detection method(s). Despite rapid methodological advancements in RS research, this review highlights the need for further research about RS detection methods for complex longitudinal data and standardized reporting guidelines.

  20. Why you cannot transform your way out of trouble for small counts.

    PubMed

    Warton, David I

    2018-03-01

    While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation. This result has clear implications for the analysis of counts as often implemented in the applied sciences, but particularly for multivariate analysis in ecology. Multivariate discrete data are often collected in ecology, typically with a large proportion of zeros, and it is currently widespread to use methods of analysis that do not account for differences in variance across observations nor across responses. Simulations demonstrate that failure to account for the mean-variance relationship can have particularly severe consequences in this context, and also in the univariate context if the sampling design is unbalanced. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
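    The variance-proportional-to-mean result described above is easy to reproduce empirically. The sketch below is an illustrative simulation (Poisson counts, a log(y+1) transformation, and arbitrary mean values), not the paper's theoretical derivation: for small means, the variance of the transformed counts scales roughly with the mean instead of being stabilized.

```python
import numpy as np

rng = np.random.default_rng(11)
means = np.array([0.1, 0.5, 1.0, 5.0, 20.0])

print(" mean   var[log(y+1)]   var/mean")
for mu in means:
    y = rng.poisson(mu, size=200_000)
    v = np.log1p(y).var()
    print(f"{mu:5.1f}   {v:12.4f}   {v / mu:8.3f}")
# For small means, var/mean is roughly constant, i.e. the variance is still
# proportional to the mean; only for larger means does the transformation
# begin to level the variance across observations.
```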

  1. Calorimetry at the International Linear Collider

    NASA Astrophysics Data System (ADS)

    Repond, José

    2007-03-01

    The physics potential of the International Linear Collider depends critically on the jet energy resolution of its detector. Detector concepts are being developed which optimize the jet energy resolution, with the aim of achieving σjet=30%/√{Ejet}. Under the assumption that Particle Flow Algorithms (PFAs), which combine tracking and calorimeter information to reconstruct the energy of hadronic jets, can provide this unprecedented jet energy resolution, calorimeters with very fine granularity are being developed. After a brief introduction outlining the principles of PFAs, the current status of various calorimeter prototype construction projects and their plans for the next few years will be reviewed.

  2. Interim MELCOR Simulation of the Fukushima Daiichi Unit 2 Accident Reactor Core Isolation Cooling Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Kyle W.; Gauntt, Randall O.; Cardoni, Jeffrey N.

    2013-11-01

    Data, a brief description of key boundary conditions, and results of Sandia National Laboratories’ ongoing MELCOR analysis of the Fukushima Unit 2 accident are given for the reactor core isolation cooling (RCIC) system. Important assumptions and related boundary conditions in the current analysis additional to or different than what was assumed/imposed in the work of SAND2012-6173 are identified. This work is for the U.S. Department of Energy’s Nuclear Energy University Programs fiscal year 2014 Reactor Safety Technologies Research and Development Program RC-7: RCIC Performance under Severe Accident Conditions.

  3. 3D Multi-Level Non-LTE Radiative Transfer for the CO Molecule

    NASA Astrophysics Data System (ADS)

    Berkner, A.; Schweitzer, A.; Hauschildt, P. H.

    2015-01-01

    The photospheres of cool stars are both rich in molecules and an environment where the assumption of LTE can not be upheld under all circumstances. Unfortunately, detailed 3D non-LTE calculations involving molecules are hardly feasible with current computers. For this reason, we present our implementation of the super level technique, in which molecular levels are combined into super levels, to reduce the number of unknowns in the rate equations and, thus, the computational effort and memory requirements involved, and show the results of our first tests against the 1D implementation of the same method.

  4. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  5. Quasi-static evolution of coronal magnetic fields

    NASA Technical Reports Server (NTRS)

    Longcope, D. W.; Sudan, R. N.

    1992-01-01

    A formalism is developed to describe the purely quasi-static part of the evolution of a coronal loop driven by its footpoints. This is accomplished under assumptions of a long, thin loop. The quasi-static equations reveal the possibility for sudden 'loss of equilibrium' at which time the system evolves dynamically rather than quasi-statically. Such quasi-static crises produce high-frequency Alfven waves and, in conjunction with Alfven wave dissipation models, form a viable coronal heating mechanism. Furthermore, an approximate solution to the quasi-static equations by perturbation method verifies the development of small-scale spatial current structure.

  6. Landau-type expansion for the energy landscape of the designed heteropolymer

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander; Pande, Vijay; Tanaka, Toyoichi

    1997-03-01

    The concept of evolutionary optimization of heteropolymer sequences is used to construct a phenomenological theory describing the folding/unfolding kinetics of polymers with designed sequences. The relevant energy landscape is described in terms of a Landau expansion in powers of the overlap parameter between the current and the native conformations. It is shown that only the linear term is sequence (mutation) dependent, the rest being determined by the underlying conformational geometry. The theory is free of assumptions of the uncorrelated-energy-landscape type. We demonstrate the power of the theory by comparison with simulations and experiments.

  7. Effect of process parameters on temperature distribution in twin-electrode TIG coupling arc

    NASA Astrophysics Data System (ADS)

    Zhang, Guangjun; Xiong, Jun; Gao, Hongming; Wu, Lin

    2012-10-01

    The twin-electrode TIG coupling arc is a new type of welding heat source, which is generated in a single welding torch that has two tungsten electrodes insulated from each other. This paper aims at determining the distribution of temperature for the coupling arc using the Fowler-Milne method under the assumption of local thermodynamic equilibrium. The influences of welding current, arc length, and distance between both electrode tips on temperature distribution of the coupling arc were analyzed. Based on the results, a better understanding of the twin-electrode TIG welding process was obtained.

  8. An Unsolved Mystery: The Target-Recognizing RNA Species of MicroRNA Genes

    PubMed Central

    Chen, Chang-Zheng

    2013-01-01

    MicroRNAs (miRNAs) are an abundant class of endogenous ~ 21-nucleotide (nt) RNAs. These small RNAs are produced from long primary miRNA transcripts — pri-miRNAs — through sequential endonucleolytic maturation steps that yield precursor miRNA (pre-miRNA) intermediates and then the mature miRNAs. The mature miRNAs are loaded into the RNA-induced silencing complexes (RISC), and guide RISC to target mRNAs for cleavage and/or translational repression. This paradigm, which represents one of major discoveries of modern molecular biology, is built on the assumption that mature miRNAs are the only species produced from miRNA genes that recognize targets. This assumption has guided the miRNA field for more than a decade and has led to our current understanding of the mechanisms of target recognition and repression by miRNAs. Although progress has been made, fundamental questions remain unanswered with regard to the principles of target recognition and mechanisms of repression. Here I raise questions about the assumption that mature miRNAs are the only target-recognizing species produced from miRNA genes and discuss the consequences of working under an incomplete or incorrect assumption. Moreover, I present evolution-based and experimental evidence that support the roles of pri-/pre-miRNAs in target recognition and repression. Finally, I propose a conceptual framework that integrates the functions of pri-/pre-miRNAs and mature miRNAs in target recognition and repression. The integrated framework opens experimental enquiry and permits interpretation of fundamental problems that have so far been precluded. PMID:23685275

  9. Hot-spot heating susceptibility due to reverse bias operating conditions

    NASA Technical Reports Server (NTRS)

    Gonzalez, C. C.

    1985-01-01

    Because of field experience (indicating that cell and module degradation could occur as a result of hot spot heating), a laboratory test was developed at JPL to determine the hot spot susceptibility of modules. The initial hot spot testing work at JPL formed a foundation for the test development. Test parameters are selected as follows. For high shunt resistance cells, the applied back bias test current is set equal to the test cell current at maximum power. For low shunt resistance cells, the test current is set equal to the cell short circuit current. The shadow level is selected to conform to that which would lead to maximum back bias voltage under the appropriate test current level. The test voltage is determined by the bypass diode frequency. The test conditions are meant to simulate the thermal boundary conditions for a 100 mW/sq cm, 40 C ambient environment. The test lasts 100 hours. A key assumption made during the development of the test is that no current imbalance results from connecting multiple parallel cell strings. Therefore, the test as originally developed was applicable to the single-string case only.
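
    The test-current selection rule described above can be summarized in a short sketch; the shunt-resistance threshold and the cell parameters below are hypothetical placeholders, not values from the JPL test specification.

```python
# Minimal sketch of the hot-spot back-bias test-current selection rule
# described above. The shunt-resistance threshold and the example cell
# parameters are hypothetical placeholders.
def select_test_current(r_shunt_ohm, i_max_power_a, i_short_circuit_a,
                        r_shunt_threshold_ohm=10.0):
    """High-shunt-resistance cells are back-biased at the maximum-power
    current; low-shunt-resistance cells at the short-circuit current."""
    if r_shunt_ohm >= r_shunt_threshold_ohm:
        return i_max_power_a
    return i_short_circuit_a

print(select_test_current(r_shunt_ohm=50.0, i_max_power_a=2.1, i_short_circuit_a=2.4))  # 2.1
print(select_test_current(r_shunt_ohm=1.0, i_max_power_a=2.1, i_short_circuit_a=2.4))   # 2.4
```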

  10. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    PubMed

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Identification of Extraterrestrial Microbiology

    NASA Technical Reports Server (NTRS)

    Flynn, Michael; Rasky, Daniel J. (Technical Monitor)

    1998-01-01

    Many of the key questions addressed in the field of Astrobiology are based upon the assumption that life exists, or at one time existed, in locations throughout the universe. However, this assumption is just that, an assumption. No definitive proof exists. On Earth, life has been found to exist in many diverse environments. We believe that this tendency towards diversity supports the assumption that life could exist throughout the universe. This paper provides a summary of several innovative techniques for the detection of extraterrestrial life forms. The primary questions addressed are: does life currently exist beyond Earth, and if it does, is that life evolutionarily related to life on Earth?

  12. Weak annihilation and new physics in charmless [Formula: see text] decays.

    PubMed

    Bobeth, Christoph; Gorbahn, Martin; Vickers, Stefan

    We use currently available data of nonleptonic charmless 2-body [Formula: see text] decays ([Formula: see text]) that are mediated by [Formula: see text] QCD- and QED-penguin operators to study weak annihilation and new-physics effects in the framework of QCD factorization. In particular we introduce one weak-annihilation parameter for decays related by [Formula: see text] quark interchange and test this universality assumption. Within the standard model, the data supports this assumption with the only exceptions in the [Formula: see text] system, which exhibits the well-known "[Formula: see text] puzzle", and some tensions in [Formula: see text]. Beyond the standard model, we simultaneously determine weak-annihilation and new-physics parameters from data, employing model-independent scenarios that address the "[Formula: see text] puzzle", such as QED-penguins and [Formula: see text] current-current operators. We discuss also possibilities that allow further tests of our assumption once improved measurements from LHCb and Belle II become available.

  13. Dendritic solidification. I - Analysis of current theories and models. II - A model for dendritic growth under an imposed thermal gradient

    NASA Technical Reports Server (NTRS)

    Laxmanan, V.

    1985-01-01

    A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized, and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and correctly predicts, in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.

  14. Algae biodiesel - a feasibility report

    PubMed Central

    2012-01-01

    Background Algae biofuels have been studied numerous times, including the Aquatic Species Program in 1978 in the U.S., smaller laboratory research projects and private programs. Results Using Molina Grima 2003 and Department of Energy figures, capital costs and operating costs of the closed systems and open systems were estimated. Conservative estimates yielded costs per gallon of $1,292.05 and $114.94 for closed and open ponds, respectively. Contingency scenarios were generated in which the cost per gallon of closed-system biofuels would reach $17.54 under the generous conditions of 60% yield, 50% reduction in the capital costs and 50% hexane recovery. The price per gallon of open-system fuel could reach $1.94 under generous assumptions of 30% yield and $0.2/kg CO2. Conclusions Current subsidies could allow biodiesel to be produced economically under the generous conditions specified by the model. PMID:22540986

  15. Modelling the effect of electrode displacement on transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Ramaraju, Sriharsha; Roula, Mohammed A.; McCarthy, Peter W.

    2018-02-01

    Objective. Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers a low-intensity, direct current to cortical areas with the purpose of modulating underlying brain activity. Recent studies have reported inconsistencies in tDCS outcomes. The underlying assumption of many tDCS studies has been that replication of electrode montage equates to replicating stimulation conditions. It is possible however that anatomical differences between subjects, as well as inherent inaccuracies in montage placement, could affect current flow to targeted areas. The hypothesis that stimulation of a defined brain region will be stable under small displacements was tested. Approach. Initially, we compared the total simulated current flowing through ten specific brain areas for four commonly used tDCS montages: F3-Fp2, C3-Fp2, Fp1-F4, and P3-P4 using the software tool COMETS. The effect of a slight (~1 cm in each of four directions) anode displacement on the simulated regional current density for each of the four tDCS montages was then determined. Current flow was calculated and compared through ten segmented brain areas to determine the effect of montage type and displacement. The regional currents, as well as the localised current densities, were compared with the original electrode location, for each of these new positions. Main results. Recommendations for montages that maximise stimulation current for the ten brain regions are considered. We noted that the extent to which stimulation is affected by electrode displacement varies depending on both area and montage type. The F3-Fp2 montage was found to be the least stable, with up to a 38% change in average current density in the left frontal lobe, while the Fp1-F4 montage was found to be the most stable, exhibiting only a 1% change when electrodes were displaced. Significance. These results indicate that even relatively small changes in stimulation electrode placement appear to result in surprisingly large changes in current densities and distribution.

  16. Qualifications and Assignments of Alternatively Certified Teachers: Testing Core Assumptions

    ERIC Educational Resources Information Center

    Cohen-Vogel, Lora; Smith, Thomas M.

    2007-01-01

    By analyzing data from the Schools and Staffing Survey, the authors empirically test four of the core assumptions embedded in current arguments for expanding alternative teacher certification (AC): AC attracts experienced candidates from fields outside of education; AC attracts top-quality, well-trained teachers; AC disproportionately trains…

  17. The Cost of CAI: A Matter of Assumptions.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…

  18. Queries for Bias Testing

    NASA Technical Reports Server (NTRS)

    Gordon, Diana F.

    1992-01-01

    Selecting a good bias prior to concept learning can be difficult. Therefore, dynamic bias adjustment is becoming increasingly popular. Current dynamic bias adjustment systems, however, are limited in their ability to identify erroneous assumptions about the relationship between the bias and the target concept. Without proper diagnosis, it is difficult to identify and then remedy faulty assumptions. We have developed an approach that makes these assumptions explicit, actively tests them with queries to an oracle, and adjusts the bias based on the test results.

  19. A multi-scale health impact assessment of air pollution over the 21st century.

    PubMed

    Likhvar, Victoria N; Pascal, Mathilde; Markakis, Konstantinos; Colette, Augustin; Hauglustaine, Didier; Valari, Myrto; Klimont, Zbigniew; Medina, Sylvia; Kinney, Patrick

    2015-05-01

    Ozone and PM₂.₅ are current risk factors for premature death all over the globe. In coming decades, substantial improvements in public health may be achieved by reducing air pollution. To better understand the potential of emissions policies, studies are needed that assess possible future health impacts under alternative assumptions about future emissions and climate across multiple spatial scales. We used a consistent climate-air-quality-health modeling framework across three geographical scales (World, Europe and Ile-de-France) to assess future (2030-2050) health impacts of ozone and PM₂.₅ under two emissions scenarios (Current Legislation Emissions, CLE, and Maximum Feasible Reductions, MFR). Consistently across the scales, we found more reductions in deaths under the MFR scenario compared to CLE. 1.5 [95% CI: 0.4, 2.4] million CV deaths could be delayed each year in 2030 compared to 2010 under the MFR scenario, 84% of which would occur in Asia, especially in China. In Europe, the benefits under the MFR scenario (219,000 CV deaths) are noticeably larger than those under CLE (109,000 CV deaths). In Ile-de-France, under MFR more than 2830 annual CV deaths associated with PM₂.₅ changes could be delayed in 2050 compared to 2010. In Paris, ozone-related respiratory mortality should increase under both scenarios. Multi-scale HIAs can illustrate the difference in direct consequences of costly mitigation policies and provide results that may help decision-makers choose between different policy alternatives at different scales. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Development of perspective-based water management strategies for the Rhine and Meuse basins.

    PubMed

    van Deursen, W P A; Middelkoop, H

    2005-01-01

    Water management is surrounded by uncertainties. Water management thus has to answer the question: given the uncertainties, what is the best management strategy? This paper describes the application of the perspectives method on water management in the Rhine and Meuse basins. In the perspectives method, a structured framework to analyse water management strategies under uncertainty is provided. Various strategies are clustered in perspectives according to their underlying assumptions. This framework allows for an analysis of current water management strategies, but also allows for evaluation of the robustness of proposed future water strategies. It becomes clear that no water management strategy is superior to the others, but that inherent choices on risk acceptance and costs make a real political dilemma which will not be solved by further optimisation.

  1. Modeling the allocation system: principles for robust design before restructuring.

    PubMed

    Mehrotra, Sanjay; Kilambi, Vikram; Gilroy, Richard; Ladner, Daniela P; Klintmalm, Goran B; Kaplan, Bruce

    2015-02-01

    The United Network for Organ Sharing is poised to resolve geographic disparity in liver transplantation and promote allocation based on medical urgency. At the time of writing, United Network for Organ Sharing is considering redistricting the organ procurement and transplantation network so that patients' Model for End-Stage Liver Disease (MELD) scores at transplant are more uniform across regions. We review the proposal with a systems-engineering focus and find that, although the proposal is promising, it currently lacks evidence that it would perform effectively under realistic departures from its underlying data and assumptions. Moreover, we caution against prematurely focusing on redistricting as the only method to mitigate disparity. We describe system modeling principles which, if followed, will ensure that the redesigned allocation system is effective and efficient in achieving the intended goals.

  2. Energy storage and dissipation in the magnetotail during substorms. I - Particle simulations. II - MHD simulations

    NASA Technical Reports Server (NTRS)

    Winglee, R. M.; Steinolfson, R. S.

    1993-01-01

    2D electromagnetic particle simulations are used to investigate the dynamics of the tail during the development of substorms under the influence of the pressure in the magnetospheric boundary layer and the dawn-to-dusk electric field. It is shown that pressure pulses result in thinning of the tail current sheet as the magnetic field becomes pinched near the region where the pressure pulse is applied. The pinching leads to the tailward flow of the current sheet plasma and the eventual formation and injection of a plasmoid. Surges in the dawn-to-dusk electric field cause plasma on the flanks to convect into the center of the current sheet, thereby thinning the current sheet. The pressure in the magnetospheric boundary layer is coupled to the dawn-to-dusk electric field through the conductivity of the tail. Changes in the predicted evolution of the magnetosphere during substorms due to changes in the resistivity are investigated under the assumption that MHD theory provides a suitable representation of the global or large-scale evolution of the magnetotail to changes in the solar wind and to reconnection at the dayside magnetopause. It is shown that the overall evolution of the magnetosphere is about the same for three different resistivity distributions, with plasmoid formation and ejection in each case.

  3. Energy breakdown in capacitive deionization.

    PubMed

    Hemmatifar, Ali; Palko, James W; Stadermann, Michael; Santiago, Juan G

    2016-11-01

    We explored the energy loss mechanisms in capacitive deionization (CDI). We hypothesize that resistive and parasitic losses are two main sources of energy losses. We measured the contribution from each loss mechanism in water desalination with constant current (CC) charge/discharge cycling. Resistive energy loss is expected to dominate in high current charging cases, as it increases approximately linearly with current for fixed charge transfer (resistive power loss scales as the square of current and charging time scales as the inverse of current). On the other hand, parasitic loss is dominant in low current cases, as the electrodes spend more time at higher voltages. We built a CDI cell with five electrode pairs and a standard flow-between architecture. We performed a series of experiments with various cycling currents and cut-off voltages (voltage at which current is reversed) and studied these energy losses. To this end, we measured the series resistance of the cell (contact resistances, resistance of wires, and resistance of solution in spacers) during charging and discharging from the voltage response of a small amplitude AC current signal added to the underlying cycling current. We performed a separate set of experiments to quantify the parasitic (or leakage) current of the cell versus cell voltage. We then used these data to estimate parasitic losses under the assumption that leakage current is primarily voltage (and not current) dependent. Our results confirmed that resistive and parasitic losses respectively dominate in the limit of high and low currents. We also measured salt adsorption and report energy-normalized adsorbed salt (ENAS, energy loss per ion removed) and average salt adsorption rate (ASAR). We show a clear tradeoff between ASAR and ENAS and show that balancing these losses leads to optimal energy efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
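
    A minimal sketch of this constant-current energy bookkeeping is given below; the cell parameters and the exponential leakage model are hypothetical placeholders, used only to illustrate why resistive losses dominate at high current and parasitic losses at low current.

```python
# Minimal sketch of constant-current energy bookkeeping for a CDI-like cell.
# Cell parameters and the leakage model are hypothetical placeholders.
import numpy as np

R_SERIES = 2.0                      # ohm, assumed series resistance
LEAK_A, LEAK_B = 1e-4, 4.0          # assumed leakage model I_leak(V) = a*exp(b*V)
CAPACITANCE = 50.0                  # F, assumed effective cell capacitance

def cycle_losses(i_charge, v_cutoff, dt=0.01):
    """Integrate resistive and parasitic losses while charging from 0 V to
    v_cutoff at constant current i_charge (discharge handled analogously)."""
    v, t, e_resistive, e_parasitic = 0.0, 0.0, 0.0, 0.0
    while v < v_cutoff:
        i_leak = LEAK_A * np.exp(LEAK_B * v)          # voltage-dependent leakage
        e_resistive += i_charge**2 * R_SERIES * dt    # I^2 R loss
        e_parasitic += v * i_leak * dt                # parasitic loss at voltage v
        v += (i_charge - i_leak) / CAPACITANCE * dt   # only net current charges the cell
        t += dt
    return e_resistive, e_parasitic, t

for i in (0.1, 1.0):                                   # low vs. high charging current (A)
    e_r, e_p, t = cycle_losses(i_charge=i, v_cutoff=1.2)
    print(f"I={i} A: resistive={e_r:.2f} J, parasitic={e_p:.2f} J, charge time={t:.0f} s")
```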

  4. Energy breakdown in capacitive deionization

    DOE PAGES

    Hemmatifar, Ali; Palko, James W.; Stadermann, Michael; ...

    2016-08-12

    We explored the energy loss mechanisms in capacitive deionization (CDI). We hypothesize that resistive and parasitic losses are two main sources of energy losses. We measured the contribution from each loss mechanism in water desalination with constant current (CC) charge/discharge cycling. Resistive energy loss is expected to dominate in high current charging cases, as it increases approximately linearly with current for fixed charge transfer (resistive power loss scales as the square of current and charging time scales as the inverse of current). On the other hand, parasitic loss is dominant in low current cases, as the electrodes spend more time at higher voltages. We built a CDI cell with five electrode pairs and a standard flow-between architecture. We performed a series of experiments with various cycling currents and cut-off voltages (voltage at which current is reversed) and studied these energy losses. To this end, we measured the series resistance of the cell (contact resistances, resistance of wires, and resistance of solution in spacers) during charging and discharging from the voltage response of a small amplitude AC current signal added to the underlying cycling current. We performed a separate set of experiments to quantify the parasitic (or leakage) current of the cell versus cell voltage. We then used these data to estimate parasitic losses under the assumption that leakage current is primarily voltage (and not current) dependent. Our results confirmed that resistive and parasitic losses respectively dominate in the limit of high and low currents. We also measured salt adsorption and report energy-normalized adsorbed salt (ENAS, energy loss per ion removed) and average salt adsorption rate (ASAR). As a result, we show a clear tradeoff between ASAR and ENAS and show that balancing these losses leads to optimal energy efficiency.

  5. Validity in work-based assessment: expanding our horizons.

    PubMed

    Govaerts, Marjan; van der Vleuten, Cees P M

    2013-12-01

    Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence that indicates that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. Drawing from research in various professional fields we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) as well as performance interpretations are to be seen as inherently contextualised, and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. Theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the theoretical framework and conceptual tools to guide the validation process in the qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these strategies generate trustworthy evidence that is needed to develop the validity argument in WBA, allowing for in-depth and meaningful information about professional competence. © 2013 John Wiley & Sons Ltd.

  6. A Reactor Development Scenario for the FUZE Shear-flow Stabilized Z-pinch

    NASA Astrophysics Data System (ADS)

    McLean, H. S.; Higginson, D. P.; Schmidt, A.; Tummel, K. K.; Shumlak, U.; Nelson, B. A.; Claveau, E. L.; Golingo, R. P.; Weber, T. R.

    2016-10-01

    We present a conceptual design, scaling calculations, and a development path for a pulsed fusion reactor based on the shear-flow-stabilized Z-pinch device. Experiments performed on the ZaP device have demonstrated stable operation for 40 µs at 150 kA total discharge current (with 100 kA in the pinch) for pinches that are 1 cm in diameter and 100 cm long. Scaling calculations show that achieving stabilization for a pulse of 100 µs, at a discharge current of 1.5 MA, in a shortened pinch of 50 cm, results in a pinch diameter of 200 µm and a reactor plant Q of approximately 5 for reasonable assumptions about the various system efficiencies. We propose several key intermediate performance levels in order to justify further development. These include achieving operation at pinch currents of 300 kA, where Te and Ti are calculated to exceed 1 keV; 700 kA, where fusion power exceeds pinch input power; and 1 MA, where fusion energy per pulse exceeds input energy per pulse. This work was funded by the USDOE ARPA-E ALPHA Program and performed under the auspices of Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-697801.

  7. A Logistic Regression and Markov Chain Model for the Prediction of Nation-state Violent Conflicts and Transitions

    DTIC Science & Technology

    2016-03-24

    McCarthy, Blood Meridian ... 1.1 General Issue: Violent conflict between competing groups has been a pervasive and driving force for all of human history... It has evolved from small skirmishes between unarmed groups, wielding rudimentary weapons, to industrialized global conflagrations. Global... The study methodology is presented in Figure 2 (Study Methodology). ... 1.6 Study Assumptions and Limitations. Assumptions: Four underlying assumptions were

  8. Teaching Practices: Reexamining Assumptions.

    ERIC Educational Resources Information Center

    Spodek, Bernard, Ed.

    This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…

  9. 29 CFR 4050.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...

  10. 29 CFR 4050.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...

  11. 29 CFR 4050.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...

  12. 29 CFR 4050.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...

  13. 29 CFR 4050.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... interest rate means the rate of interest applicable to underpayments of guaranteed benefits by the PBGC... of proof of death, individuals not located are presumed living. Missing participant annuity assumptions means the interest rate assumptions and actuarial methods for valuing benefits under § 4044.52 of...

  14. Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?

    PubMed Central

    Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
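
    For illustration, a minimal sketch of the kind of assumption checks discussed above (normality and homogeneity of variance before a two-sample t-test) is given below; the data are simulated placeholders and the 0.05 threshold is conventional, not prescribed by the study.

```python
# Minimal sketch of checking common assumptions before a two-sample t-test.
# The data are simulated placeholders; the 0.05 threshold is conventional.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.5, scale=1.5, size=30)

# Normality within each group (Shapiro-Wilk) and homogeneity of variance (Levene).
for name, sample in (("A", group_a), ("B", group_b)):
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk group {name}: W={w:.3f}, p={p:.3f}")
levene_stat, levene_p = stats.levene(group_a, group_b)
print(f"Levene: W={levene_stat:.3f}, p={levene_p:.3f}")

# Choose the test accordingly: Welch's t-test if variances look unequal.
equal_var = levene_p > 0.05
t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"t-test (equal_var={equal_var}): t={t:.3f}, p={p:.3f}")
```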

  15. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    PubMed

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. To present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
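
    A heavily simplified sketch of the implementation adjustment idea is given below; it is not the authors' model, and the per-patient EVSI, population, discount rate, and uptake ramps are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' model) of adjusting population EVSI for
# gradual implementation: each year's value is scaled by the gain in uptake
# attributable to the new information. All parameters are hypothetical.
def population_value(evsi_per_patient, annual_population, years, discount,
                     uptake_with_info, uptake_without_info):
    total = 0.0
    for year in range(1, years + 1):
        gain = uptake_with_info(year) - uptake_without_info(year)  # extra uptake due to research
        total += evsi_per_patient * annual_population * max(gain, 0.0) / (1 + discount) ** year
    return total

ramp = lambda rate: (lambda year: min(1.0, rate * year))   # linear ramp towards full implementation
ideal = population_value(50.0, 20000, 10, 0.035, lambda y: 1.0, lambda y: 0.0)   # 'perfect' implementation
adjusted = population_value(50.0, 20000, 10, 0.035, ramp(0.2), ramp(0.05))       # 'improved' implementation
print(f"EVSI (perfect implementation): £{ideal:,.0f}")
print(f"IA-EVSI (improved implementation): £{adjusted:,.0f}")
```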

  16. Smoldering of porous media: numerical model and comparison of calculations with experiment

    NASA Astrophysics Data System (ADS)

    Lutsenko, N. A.; Levin, V. A.

    2017-10-01

    Numerical modelling of smoldering in porous media under natural convection is considered. Smoldering can be defined as a flameless exothermic surface reaction; it is a type of heterogeneous combustion which can propagate in porous media. Peatbogs, landfills and other natural or man-made porous objects can sustain smoldering under natural (or free) convection, when the flow rate of gas passing through the porous object is unknown a priori. In the present work a numerical model is proposed for investigating smoldering in porous media under natural convection. The model is based on the assumption of interacting interpenetrating continua using classical approaches of the theory of filtration combustion and includes equations of state, continuity, momentum conservation and energy for solid and gas phases. Computational results obtained by means of the numerical model in the one-dimensional case are compared with the experimental data of the smoldering combustion in polyurethane foam under free convection in the gravity field, which were described in the literature. Calculations show that when simulating both co-current combustion (when the smoldering wave moves upward) and counter-current combustion (when the smoldering wave moves downward), the numerical model can provide a good quantitative agreement with experiment if the parameters of the model are well defined.

  17. Ego depletion and attention regulation under pressure: is a temporary loss of self-control strength indeed related to impaired attention regulation?

    PubMed

    Englert, Chris; Zwemmer, Kris; Bertrams, Alex; Oudejans, Raôul R

    2015-04-01

    In the current study we investigated whether ego depletion negatively affects attention regulation under pressure in sports by assessing participants' dart throwing performance and accompanying gaze behavior. According to the strength model of self-control, the most important aspect of self-control is attention regulation. Because higher levels of state anxiety are associated with impaired attention regulation, we chose a mixed design with ego depletion (yes vs. no) as between-subjects and anxiety level (high vs. low) as within-subjects factor. Participants performed a perceptual-motor task requiring selective attention, namely, dart throwing. In line with our expectations, depleted participants in the high-anxiety condition performed worse and displayed a shorter final fixation on bull's eye, demonstrating that when one's self-control strength is depleted, attention regulation under pressure cannot be maintained. This is the first study that directly supports the general assumption that ego depletion is a major factor in influencing attention regulation under pressure.

  18. Foundations for Protecting Renewable-Rich Distribution Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Brahma, Sukumar; Ranade, Satish

    High proliferation of Inverter Interfaced Distributed Energy Resources (IIDERs) into the electric distribution grid introduces new challenges to protection of such systems. This is because the existing protection systems are designed with two assumptions: (1) the system is single-sourced, resulting in unidirectional fault current, and (2) fault currents are easily detectable due to much higher magnitudes compared to load currents. Because most renewables interface with the grid through inverters, and inverters restrict their current output to levels close to the full load currents, both these assumptions are no longer valid - the system becomes multi-sourced, and overcurrent-based protection does not work. The primary scope of this study is to analyze the response of a grid-tied inverter to different faults in the grid, leading to new guidelines on protecting renewable-rich distribution systems.

  19. An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984.

    DTIC Science & Technology

    1986-06-01

    An Analysis of the Economic Assumptions Underlying Fiscal Plans FY1981 - FY1984, by Robert Welch Beck, June 1986. Thesis Advisor: P. M. Carrick. Approved for public release; distribution is unlimited.

  20. Intergenerational resource transfers with random offspring numbers

    PubMed Central

    Arrow, Kenneth J.; Levin, Simon A.

    2009-01-01

    A problem common to biology and economics is the transfer of resources from parents to children. We consider the issue under the assumption that the number of offspring is unknown and can be represented as a random variable. There are 3 basic assumptions. The first assumption is that a given body of resources can be divided into consumption (yielding satisfaction) and transfer to children. The second assumption is that the parents' welfare includes a concern for the welfare of their children; this is recursive in the sense that the children's welfares include concern for their children and so forth. However, the welfare of a child from a given consumption is counted somewhat differently (generally less) than that of the parent (the welfare of a child is “discounted”). The third assumption is that resources transferred may grow (or decline). In economic language, investment, including that in education or nutrition, is productive. Under suitable restrictions, precise formulas for the resulting allocation of resources are found, demonstrating that, depending on the shape of the utility curve, uncertainty regarding the number of offspring may or may not favor increased consumption. The results imply that wealth (stock of resources) will ultimately have a log-normal distribution. PMID:19617553

  1. Demystifying Welfare: Its Feminization and Its Effect on Stakeholders

    ERIC Educational Resources Information Center

    Hartlep, Nicholas D.

    2008-01-01

    Welfare is misunderstood, mystified, and feminized by many stakeholders (i.e. government, media, majoritarian culture, etc.). This text analysis will assess how well the text achieved the following: (1) articulate why the current U.S. welfare state is based upon myths or false assumptions, (2) analyze what these false assumptions mean for…

  2. Random mandatory drugs testing of prisoners: a biassed means of gathering information.

    PubMed

    Gore, S M; Bird, A G; Strang, J S

    1999-01-01

    Our objective was to develop and test a methodology for inferring the percentage of prisoners currently using opiates from the percentage of prisoners testing positive for opiates in random mandatory drugs testing (rMDT). The study used results from Willing Anonymous Salivary HIV (WASH) studies (1994-6) in six adult Scottish prisons, and surveys (1994-5 and 1997) in 14 prisons in England and Wales. For Scottish prisons, the percentage of prisoners currently using opiates was determined by assuming, with varying empirical support, that: current users of opiates in prison were 1.5 times as many as current inside-injectors; and current inside-injectors were 0.75 times as many as ever injectors in prison. We also assumed that current inside-users' frequency of use of opiates (by any route) was equal to the frequency of inside-injecting by current inside-injectors in Aberdeen and Lowmoss Prisons in 1996, namely six times in 4 weeks. We assumed that some scheduling of heroin-use prior to weekends takes place, so that only 50% of current inside-users of opiates would test positive for opiates in rMDT: these assumptions allow us to arrive at WASH-based expectations for the total percentage of prisoners testing positive for opiates in rMDT. For England and Wales, a multiplier of 118/68, derived from prisoners' interviews, was applied to convert the results from ever inside-injectors, as determined by WASH studies, to the percentage of current inside-users of opiates. We made the same assumptions on frequency of inside-use of opiates as in dealing with the Scottish results. We expected 202.7 opiate-positive results in April to September 1997 in rMDTs at six adult prisons in Scotland; 226 were observed. We expected 227.0 at a set of 13 adult prisons and one other in England and Wales; 211 were observed. Further testing of the methodology for prisons in England and Wales will be possible when 1997 WASH data are released. So far, the methodology has performed well. From it, we infer that 24% of inmates at the six adult prisons in Scotland were current inside-users of opiates, compared to 11% at the 14 adult prisons where survey data were available in England and Wales. The corresponding April to September 1997 percentages of opiate positives in rMDT were 13% (results from the six Scottish prisons) and 5.4% (results from 14 prisons in England and Wales), a two-fold under-estimate of the percentage of current users of opiates in prison (24% and 11%). Planning of drug rehabilitation places for prisoners should thus be based on twice the percentage of prisoners testing opiate-positive in rMDT. This correction factor of two should be kept under review.
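
    The inference chain can be summarized in a short sketch using the multipliers quoted above; the percentage of ever inside-injectors used as input is a hypothetical placeholder.

```python
# Minimal sketch of the inference chain described above, using the multipliers
# quoted in the abstract; the prevalence input is an illustrative placeholder.
def expected_rmdt_positive(pct_ever_injectors_in_prison,
                           injector_to_current_injector=0.75,
                           current_injector_to_current_user=1.5,
                           detection_fraction=0.5):
    """Convert % ever inside-injectors into the expected % testing opiate
    positive in random mandatory drugs testing (rMDT)."""
    pct_current_users = (pct_ever_injectors_in_prison
                         * injector_to_current_injector
                         * current_injector_to_current_user)
    return pct_current_users, pct_current_users * detection_fraction

users, positives = expected_rmdt_positive(21.0)   # hypothetical % ever inside-injectors
print(f"current inside-users ~{users:.0f}%, expected rMDT opiate positives ~{positives:.0f}%")
# The ratio users/positives is 2, i.e. rMDT under-estimates current use two-fold.
```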

  3. A Memory Based Model of Posttraumatic Stress Disorder: Evaluating Basic Assumptions Underlying the PTSD Diagnosis

    PubMed Central

    Rubin, David C.; Berntsen, Dorthe; Johansen, Malene Klindt

    2009-01-01

    In the mnemonic model of PTSD, the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the DSM. The model accounts for important and reliable findings that are often inconsistent with the current diagnostic view and that have been neglected by theoretical accounts of the disorder, including the following observations. The diagnosis needs objective information about the trauma and peritraumatic emotions, but uses retrospective memory reports that can have substantial biases. Negative events and emotions that do not satisfy the current diagnostic criteria for a trauma can be followed by symptoms that would otherwise qualify for PTSD. Predisposing factors that affect the current memory have large effects on symptoms. The inability-to-recall-an-important-aspect-of-the-trauma symptom does not correlate with other symptoms. Loss or enhancement of the trauma memory affects PTSD symptoms in predictable ways. Special mechanisms that apply only to traumatic memories are not needed, increasing parsimony and the knowledge that can be applied to understanding PTSD. PMID:18954211

  4. Associating ground magnetometer observations with current or voltage generators

    NASA Astrophysics Data System (ADS)

    Hartinger, M. D.; Xu, Z.; Clauer, C. R.; Yu, Y.; Weimer, D. R.; Kim, H.; Pilipenko, V.; Welling, D. T.; Behlke, R.; Willer, A. N.

    2017-07-01

    A circuit analogy for magnetosphere-ionosphere current systems has two extremes for drivers of ionospheric currents: ionospheric electric fields/voltages constant while current/conductivity vary—the "voltage generator"—and current constant while electric field/conductivity vary—the "current generator." Statistical studies of ground magnetometer observations associated with dayside Transient High Latitude Current Systems (THLCS) driven by similar mechanisms find contradictory results using this paradigm: some studies associate THLCS with voltage generators, others with current generators. We argue that most of this contradiction arises from two assumptions used to interpret ground magnetometer observations: (1) measurements made at fixed position relative to the THLCS field-aligned current and (2) negligible auroral precipitation contributions to ionospheric conductivity. We use observations and simulations to illustrate how these two assumptions substantially alter expectations for magnetic perturbations associated with either a current or a voltage generator. Our results demonstrate that before interpreting ground magnetometer observations of THLCS in the context of current/voltage generators, the location of a ground magnetometer station relative to the THLCS field-aligned current and the location of any auroral zone conductivity enhancements need to be taken into account.

  5. Assumptions Underlying the Use of Different Types of Simulations.

    ERIC Educational Resources Information Center

    Cunningham, J. Barton

    1984-01-01

    Clarifies appropriateness of certain simulation approaches by distinguishing between different types of simulations--experimental, predictive, evaluative, and educational--on the basis of purpose, assumptions, procedures, and criteria for evaluating. The kinds of questions each type best responds to are discussed. (65 references) (MBR)

  6. ADJECTIVES AS NOUN PHRASES.

    ERIC Educational Resources Information Center

    ROSS, JOHN ROBERT

    THIS ANALYSIS OF UNDERLYING SYNTACTIC STRUCTURE IS BASED ON THE ASSUMPTION THAT THE PARTS OF SPEECH CALLED "VERBS" AND "ADJECTIVES" ARE TWO SUBCATEGORIES OF ONE MAJOR LEXICAL CATEGORY, "PREDICATE." FROM THIS ASSUMPTION, THE HYPOTHESIS IS ADVANCED THAT, IN LANGUAGES EXHIBITING THE COPULA, THE DEEP STRUCTURE OF SENTENCES CONTAINING PREDICATE…

  7. 24 CFR 58.4 - Assumption authority.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate that...

  8. Strategies to take into account variations in extreme rainfall events for design storms in urban area: an example over Naples (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Mercogliano, P.; Rianna, G.

    2017-12-01

    Prominent studies have highlighted ongoing increases in extreme rainfall events in the available observations, while climate models project further increases in the future. Although limitations of rainfall observation networks and uncertainties in climate modelling still affect such investigations in a significant way, the large impacts potentially induced by climate change (CC) argue for adopting effective adaptation measures. In this regard, design storms are used by engineers to size hydraulic infrastructures potentially affected by direct (e.g. pluvial/urban flooding) and indirect (e.g. river flooding) effects of extreme rainfall events. Usually they are expressed as IDF curves, mathematical relationships between rainfall Intensity, Duration, and the return period (Frequency). IDF curves are estimated by interpreting past rainfall records through extreme-value statistical theories under the assumption of stationary conditions, and are therefore unsuitable under climate change. In this work, a methodology to estimate future variations in IDF curves is presented and applied to the city of Naples (Southern Italy). The Equidistance Quantile Matching Approach proposed by Sivrastav et al. (2014) is adopted. According to it, daily and sub-daily maximum precipitation observations [a] and the analogous daily data provided by climate projections for the current [b] and future [c] time spans are interpreted in IDF terms through the Generalized Extreme Value (GEV) approach. A quantile-based mapping approach is then used to establish statistical relationships between the cumulative distribution functions resulting from the GEV fits of [a] and [b] (spatial downscaling) and of [b] and [c] (temporal downscaling). Coupling the relations so obtained permits generating IDF curves under the CC assumption. To account for uncertainties in future projections, all climate simulations available for the area in the EURO-CORDEX multimodel ensemble at 0.11° (about 12 km) are considered, under three different concentration scenarios (RCP2.6, RCP4.5 and RCP8.5). The results are largely influenced by the models, the RCPs and the time horizon of interest; nevertheless, clear indications of increases are detectable, although with different magnitudes for the different precipitation durations.
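
    A minimal sketch of the quantile-matching idea (temporal downscaling step only) is given below; the annual-maximum series are random placeholders, and the additive quantile shift is one simple variant, not necessarily the exact formulation of the cited approach.

```python
# Minimal sketch of equidistant-quantile-matching-style updating of extremes:
# fit GEV distributions to observed, model-current and model-future annual
# maxima, then shift observed quantiles by the model-projected change.
# The data arrays are random placeholders, not values from the study.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
obs_annual_max = rng.gumbel(loc=40.0, scale=10.0, size=30)   # observed daily maxima (mm)
model_current  = rng.gumbel(loc=35.0, scale=9.0,  size=30)   # model, current climate
model_future   = rng.gumbel(loc=42.0, scale=12.0, size=30)   # model, future scenario

fit_obs, fit_cur, fit_fut = (genextreme.fit(x) for x in
                             (obs_annual_max, model_current, model_future))

def future_quantile(return_period_years):
    """Observed quantile shifted by the model-projected change at the same
    non-exceedance probability (temporal downscaling step only)."""
    p = 1.0 - 1.0 / return_period_years
    change = genextreme.ppf(p, *fit_fut) - genextreme.ppf(p, *fit_cur)
    return genextreme.ppf(p, *fit_obs) + change

for T in (10, 50, 100):
    print(f"T={T} yr: current {genextreme.ppf(1 - 1/T, *fit_obs):.1f} mm, "
          f"future {future_quantile(T):.1f} mm")
```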

  9. Behavior of Triple Langmuir Probes in Non-Equilibrium Plasmas

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Ratcliffe, Alicia C.

    2018-01-01

    The triple Langmuir probe is an electrostatic probe in which three probe tips collect current when inserted into a plasma. The triple probe differs from a simple single Langmuir probe in the nature of the voltage applied to the probe tips. In the single probe, a swept voltage is applied to the probe tip to acquire a waveform showing the collected current as a function of applied voltage (I-V curve). In a triple probe three probe tips are electrically coupled to each other with constant voltages applied between each of the tips. The voltages are selected such that they would represent three points on the single Langmuir probe I-V curve. Elimination of the voltage sweep makes it possible to measure time-varying plasma properties in transient plasmas. Under the assumption of a Maxwellian plasma, one can determine the time-varying plasma temperature Te(t) and number density ne(t) from the applied voltage levels and the time-histories of the collected currents. In the present paper we examine the theory of triple probe operation, specifically focusing on the assumption of a Maxwellian plasma. Triple probe measurements have been widely employed for a number of pulsed and time-varying plasmas, including pulsed plasma thrusters (PPTs), dense plasma focus devices, plasma flows, and fusion experiments. While the equilibrium assumption may be justified for some applications, it is unlikely that it is fully justifiable for all pulsed and time-varying plasmas or for all times during the pulse of a plasma device. To examine a simple non-equilibrium plasma case, we return to the basic governing equations of probe current collection and compute the current to the probes for a distribution function consisting of two Maxwellian distributions with different temperatures (the two-temperature Maxwellian). A variation of this method is also employed, where one of the Maxwellians is offset from zero (in velocity space) to add a suprathermal beam of electrons to the tail of the main Maxwellian distribution (the bump-on-the-tail distribution function). For a range of parameters in these non-Maxwellian distributions, we compute the current collection to the probes. We compare the distribution function that was assumed a priori with the distribution function one would infer when applying standard triple probe theory to analyze the collected currents. For the assumed class of non-Maxwellian distribution functions, this serves to illustrate the effect a non-Maxwellian plasma would have on results interpreted using the equilibrium triple probe current collection theory, allowing us to state the magnitudes of these deviations as a function of the assumed distribution function properties.
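
    For illustration, a minimal sketch of electron current collection for a two-temperature Maxwellian is given below, using standard planar retardation-region theory rather than the paper's specific formulation; all plasma parameters are hypothetical.

```python
# Minimal sketch: electron current collected by a probe tip in the electron-
# retarding region for a two-temperature Maxwellian plasma (standard planar
# collection theory; not the specific formulation of the paper). All plasma
# parameters are hypothetical.
import numpy as np

E_CHARGE = 1.602e-19      # C
M_ELECTRON = 9.109e-31    # kg

def electron_current(v_probe, v_plasma, area, populations):
    """Sum the thermal electron flux of each Maxwellian population, reduced by
    the Boltzmann factor for a probe biased below the plasma potential."""
    total = 0.0
    for density, temp_ev in populations:                      # m^-3, eV
        v_thermal = np.sqrt(E_CHARGE * temp_ev / (2.0 * np.pi * M_ELECTRON))
        retard = np.exp(min(0.0, (v_probe - v_plasma) / temp_ev))
        total += E_CHARGE * density * v_thermal * area * retard
    return total

populations = [(1e18, 2.0), (1e17, 10.0)]    # hypothetical bulk + hot tail (n, Te)
for dv in (-15.0, -5.0, -1.0):               # probe bias relative to plasma potential (V)
    i = electron_current(dv, 0.0, area=1e-6, populations=populations)
    print(f"V - Vp = {dv:5.1f} V : I_e = {i:.3e} A")
```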

  10. Comment on ``Unraveling the Causes of Radiation Belt Enhancements''

    NASA Astrophysics Data System (ADS)

    Campbell, Wallace H.

    2008-09-01

    The excellent article by M. W. Liemohn and A. A. Chan on the radiation belts (see Eos, 88(42), 16 October 2007) is misleading in its implication that the disturbance storm-time (Dst) index is an indicator of a magnetospheric ring current. That index is formed from an average of magnetic data from three or four low-latitude stations that have been fallaciously ``adjusted'' to a magnetic equatorial location under the 1960's assumption [Sugiura, 1964] that the fields arrive from the growth and decay of a giant ring of current in the magnetosphere. In truth, the index has a negative lognormal form [Campbell, 1996; Yago and Kamide, 2003] as a result of its composition from numerous negative ionospheric and magnetospheric disturbance field sources, each having normal field amplitude distributions [Campbell, 2004]. Some partial ring currents [Lui et al., 1987] and their associated field-aligned currents, as well as major ionospheric currents flowing from the auroral zone to equatorial latitudes, are the main contributors to the Dst index. No full magnetospheric ring of currents is involved, despite its false name (``Equatorial Dst Ring Current Index'') given by the index suppliers, the Geomagnetism Laboratory at Kyoto University, Japan.

  11. MEMBRANE POTENTIAL OF THE SQUID GIANT AXON DURING CURRENT FLOW

    PubMed Central

    Cole, Kenneth S.; Curtis, Howard J.

    1941-01-01

    The squid giant axon was placed in a shallow narrow trough and current was sent in at two electrodes in opposite sides of the trough and out at a third electrode several centimeters away. The potential difference across the membrane was measured between an inside fine capillary electrode with its tip in the axoplasm between the pair of polarizing electrodes, and an outside capillary electrode with its tip flush with the surface of one polarizing electrode. The initial transient was roughly exponential at the anode make and damped oscillatory at the sub-threshold cathode make with the action potential arising from the first maximum when threshold was reached. The constant change of membrane potential, after the initial transient, was measured as a function of the total polarizing current and from these data the membrane potential is obtained as a function of the membrane current density. The absolute value of the resting membrane resistance approached at low polarizing currents is about 23 ohm cm.2. This low value is considered to be a result of the puncture of the axon. The membrane was found to be an excellent rectifier with a ratio of about one hundred between the high resistance at the anode and the low resistance at the cathode for the current range investigated. On the assumption that the membrane conductance is a measure of its ion permeability, these experiments show an increase of ion permeability under a cathode and a decrease under an anode. PMID:19873234

  12. Neural models on temperature regulation for cold-stressed animals

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.

    1975-01-01

    The present review evaluates several assumptions common to a variety of current models for thermoregulation in cold-stressed animals. Three areas covered by the models are discussed: signals to and from the central nervous system (CNS), portions of the CNS involved, and the arrangement of neurons within networks. Assumptions in each of these categories are considered. The evaluation of the models is based on the experimental foundations of the assumptions. Regions of the nervous system concerned here include the hypothalamus, the skin, the spinal cord, the hippocampus, and the septal area of the brain.

  13. Modelling rotational and cyclical spectral solar irradiance variations

    NASA Astrophysics Data System (ADS)

    Unruh, Yvonne

    Solar irradiance changes are highly wavelength dependent: solar-cycle variations in the UV can be on the order of tens of percent, while changes in the visible are typically only of the order of one or two permille. With the launch of a number of instruments to measure spectral solar irradiance, we are now, for the first time, in a good position to explore the changing solar irradiance over a large range of wavelengths and to test our irradiance models as well as some of their underlying assumptions. I will introduce some of the current modelling approaches and present model-data comparisons, using the SATIRE irradiance model and SORCE/SIM measurements as an example. I will conclude by highlighting a number of outstanding questions regarding the modelling of spectral irradiance and current approaches to address these.

  14. Modelling of thermal stresses in bearing steel structure generated by electrical current impulses

    NASA Astrophysics Data System (ADS)

    Birjukovs, M.; Jakovics, A.; Holweger, W.

    2018-05-01

    This work studies one particular candidate mechanism for white etching crack (WEC) initiation in wind turbine gearbox bearings: discharge current impulses flowing through the bearing steel, with the associated thermal stresses and material fatigue. Using data and results from previously published works, the authors develop a series of models that are utilized to simulate these processes under various conditions and local microstructure configurations, as well as to verify the results of the previous numerical studies. The presented models show that the resulting stresses are several orders of magnitude below the fatigue limit/yield strength for the parameters used herein. Results and analysis of the models provided by Scepanskis et al. also indicate that certain effects predicted in their previous work resulted from a physically unfounded assumption about material thermodynamic properties and from numerical model implementation issues.

  15. Approximate calculation of multispar cantilever and semicantilever wings with parallel ribs under direct and indirect loading

    NASA Technical Reports Server (NTRS)

    Sanger, Eugen

    1932-01-01

    A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.

  16. Single Cell Genomics: Approaches and Utility in Immunology

    PubMed Central

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-01-01

    Single cell genomics offers powerful tools for studying lymphocytes, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population-level. Advances in computer science and single cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single cell RNA-seq data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. PMID:28094102

  17. Hall effects on peristalsis of boron nitride-ethylene glycol nanofluid with temperature dependent thermal conductivity

    NASA Astrophysics Data System (ADS)

    Abbasi, F. M.; Gul, Maimoona; Shehzad, S. A.

    2018-05-01

    Current study provides a comprehensive numerical investigation of the peristaltic transport of boron nitride-ethylene glycol nanofluid through a symmetric channel in presence of magnetic field. Significant effects of Brownian motion and thermophoresis have been included in the energy equation. Hall and Ohmic heating effects are also taken into consideration. Resulting system of non-linear equations is solved numerically using NDSolve in Mathematica. Expressions for velocity, temperature, concentration and streamlines are derived and plotted under the assumption of long wavelength and low Reynolds number. Influence of various parameters on heat and mass transfer rates have been discussed with the help of bar charts.

  18. Reliability of Children’s Testimony in the Era of Developmental Reversals

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.

    2012-01-01

    A hoary assumption of the law is that children are more prone to false-memory reports than adults, and hence, their testimony is less reliable than adults’. Since the 1980s, that assumption has been buttressed by numerous studies that detected declines in false memory between early childhood and young adulthood under controlled conditions. Fuzzy-trace theory predicted reversals of this standard developmental pattern in circumstances that are directly relevant to testimony because they involve using the gist of experience to remember events. That prediction has been investigated during the past decade, and a large number of experiments have been published in which false memories have indeed been found to increase between early childhood and young adulthood. Further, experimentation has tied age increases in false memory to improvements in children’s memory for semantic gist. According to current scientific evidence, the principle that children’s testimony is necessarily more infected with false memories than adults’ and that, other things being equal, juries should regard adults’ testimony as necessarily more faithful to actual events is untenable. PMID:23139439

  19. Time evolution of predictability of epidemics on networks.

    PubMed

    Holme, Petter; Takaguchi, Taro

    2015-04-01

    Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information-i.e., knowing the state of each individual with respect to the disease-the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.
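
    As an illustration of the quantity discussed above (the spread of possible outcomes that makes outbreaks hard to predict), the following sketch simulates a stochastic SIR process many times on a random contact network and reports the dispersion of final outbreak sizes. The network model, parameter values, and one-step infectious period are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: diversity of final outbreak sizes for stochastic SIR on a random network.
import random
from statistics import pstdev

def er_graph(n, p, rng):
    """Adjacency lists of an Erdos-Renyi G(n, p) random graph."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def outbreak_size(adj, beta, rng):
    """Discrete-time SIR from one random seed; infectious nodes recover after one step."""
    status = ["S"] * len(adj)
    seed = rng.randrange(len(adj))
    status[seed] = "I"
    infectious, recovered = [seed], 0
    while infectious:
        new_inf = []
        for i in infectious:
            for j in adj[i]:
                if status[j] == "S" and rng.random() < beta:
                    status[j] = "I"
                    new_inf.append(j)
            status[i] = "R"
            recovered += 1
        infectious = new_inf
    return recovered

rng = random.Random(1)
adj = er_graph(400, 0.02, rng)                      # assumed network size and density
sizes = [outbreak_size(adj, 0.3, rng) for _ in range(200)]
print(f"mean outbreak size: {sum(sizes)/len(sizes):.1f}, spread (std): {pstdev(sizes):.1f}")
```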

  20. Is DNA a worm-like chain in Couette flow? In search of persistence length, a critical review.

    PubMed

    Rittman, Martyn; Gilroy, Emma; Koohya, Hashem; Rodger, Alison; Richards, Adair

    2009-01-01

    Persistence length is the foremost measure of DNA flexibility. Its origins lie in polymer theory, which was adapted for DNA following the determination of the B-DNA structure in 1953. There is no single definition of persistence length in use, and the links between published definitions rest on assumptions which may, or may not, be clearly stated. DNA flexibility is affected by local ionic strength, solvent environment, bound ligands and intrinsic sequence-dependent flexibility. This article is a review of persistence length providing a mathematical treatment of the relationships between four definitions of persistence length: correlation, Kuhn length, bending, and curvature. Persistence length has been measured using various microscopy, force-extension and solution methods such as linear dichroism and transient electric birefringence. For each experimental method a model of DNA is required to interpret the data. The importance of understanding the underlying models, along with the assumptions required by each definition to determine a value of persistence length, is highlighted for linear dichroism data, where it transpires that no model is currently available for long DNA or medium to high shear rate experiments.
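
    For readers who want the standard relationships behind the four definitions reviewed above, the worm-like-chain identities below are generic textbook results rather than formulas taken from this article; P denotes persistence length, b the Kuhn length, B the bending stiffness, and L the contour length.

```latex
% Standard worm-like-chain relations between common definitions of persistence length P
\begin{align}
  \langle \hat{t}(s)\cdot\hat{t}(s+\Delta s)\rangle &= e^{-\Delta s/P}
      && \text{(tangent-correlation definition)}\\
  b &= 2P
      && \text{(Kuhn segment length, long-chain limit)}\\
  P &= \frac{B}{k_B T}
      && \text{(bending-stiffness definition)}\\
  \langle R^2\rangle &= 2PL - 2P^2\!\left(1-e^{-L/P}\right)
      && \text{(mean-square end-to-end distance)}
\end{align}
```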

  1. Disambiguating brain functional connectivity.

    PubMed

    Duff, Eugene P; Makin, Tamar; Cottaar, Michiel; Smith, Stephen M; Woolrich, Mark W

    2018-06-01

    Functional connectivity (FC) analyses of correlations of neural activity are used extensively in neuroimaging and electrophysiology to gain insights into neural interactions. However, analyses assessing changes in correlation fail to distinguish effects produced by sources as different as changes in neural signal amplitudes or noise levels. This ambiguity substantially diminishes the value of FC for inferring system properties and clinical states. Network modelling approaches may avoid ambiguities, but require specific assumptions. We present an enhancement to FC analysis with improved specificity of inferences, minimal assumptions and no reduction in flexibility. The Additive Signal Change (ASC) approach characterizes FC changes into certain prevalent classes of signal change that involve the input of additional signal to existing activity. With FMRI data, the approach reveals a rich diversity of signal changes underlying measured changes in FC, suggesting that it could clarify our current understanding of FC changes in many contexts. The ASC method can also be used to disambiguate other measures of dependency, such as regression and coherence, providing a flexible tool for the analysis of neural data. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Reviewed approach to defining the Active Interlock Envelope for Front End ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Shaftan, T.

    To protect the NSLS-II Storage Ring (SR) components from damage from synchrotron radiation produced by insertion devices (IDs), the Active Interlock (AI) keeps the electron beam within a safe envelope (a.k.a. the Active Interlock Envelope or AIE) in the transverse phase space. The beamline Front Ends (FEs) are designed under the assumption that above a certain beam current (typically 2 mA) the ID synchrotron radiation (IDSR) fan is produced by the interlocked e-beam. These assumptions also define how the ray tracing for FEs is done. To simplify the FE ray tracing for a typical uncanted ID, it was decided to provide the Mechanical Engineering group with a single set of numbers (x,x’,y,y’) for the AIE at the center of the long (or short) ID straight section. Such a unified approach to the design of the beamline Front Ends will accelerate the design process and save valuable human resources. In this paper we describe our new approach to defining the AI envelope and provide the resulting numbers required for the design of the typical Front End.

  3. Time evolution of predictability of epidemics on networks

    NASA Astrophysics Data System (ADS)

    Holme, Petter; Takaguchi, Taro

    2015-04-01

    Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information—i.e., knowing the state of each individual with respect to the disease—the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.

  4. My self as an other: on autoimmunity and "other" paradoxes.

    PubMed

    Cohen, E

    2004-06-01

    The rubric autoimmunity currently encompasses sixty to seventy diverse illnesses which affect many of the tissues of the human body. Western medical practice asserts that the crisis known as autoimmune disease arises when a biological organism compromises its own integrity by misrecognising parts of itself as other than itself and then seeks to eliminate these unrecognised and hence antagonistic aspects of itself. That is, autoimmune illnesses seem to manifest the contradictory and sometimes deadly proposition that the "identity": body/self both is and is not "itself". Based on the assumption that under normal circumstances "the self" ought to coincide naturally with "the body"-or at the very least the self ought to inhabit the living location of the body more or less unproblematically-this scientific paradigm depicts autoimmune illness as a vital paradox. Yet for those of us who have lived through the experience of an autoimmune crisis, the living paradox that we embody may also lead us to question the basis upon which these medical assumptions rest. This essay raises some of these questions.

  5. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
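
    As a toy illustration of the genomic-prediction step mentioned above, the sketch below fits ridge-regression (RR-BLUP-style) marker effects on simulated genotypes and phenotypes; the marker counts, effect sizes, and shrinkage parameter are arbitrary assumptions chosen only to make the example run, not values from the review.

```python
# Toy RR-BLUP-style genomic prediction: shrink marker effects with a ridge penalty,
# then score genomic estimated breeding values (GEBVs). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_ind, n_mark = 200, 500
Z = rng.integers(0, 3, size=(n_ind, n_mark)).astype(float)   # 0/1/2 genotype codes
true_u = rng.normal(0.0, 0.05, size=n_mark)                   # simulated marker effects
y = Z @ true_u + rng.normal(0.0, 1.0, size=n_ind)             # phenotype = genetic value + noise

lam = 100.0                                                    # shrinkage; tied to heritability in practice
Zc = Z - Z.mean(axis=0)                                        # centre marker columns
u_hat = np.linalg.solve(Zc.T @ Zc + lam * np.eye(n_mark), Zc.T @ (y - y.mean()))
gebv = Zc @ u_hat

print("correlation(true genetic value, GEBV):",
      round(float(np.corrcoef(Z @ true_u, gebv)[0, 1]), 2))
```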

  6. Intensity - Duration - Frequency Curves for U.S. Cities in a Warming Climate

    NASA Astrophysics Data System (ADS)

    Ragno, Elisa; AghaKouchak, Amir; Love, Charlotte; Vahedifard, Farshid; Cheng, Linyin; Lima, Carlos

    2017-04-01

    Current infrastructure design procedures rely on the use of Intensity - Duration - Frequency (IDF) curves retrieved under the assumption of temporal stationarity, meaning that occurrences of extreme events are expected to be time invariant. However, numerous studies have observed more severe extreme events over time. Hence, the stationarity assumption for extreme analysis may not be appropriate in a warming climate. This issue raises concerns regarding the safety and resilience of infrastructures and natural slopes. Here we employ daily precipitation data from historical and projected (RCP 8.5) CMIP5 runs to investigate IDF curves of 14 urban areas across the United States. We first statistically assess changes in precipitation extremes using an energy-based test for equal distributions. Then, through a Bayesian inference approach for stationary and non-stationary extreme value analysis, we provide updated IDF curves based on future climatic model projections. We show that, based on CMIP5 simulations, U.S. cities may experience extreme precipitation events up to 20% more intense and twice as frequent, relative to historical records, despite the expectation of unchanged annual mean precipitation.
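
    A minimal, stationary counterpart of the analysis described above is sketched below: fit a generalized extreme value (GEV) distribution to annual-maximum daily precipitation and read off return levels. The data are synthetic and the fit is frequentist, so this is only a stand-in for the Bayesian non-stationary procedure used in the study.

```python
# Hedged sketch: stationary GEV fit to synthetic annual-maximum daily precipitation (mm/day),
# then T-year return levels. A non-stationary version would let loc/scale depend on time.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max = genextreme.rvs(c=-0.1, loc=40.0, scale=10.0, size=60, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)
for T in (2, 10, 25, 100):
    level = genextreme.isf(1.0 / T, c, loc, scale)   # exceeded once in T years on average
    print(f"{T:>3}-year return level: {level:.1f} mm/day")
```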

  7. Understanding the relationship between repetition priming and mere exposure.

    PubMed

    Butler, Laurie T; Berry, Dianne C

    2004-11-01

    Over the last two decades interest in implicit memory, most notably repetition priming, has grown considerably. During the same period, research has also focused on the mere exposure effect. Although the two areas have developed relatively independently, a number of studies have described the mere exposure effect as an example of implicit memory. Tacit in these comparisons is the assumption that the effect is, more specifically, a demonstration of repetition priming. Having noted that this assumption has attracted relatively little attention, this paper reviews current evidence and shows that it is by no means conclusive. Although some evidence is suggestive of a common underlying mechanism, even a modified repetition priming (perceptual fluency/attribution) framework cannot accommodate all of the differences between the two phenomena. Notwithstanding this, it seems likely that a version of this theoretical framework still offers the best hope of a comprehensive explanation for the mere exposure effect and its relationship to repetition priming. As such, the paper finishes by offering some initial guidance as to ways in which the perceptual fluency/attribution framework might be extended, as well as outlining important areas for future research.

  8. [The Basic-Symptom Concept and its Influence on Current International Research on the Prediction of Psychoses].

    PubMed

    Schultze-Lutter, F

    2016-12-01

    The early detection of psychoses has become increasingly relevant in research and clinical practice. Next to the ultra-high risk (UHR) approach, which targets an immediate risk of developing frank psychosis, the basic symptom approach, which targets the earliest possible detection of the developing disorder, is being increasingly used worldwide. The present review gives an introduction to the development and basic assumptions of the basic symptom concept, summarizes the results of studies on the specificity of basic symptoms for psychoses in different age groups as well as of studies on their psychosis-predictive value, and gives an outlook on future results. Moreover, a brief introduction is given to the first recent imaging studies, which support one of the main assumptions of the basic symptom concept, i.e., that basic symptoms are the most immediate phenomenological expression of the cerebral aberrations underlying the development of psychosis. From this, it is concluded that basic symptoms might be able to provide important information for future neurobiological research on the etiopathology of psychoses. © Georg Thieme Verlag KG Stuttgart · New York.

  9. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications, accumulating a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that a program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.

  10. Fair lineups are better than biased lineups and showups, but not because they increase underlying discriminability.

    PubMed

    Smith, Andrew M; Wells, Gary L; Lindsay, R C L; Penrod, Steven D

    2017-04-01

    Receiver Operating Characteristic (ROC) analysis has recently come in vogue for assessing the underlying discriminability and the applied utility of lineup procedures. Two primary assumptions underlie recommendations that ROC analysis be used to assess the applied utility of lineup procedures: (a) ROC analysis of lineups measures underlying discriminability, and (b) the procedure that produces superior underlying discriminability produces superior applied utility. These same assumptions underlie a recently derived diagnostic-feature detection theory, a theory of discriminability, intended to explain recent patterns observed in ROC comparisons of lineups. We demonstrate, however, that these assumptions are incorrect when ROC analysis is applied to lineups. We also demonstrate that a structural phenomenon of lineups, differential filler siphoning, and not the psychological phenomenon of diagnostic-feature detection, explains why lineups are superior to showups and why fair lineups are superior to biased lineups. In the process of our proofs, we show that computational simulations have assumed, unrealistically, that all witnesses share exactly the same decision criteria. When criterial variance is included in computational models, differential filler siphoning emerges. The result proves dissociation between ROC curves and underlying discriminability: Higher ROC curves for lineups than for showups and for fair than for biased lineups despite no increase in underlying discriminability. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but batch marks and lack of secondary studies made it difficult to test Jolly-Seber assumptions, necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
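
    For orientation, the sketch below computes a simple two-sample Chapman (modified Lincoln-Petersen) estimate with its standard error. It is a simplified stand-in for, not a reproduction of, the multi-sample Jolly-Seber/POPAN analysis described above, and the counts are hypothetical.

```python
# Chapman-modified Lincoln-Petersen mark-recapture sketch (hypothetical counts).
# n1: fish marked in sample 1; n2: fish captured in sample 2; m2: marked recaptures.
import math

def chapman_estimate(n1, n2, m2):
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, math.sqrt(var)

n_hat, se = chapman_estimate(n1=350, n2=410, m2=42)
print(f"estimated spawners: {n_hat:.0f} (SE {se:.0f})")
```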

  12. On the Use of Rank Tests and Estimates in the Linear Model.

    DTIC Science & Technology

    1982-06-01

    assumption A5, McKean and Hettmansperger (1976) show that τ̂ ≈ (W(N−c) − W(c+1)) / (2 Z_{α/2}) (14), where 2 Z_{α/2} is the 1−α interpercentile range of the standard ... r(.75n) − r(.25n)) (13). The window width h incorporates a resistant estimate of scale, the interquartile range of the residuals, and a normalizing ... an alternative estimate of τ is available with the additional assumption of symmetry of the error distribution. ASSUMPTION A5: Suppose the underlying error

  13. Fourier's law of heat conduction: quantum mechanical master equation analysis.

    PubMed

    Wu, Lian-Ao; Segal, Dvira

    2008-06-01

    We derive the macroscopic Fourier's Law of heat conduction from the exact gain-loss time convolutionless quantum master equation under three assumptions for the interaction kernel. To second order in the interaction, we show that the first two assumptions are natural results of the long time limit. The third assumption can be satisfied by a family of interactions consisting of an exchange effect. The pure exchange model directly leads to energy diffusion in a weakly coupled spin-1/2 chain.
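
    For reference, the macroscopic law derived in the abstract above has the standard form below; the notation is generic (κ thermal conductivity, c volumetric heat capacity) and is not taken from the paper.

```latex
% Fourier's law and the diffusion equation it implies for uniform conductivity
\begin{equation}
  \mathbf{J}(\mathbf{r},t) = -\kappa\,\nabla T(\mathbf{r},t),
  \qquad
  c\,\partial_t T = -\nabla\!\cdot\!\mathbf{J} = \kappa\,\nabla^{2} T ,
\end{equation}
```

    so that energy spreads diffusively through the chain rather than ballistically.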

  14. You and the Civil Air Patrol.

    DTIC Science & Technology

    1988-04-01

    currently holds the rank of Major in CAP. He was the Director of Senior Programs for the National Capital Wing for three years. In this capacity, he... [table-of-contents fragment: Assumptions and Limitations; Previous Studies; Chapter Two] ...limited, thereby contributing to the problem. ASSUMPTIONS AND LIMITATIONS: The time limitation and limited scope of this study prevented surveying all the

  15. Ontology Extraction Tools: An Empirical Study with Educators

    ERIC Educational Resources Information Center

    Hatala, M.; Gasevic, D.; Siadaty, M.; Jovanovic, J.; Torniai, C.

    2012-01-01

    Recent research in Technology-Enhanced Learning (TEL) demonstrated several important benefits that semantic technologies can bring to the TEL domain. An underlying assumption for most of these research efforts is the existence of a domain ontology. The second unspoken assumption follows that educators will build domain ontologies for their…

  16. Extracurricular Business Planning Competitions: Challenging the Assumptions

    ERIC Educational Resources Information Center

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  17. Hybrid Approaches and Industrial Applications of Pattern Recognition,

    DTIC Science & Technology

    1980-10-01

    emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will... sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the

  18. Diagnostic tools for nearest neighbors techniques when used with satellite imagery

    Treesearch

    Ronald E. McRoberts

    2009-01-01

    Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....

  19. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    ERIC Educational Resources Information Center

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  20. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  1. Shattering the Glass Ceiling: Women in School Administration.

    ERIC Educational Resources Information Center

    Patterson, Jean A.

    Consistent with national trends, white males hold the majority of public school administrator positions in North Carolina. This paper examines the barriers and underlying assumptions that have prevented women and minorities from gaining access to high-level positions in educational administration. These include: (1) the assumption that leadership…

  2. Transferring Goods or Splitting a Resource Pool

    ERIC Educational Resources Information Center

    Dijkstra, Jacob; Van Assen, Marcel A. L. M.

    2008-01-01

    We investigated the consequences for exchange outcomes of the violation of an assumption underlying most social psychological research on exchange. This assumption is that the negotiated direct exchange of commodities between two actors (pure exchange) can be validly represented as two actors splitting a fixed pool of resources (split pool…

  3. Preparing Democratic Education Leaders

    ERIC Educational Resources Information Center

    Young, Michelle D.

    2010-01-01

    Although it is common to hear people espouse the importance of education to ensuring a strong and vibrant democracy, the assumptions underlying such statements are rarely unpacked. Two of the most widespread, though not necessarily complementary, assumptions include: (1) to truly participate in a democracy, citizens must be well educated; and (2)…

  4. Commentary on Coefficient Alpha: A Cautionary Tale

    ERIC Educational Resources Information Center

    Green, Samuel B.; Yang, Yanyun

    2009-01-01

    The general use of coefficient alpha to assess reliability should be discouraged on a number of grounds. The assumptions underlying coefficient alpha are unlikely to hold in practice, and violation of these assumptions can result in nontrivial negative or positive bias. Structural equation modeling was discussed as an informative process both to…

  5. Timber value—a matter of choice: a study of how end use assumptions affect timber values.

    Treesearch

    John H. Beuter

    1971-01-01

    The relationship between estimated timber values and actual timber prices is discussed. Timber values are related to how, where, and when the timber is used. An analysis demonstrates the relative values of a typical Douglas-fir stand under assumptions about timber use.

  6. Mexican-American Cultural Assumptions and Implications.

    ERIC Educational Resources Information Center

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  7. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm(2) current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.
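
    A minimal sketch of the response-surface step described above is given below: fit a quadratic model to coded factors by ordinary least squares and locate its stationary point. The design points and responses are synthetic, and only two factors are used for brevity (the study optimized four), so this is an illustration rather than the paper's procedure.

```python
# Fit y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 to synthetic data,
# then find the stationary point of the fitted quadratic (candidate optimum, coded units).
import numpy as np

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)                  # coded factor levels
y = 90 - 5 * x1**2 - 3 * x2**2 + 2 * x1 * x2 + rng.normal(0, 0.5, 30)    # synthetic % removal

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", beta.round(2))

b = beta[1:3]
B = np.array([[beta[3], beta[5] / 2], [beta[5] / 2, beta[4]]])
x_opt = np.linalg.solve(-2 * B, b)                                        # solves grad = b + 2Bx = 0
print("stationary point (coded units):", x_opt.round(2))
```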

  8. Sex allocation and investment into pre- and post-copulatory traits in simultaneous hermaphrodites: the role of polyandry and local sperm competition.

    PubMed

    Schärer, Lukas; Pen, Ido

    2013-03-05

    Sex allocation theory predicts the optimal allocation to male and female reproduction in sexual organisms. In animals, most work on sex allocation has focused on species with separate sexes and our understanding of simultaneous hermaphrodites is patchier. Recent theory predicts that sex allocation in simultaneous hermaphrodites should strongly be affected by post-copulatory sexual selection, while the role of pre-copulatory sexual selection is much less clear. Here, we review sex allocation and sexual selection theory for simultaneous hermaphrodites, and identify several strong and potentially unwarranted assumptions. We then present a model that treats allocation to sexually selected traits as components of sex allocation and explore patterns of allocation when some of these assumptions are relaxed. For example, when investment into a male sexually selected trait leads to skews in sperm competition, causing local sperm competition, this is expected to lead to a reduced allocation to sperm production. We conclude that understanding the evolution of sex allocation in simultaneous hermaphrodites requires detailed knowledge of the different sexual selection processes and their relative importance. However, little is currently known quantitatively about sexual selection in simultaneous hermaphrodites, about what the underlying traits are, and about what drives and constrains their evolution. Future work should therefore aim at quantifying sexual selection and identifying the underlying traits along the pre- to post-copulatory axis.

  9. Sex allocation and investment into pre- and post-copulatory traits in simultaneous hermaphrodites: the role of polyandry and local sperm competition

    PubMed Central

    Schärer, Lukas; Pen, Ido

    2013-01-01

    Sex allocation theory predicts the optimal allocation to male and female reproduction in sexual organisms. In animals, most work on sex allocation has focused on species with separate sexes and our understanding of simultaneous hermaphrodites is patchier. Recent theory predicts that sex allocation in simultaneous hermaphrodites should strongly be affected by post-copulatory sexual selection, while the role of pre-copulatory sexual selection is much less clear. Here, we review sex allocation and sexual selection theory for simultaneous hermaphrodites, and identify several strong and potentially unwarranted assumptions. We then present a model that treats allocation to sexually selected traits as components of sex allocation and explore patterns of allocation when some of these assumptions are relaxed. For example, when investment into a male sexually selected trait leads to skews in sperm competition, causing local sperm competition, this is expected to lead to a reduced allocation to sperm production. We conclude that understanding the evolution of sex allocation in simultaneous hermaphrodites requires detailed knowledge of the different sexual selection processes and their relative importance. However, little is currently known quantitatively about sexual selection in simultaneous hermaphrodites, about what the underlying traits are, and about what drives and constrains their evolution. Future work should therefore aim at quantifying sexual selection and identifying the underlying traits along the pre- to post-copulatory axis. PMID:23339243

  10. On Impedance Spectroscopy of Supercapacitors

    NASA Astrophysics Data System (ADS)

    Uchaikin, V. V.; Sibatov, R. T.; Ambrozevich, A. S.

    2016-10-01

    Supercapacitors are often characterized by responses measured by methods of impedance spectroscopy. In the frequency domain these responses have the form of power-law functions or their linear combinations. The inverse Fourier transform leads to relaxation equations with integro-differential operators of fractional order, under the assumption that the frequency response is independent of the working voltage. To compare the long-term relaxation kinetics predicted by these equations with the observed kinetics, the charging-discharging of supercapacitors (with nominal capacitances of 0.22, 0.47, and 1.0 F) has been studied by recording the current response to a step voltage signal. It is established that the reaction of the devices under study to variations of the charging regime disagrees with the model of a homogeneous linear response. It is demonstrated that the relaxation is well described by a fractional stretched exponential.
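
    Two standard functional forms often used in this context are reproduced below for orientation: a constant-phase (fractional) impedance element in the frequency domain and a stretched-exponential relaxation current in the time domain. The symbols are generic and are not the paper's fitted parameters.

```latex
% Constant-phase element and stretched-exponential relaxation (generic forms)
\begin{equation}
  Z(\omega) = \frac{1}{C_\alpha\,(i\omega)^{\alpha}}, \quad 0<\alpha\le 1,
  \qquad
  I(t) = I_0 \exp\!\left[-(t/\tau)^{\beta}\right], \quad 0<\beta\le 1 .
\end{equation}
```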

  11. Determining the near-surface current profile from measurements of the wave dispersion relation

    NASA Astrophysics Data System (ADS)

    Smeltzer, Benjamin; Maxwell, Peter; Aesøy, Eirik; Ellingsen, Simen

    2017-11-01

    The current-induced Doppler shifts of waves can yield information about the background mean flow, providing an attractive method of inferring the current profile in the upper layer of the ocean. We present measurements of waves propagating on shear currents in a laboratory water channel, as well as theoretical investigations of inversion techniques for determining the vertical current structure. Spatial and temporal measurements of the free surface profile obtained using a synthetic Schlieren method are analyzed to determine the wave dispersion relation and Doppler shifts as a function of wavelength. The vertical current profile can then be inferred from the Doppler shifts using an inversion algorithm. Most existing algorithms rely on a priori assumptions about the shape of the current profile, and developing a method that uses less stringent assumptions is a focus of this study, allowing for measurement of more general current profiles. The accuracy of current inversion algorithms is evaluated by comparison to measurements of the mean flow profile from particle image velocimetry (PIV), and a discussion of the sensitivity to errors in the Doppler shifts is presented.
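
    One widely used weak-shear approximation (often attributed to Stewart and Joy) relates the Doppler-shifted phase speed at wavenumber k to an exponentially depth-weighted average of the current profile U(z) for deep-water waves; the inversion problem mentioned above then amounts to recovering U(z) from these weighted averages measured at many wavenumbers. The formula below is that generic approximation, not necessarily the relation used in this study.

```latex
% Deep-water weak-current approximation: measured phase speed vs. depth-weighted current
\begin{equation}
  \tilde{c}(k) \;\approx\; c_0(k) + 2k \int_{-\infty}^{0} U(z)\, e^{2kz}\, \mathrm{d}z ,
  \qquad c_0(k) = \sqrt{g/k} .
\end{equation}
```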

  12. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.

  13. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.

    In this research, collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies the use of approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. Lastly, this improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  14. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    NASA Astrophysics Data System (ADS)

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Herfindal, J. L.; Howell, E. C.; Knowlton, S. F.; Maurer, D. A.; Traverso, P. J.

    2018-01-01

    Collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies the use of approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. This improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  15. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    DOE PAGES

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.; ...

    2018-01-31

    In this research, collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies the use of approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. Lastly, this improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  16. Potential of wind power projects under the Clean Development Mechanism in India

    PubMed Central

    Purohit, Pallav; Michaelowa, Axel

    2007-01-01

    Background: So far, the cumulative installed capacity of wind power projects in India is far below their gross potential (≤ 15%), despite a very high level of policy support, tax benefits, long-term financing schemes, etc., for more than 10 years. One of the major barriers is the high cost of investment in these systems. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at lowest cost that also promotes sustainable development in the host country. Wind power projects could be of interest under the CDM because they directly displace greenhouse gas emissions while contributing to sustainable rural development, if developed correctly. Results: Our estimates indicate that there is a vast theoretical potential of CO2 mitigation by the use of wind energy in India. The annual potential Certified Emissions Reductions (CERs) of wind power projects in India could theoretically reach 86 million. Under more realistic assumptions about diffusion of wind power projects based on past experiences with the government-run programmes, annual CER volumes by 2012 could reach 41 to 67 million and 78 to 83 million by 2020. Conclusion: The projections based on the past diffusion trend indicate that in India, even with highly favorable assumptions, the dissemination of wind power projects is not likely to reach its maximum estimated potential in another 15 years. CDM could help to achieve the maximum utilization potential more rapidly as compared to the current diffusion trend if supportive policies are introduced. PMID:17663772

  17. Comparison of Two Methods for Detecting Alternative Splice Variants Using GeneChip® Exon Arrays

    PubMed Central

    Fan, Wenhong; Stirewalt, Derek L.; Radich, Jerald P.; Zhao, Lueping

    2011-01-01

    The Affymetrix GeneChip Exon Array can be used to detect alternative splice variants. Microarray Detection of Alternative Splicing (MIDAS) and Partek® Genomics Suite (Partek® GS) are among the most popular analytical methods used to analyze exon array data. While both methods utilize statistical significance for testing, MIDAS and Partek® GS could produce somewhat different results due to different underlying assumptions. Comparing MIDAS and Partek® GS is quite difficult due to their substantially different mathematical formulations and assumptions regarding alternative splice variants. For meaningful comparison, we have used the previously published generalized probe model (GPM) which encompasses both MIDAS and Partek® GS under different assumptions. We analyzed a colon cancer exon array data set using MIDAS, Partek® GS and GPM. MIDAS and Partek® GS produced quite different sets of genes that are considered to have alternative splice variants. Further, we found that GPM produced results similar to MIDAS as well as to Partek® GS under their respective assumptions. Within the GPM, we show how discoveries relating to alternative variants can be quite different due to different assumptions. MIDAS focuses on relative changes in expression values across different exons within genes and tends to be robust but less efficient. Partek® GS, however, uses absolute expression values of individual exons within genes and tends to be more efficient but more sensitive to the presence of outliers. From our observations, we conclude that MIDAS and Partek® GS produce complementary results, and discoveries from both analyses should be considered. PMID:23675234
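
    As a simplified illustration of the kind of exon-level comparison underlying such methods (not the actual MIDAS or Partek GS algorithms), the sketch below computes a splicing index, i.e. exon-level signal normalized by gene-level signal, and tests it for group differences exon by exon. All intensities are synthetic.

```python
# Splicing-index sketch on synthetic log2 intensities: normalize each exon by the
# gene-level signal, then t-test the normalized values between two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_exons, n_a, n_b = 8, 10, 10
gene_a = rng.normal(10, 0.3, n_a)                      # gene-level log2 signal, group A
gene_b = rng.normal(10, 0.3, n_b)                      # group B
exons_a = gene_a + rng.normal(0, 0.2, (n_exons, n_a))  # exon-level signals
exons_b = gene_b + rng.normal(0, 0.2, (n_exons, n_b))
exons_b[3] -= 1.0                                      # simulate one differentially included exon

si_a = exons_a - gene_a                                # splicing index = exon - gene (log2 ratio)
si_b = exons_b - gene_b
for e in range(n_exons):
    t, p = stats.ttest_ind(si_a[e], si_b[e])
    flag = "  <- candidate alternative exon" if p < 0.01 else ""
    print(f"exon {e}: p = {p:.3g}{flag}")
```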

  18. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    PubMed Central

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at the scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
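
    The biophysical relationships summarized above can be written compactly as follows; the notation is generic (σ conductivity, Φ field potential). For a homogeneous ohmic volume conductor the current source density is the negative Laplacian of the potential scaled by σ, and the scalp surface Laplacian is its two-dimensional analogue applied to the recorded potential topography.

```latex
% Ohm's law (vector form), current conservation, and Poisson's source equation
\begin{align}
  \mathbf{J} &= -\sigma\,\nabla\Phi , \\
  C(\mathbf{r}) = \nabla\!\cdot\!\mathbf{J}
    &= -\sigma\,\nabla^{2}\Phi(\mathbf{r}) \quad (\sigma\ \text{uniform}), \\
  L_{\text{surf}}(x,y) &= \left.\frac{\partial^{2}\Phi}{\partial x^{2}}
    + \frac{\partial^{2}\Phi}{\partial y^{2}}\right|_{\text{scalp}} .
\end{align}
```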

  19. Why Are Experts Correlated? Decomposing Correlations between Judges

    ERIC Educational Resources Information Center

    Broomell, Stephen B.; Budescu, David V.

    2009-01-01

    We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative…

  20. Contexts and Pragmatics Learning: Problems and Opportunities of the Study Abroad Research

    ERIC Educational Resources Information Center

    Taguchi, Naoko

    2018-01-01

    Despite different epistemologies and assumptions, all theories in second language (L2) acquisition emphasize the centrality of context in understanding L2 acquisition. Under the assumption that language emerges from use in context, the cognitivist approach focuses on distributions and properties of input to infer both learning objects and process…

  1. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    ERIC Educational Resources Information Center

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  2. 29 CFR 4010.8 - Plan actuarial information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Assumptions for decrements other than mortality and retirement (such as turnover or disability) used to... than 25 years of service. Employee A is an active participant who is age 40 and has completed 5 years... entitled under the assumption that A works until age 58. (2) Example 2. Employee B is also an active...

  3. An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico's PROGRESA Program

    ERIC Educational Resources Information Center

    Diaz, Juan Jose; Handa, Sudhanshu

    2006-01-01

    Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…

  4. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  5. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  6. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  7. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  8. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  9. An identifiable model for informative censoring

    USGS Publications Warehouse

    Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.

    1988-01-01

    The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.

  10. Biological control agents elevate hantavirus by subsidizing deer mouse populations

    Treesearch

    Dean E. Pearson; Ragan M. Callaway

    2006-01-01

    Biological control of exotic invasive plants using exotic insects is practiced under the assumption that biological control agents are safe if they do not directly attack non-target species. We tested this assumption by evaluating the potential for two host-specific biological control agents (Urophora spp.), widely established in North America for spotted...

  11. Assumptions Underlying Curriculum Decisions in Australia: An American Perspective.

    ERIC Educational Resources Information Center

    Willis, George

    An analysis of the cultural and historical context in which curriculum decisions are made in Australia and a comparison with educational assumptions in the United States is the purpose of this paper. Methodology is based on personal teaching experience and observation in Australia. Seven factors are identified upon which curricular decisions in…

  12. Parabolic Systems with p, q-Growth: A Variational Approach

    NASA Astrophysics Data System (ADS)

    Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo

    2013-10-01

    We consider the evolution problem associated with a convex integrand $f \colon \mathbb{R}^{Nn} \to [0,\infty)$ satisfying a non-standard $p,q$-growth assumption. To establish the existence of solutions we introduce the concept of variational solutions. In contrast to weak solutions, that is, mappings $u \colon \Omega_T \to \mathbb{R}^N$ which solve $\partial_t u - \operatorname{div} Df(Du) = 0$ weakly in $\Omega_T$, variational solutions exist under a much weaker assumption on the gap $q - p$. Here, we prove the existence of variational solutions provided the integrand $f$ is strictly convex and $\tfrac{2n}{n+2} < p \le q < p+1$. These variational solutions turn out to be unique under certain mild additional assumptions on the data. Moreover, if the gap satisfies the natural stronger assumption $2 \le p \le q < p + \min\{1, \tfrac{4}{n}\}$, we show that variational solutions are actually weak solutions. This means that solutions $u$ admit the necessary higher integrability of the spatial derivative $Du$ to satisfy the parabolic system in the weak sense, that is, we prove that $u \in L^q_{\mathrm{loc}}\big(0,T; W^{1,q}_{\mathrm{loc}}(\Omega,\mathbb{R}^N)\big)$.

  13. A general method for handling missing binary outcome data in randomized controlled trials

    PubMed Central

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-01-01

    Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Participants A total of 489 and 1758 participants from two smoking cessation trials. Measurements The abstinence outcomes were obtained using telephone interviews. Findings The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
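
    As a rough illustration of the kind of sensitivity analysis described above (with hypothetical counts rather than the trial data, and a single shared sensitivity parameter), the Python sketch below varies the assumed abstinence probability among participants with missing outcomes and recomputes the intervention odds ratio; setting the parameter to zero recovers the common 'missing = smoking' analysis.

        def odds_ratio(quit_t, n_t, quit_c, n_c):
            # Odds ratio of abstinence, intervention versus control.
            p_t, p_c = quit_t / n_t, quit_c / n_c
            return (p_t / (1 - p_t)) / (p_c / (1 - p_c))

        # Hypothetical arm-level counts: observed quitters, observed totals, missing outcomes.
        obs_quit = {"intervention": 60, "control": 40}
        obs_n = {"intervention": 180, "control": 190}
        missing = {"intervention": 70, "control": 60}

        # Sensitivity parameter: assumed abstinence probability among those with missing data.
        # p_miss = 0.0 reproduces the common 'missing = smoking' assumption.
        for p_miss in (0.0, 0.1, 0.25, 0.5):
            q_i = obs_quit["intervention"] + p_miss * missing["intervention"]
            q_c = obs_quit["control"] + p_miss * missing["control"]
            n_i = obs_n["intervention"] + missing["intervention"]
            n_c = obs_n["control"] + missing["control"]
            print(f"p_miss = {p_miss:.2f}  ->  OR = {odds_ratio(q_i, n_i, q_c, n_c):.2f}")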

  14. Exploring REACH as a potential data source for characterizing ecotoxicity in life cycle assessment.

    PubMed

    Müller, Nienke; de Zwart, Dick; Hauschild, Michael; Kijko, Gaël; Fantke, Peter

    2017-02-01

    Toxicity models in life cycle impact assessment (LCIA) currently only characterize a small fraction of marketed substances, mostly because of limitations in the underlying ecotoxicity data. One approach to improve the current data situation in LCIA is to identify new data sources, such as the European Registration, Evaluation, Authorisation, and Restriction of Chemicals (REACH) database. The present study explored REACH as a potential data source for LCIA based on matching reported ecotoxicity data for substances that are currently also included in the United Nations Environment Programme/Society for Environmental Toxicology and Chemistry (UNEP/SETAC) scientific consensus model USEtox for characterizing toxicity impacts. Data are evaluated with respect to number of data points, reported reliability, and test duration, and are compared with data listed in USEtox at the level of hazardous concentration for 50% of the covered species per substance. The results emphasize differences between data available via REACH and in USEtox. The comparison of ecotoxicity data from REACH and USEtox shows potential for using REACH ecotoxicity data in LCIA toxicity characterization, but also highlights issues related to compliance of submitted data with REACH requirements as well as different assumptions underlying regulatory risk assessment under REACH versus data needed for LCIA. Thus, further research is required to address data quality, pre-processing, and applicability, before considering data submitted under REACH as a data source for use in LCIA, and also to explore additionally available data sources, published studies, and reports. Environ Toxicol Chem 2017;36:492-500. © 2016 SETAC.

  15. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    PubMed

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  16. Robustness of location estimators under t-distributions: a literature review

    NASA Astrophysics Data System (ADS)

    Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.

    2017-03-01

    The assumption of normality is commonly used in estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since the t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration, we use onion yield data that include outliers as a case study and show that the t model produces a better fit than the normal model.
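
    A minimal sketch of the robustness argument, using simulated data rather than the onion yields: the sample mean (the normal-model location estimate) is pulled toward a handful of outliers, while the maximum-likelihood location of a fitted t-distribution stays close to the bulk of the data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Clean data around a true location of 10, contaminated with a few large outliers.
        clean = rng.normal(loc=10.0, scale=1.0, size=95)
        data = np.concatenate([clean, [30.0, 32.0, 35.0, 40.0, 45.0]])

        # Normal model: the MLE of the location is the sample mean.
        normal_loc = data.mean()

        # t model: fit degrees of freedom, location, and scale by maximum likelihood.
        df, t_loc, t_scale = stats.t.fit(data)

        print(f"sample mean (normal MLE): {normal_loc:.2f}")            # pulled toward the outliers
        print(f"t-distribution location : {t_loc:.2f} (df ~ {df:.1f})")  # stays near 10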

  17. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.

  18. Regulatory assessment of chemical mixtures: Requirements, current approaches and future perspectives.

    PubMed

    Kienzler, Aude; Bopp, Stephanie K; van der Linden, Sander; Berggren, Elisabet; Worth, Andrew

    2016-10-01

    This paper reviews regulatory requirements and recent case studies to illustrate how the risk assessment (RA) of chemical mixtures is conducted, considering both the effects on human health and on the environment. A broad range of chemicals, regulations and RA methodologies are covered, in order to identify mixtures of concern, gaps in the regulatory framework, data needs, and further work to be carried out. The current and potential future use of novel tools (Adverse Outcome Pathways, in silico tools, toxicokinetic modelling, etc.) in the RA of combined effects was also reviewed. The assumptions made in the RA, predictive model specifications and the choice of toxic reference values can greatly influence the assessment outcome, and should therefore be specifically justified. Novel tools could support mixture RA mainly by providing a better understanding of the underlying mechanisms of combined effects. Nevertheless, their use is currently limited because of a lack of guidance, data, and expertise. More guidance is needed to facilitate their application. As far as the authors are aware, no prospective RA concerning chemicals related to various regulatory sectors has been performed to date, even though numerous chemicals are registered under several regulatory frameworks. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
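
    The closed-form relationship under binormality with equal variances can be checked numerically. In the sketch below (simulated values chosen only for illustration), the predicted c-statistic is the standard normal CDF evaluated at the product of the within-group standard deviation and the log-odds ratio, divided by the square root of two, and it is compared with the empirical c-statistic obtained from the Mann-Whitney U statistic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Binormal setup: X ~ N(mu0, sd) in controls and N(mu1, sd) in cases (equal variances).
        mu0, mu1, sd = 0.0, 1.0, 1.0
        controls = rng.normal(mu0, sd, size=20000)
        cases = rng.normal(mu1, sd, size=20000)

        # Implied logistic log-odds ratio per unit of X, and the closed-form prediction:
        # c = Phi(sd * log_or / sqrt(2)), i.e. the product of the SD and the log-odds ratio.
        log_or = (mu1 - mu0) / sd**2
        c_predicted = stats.norm.cdf(sd * log_or / np.sqrt(2))

        # Empirical c-statistic (area under the ROC curve) via the Mann-Whitney U statistic.
        u = stats.mannwhitneyu(cases, controls, alternative="two-sided").statistic
        c_empirical = u / (len(cases) * len(controls))

        print(f"predicted c = {c_predicted:.3f}, empirical c = {c_empirical:.3f}")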

  20. Assessment of dietary exposure in the French population to 13 selected food colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners.

    PubMed

    Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc

    2008-01-01

    The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).
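
    A purely illustrative sketch of the two food-habit scenarios (hypothetical foods, consumption amounts, use-levels, market shares and ADI, not the French survey data): under assumption 1 only a fraction of the consumed food is taken to contain the additive, while under assumption 2 every serving does, so the estimated intake and its percentage of the ADI rise accordingly.

        # Illustrative-only values: body weight, ADI, foods, consumption and use-levels
        # are all hypothetical and do not reproduce the survey data described above.
        body_weight_kg = 60.0
        adi_mg_per_kg = 0.07  # hypothetical acceptable daily intake

        # (food, daily consumption in g, additive use-level in mg/kg food, share of products containing it)
        foods = [
            ("cured meat", 50.0, 80.0, 0.6),
            ("soft drink", 200.0, 20.0, 0.3),
        ]

        def daily_intake(always_contains: bool) -> float:
            # Intake in mg per kg body weight per day under the two food-habit assumptions.
            total_mg = 0.0
            for _name, grams, use_level_mg_per_kg, share in foods:
                # Assumption 2: the consumer always picks the version containing the additive.
                fraction = 1.0 if always_contains else share
                total_mg += (grams / 1000.0) * use_level_mg_per_kg * fraction
            return total_mg / body_weight_kg

        for label, flag in [("assumption 1 (may contain)", False),
                            ("assumption 2 (always contains)", True)]:
            intake = daily_intake(flag)
            print(f"{label}: {intake:.3f} mg/kg bw/day = {100 * intake / adi_mg_per_kg:.0f}% of ADI")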

  1. The Teacher, the Physician and the Person: Exploring Causal Connections between Teaching Performance and Role Model Types Using Directed Acyclic Graphs

    PubMed Central

    Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.

    2013-01-01

    Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020

  2. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  3. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.

  4. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
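
    A simplified sketch of the comparison (simulated within-subject log(AUC) differences in place of the crossover ANOVA actually used in BE analyses): the parametric 90% CI assumes normally distributed formulation effects, the bootstrap percentile CI does not, and both are judged against the 80-125% rule after exponentiation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Simulated within-subject differences of log(AUC), test minus reference
        # (a simplification of the crossover ANOVA used in actual BE analyses).
        n = 24
        diff_log_auc = rng.normal(loc=0.05, scale=0.20, size=n)

        # Parametric 90% CI for the mean difference, assuming normality.
        mean, se = diff_log_auc.mean(), diff_log_auc.std(ddof=1) / np.sqrt(n)
        t_crit = stats.t.ppf(0.95, df=n - 1)
        ci_param = np.exp([mean - t_crit * se, mean + t_crit * se])

        # Nonparametric bootstrap percentile 90% CI of the same quantity.
        boot_means = [rng.choice(diff_log_auc, size=n, replace=True).mean() for _ in range(2000)]
        ci_boot = np.exp(np.percentile(boot_means, [5, 95]))

        print(f"parametric 90% CI of AUC ratio: {ci_param.round(3)}")
        print(f"bootstrap  90% CI of AUC ratio: {ci_boot.round(3)}")
        print("bioequivalent (80-125% rule)?", 0.80 <= ci_param[0] and ci_param[1] <= 1.25)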

  5. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  6. The cost-effectiveness of training US primary care physicians to conduct colorectal cancer screening in family medicine residency programs.

    PubMed

    Edwardson, Nicholas; Bolin, Jane N; McClellan, David A; Nash, Philip P; Helduser, Janet W

    2016-04-01

    Demand for a wide array of colorectal cancer screening strategies continues to outpace supply. One strategy to reduce this deficit is to dramatically increase the number of primary care physicians who are trained and supportive of performing office-based colonoscopies or flexible sigmoidoscopies. This study evaluates the clinical and economic implications of training primary care physicians via family medicine residency programs to offer colorectal cancer screening services as an in-office procedure. Using previously established clinical and economic assumptions from existing literature and budget data from a local grant (2013), incremental cost-effectiveness ratios are calculated that incorporate the costs of a proposed national training program and subsequent improvements in patient compliance. Sensitivity analyses are also conducted. Baseline assumptions suggest that the intervention would produce 2394 newly trained residents who could perform 71,820 additional colonoscopies or 119,700 additional flexible sigmoidoscopies after ten years. Despite high costs associated with the national training program, incremental cost-effectiveness ratios remain well below standard willingness-to-pay thresholds under base case assumptions. Interestingly, the status quo hierarchy of preferred screening strategies is disrupted by the proposed intervention. A national overhaul of family medicine residency programs offering training for colorectal cancer screening yields satisfactory incremental cost-effectiveness ratios. However, the model places high expectations on primary care physicians to improve current compliance levels in the US. Copyright © 2016 Elsevier Inc. All rights reserved.
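
    A minimal sketch of the incremental cost-effectiveness calculation, with hypothetical cost and QALY figures (only the count of additional colonoscopies is taken from the abstract): the ICER is the incremental cost of the training program plus the added screening, minus averted treatment costs, divided by the QALYs gained, and is compared against a $100,000-per-QALY willingness-to-pay threshold.

        # Minimal ICER sketch with hypothetical figures; only the 71,820 additional
        # colonoscopies over ten years is taken from the abstract above.
        training_program_cost = 15_000_000.0    # hypothetical national rollout cost
        added_screening_cost = 71_820 * 650.0   # hypothetical cost per added colonoscopy
        averted_treatment_cost = 20_000_000.0   # hypothetical savings from cancers prevented
        qalys_gained = 9_500.0                  # hypothetical QALYs from earlier detection

        incremental_cost = training_program_cost + added_screening_cost - averted_treatment_cost
        icer = incremental_cost / qalys_gained
        print(f"ICER = ${icer:,.0f} per QALY gained")
        print("below the $100,000/QALY willingness-to-pay threshold?", icer < 100_000)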

  7. Feature inference with uncertain categorization: Re-assessing Anderson's rational model.

    PubMed

    Konovalova, Elizaveta; Le Mens, Gaël

    2017-09-18

    A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
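
    A small sketch of the two inference rules being compared, with hypothetical probabilities: Anderson's rational model marginalizes the feature prediction over all candidate categories, whereas the rival model conditions only on the single most likely category.

        # Hypothetical probabilities for an object with uncertain category membership.
        p_category_given_object = {"A": 0.6, "B": 0.4}    # uncertain categorization
        p_feature_given_category = {"A": 0.8, "B": 0.2}   # within-category feature rates

        # Anderson's rational model: marginalize over all candidate categories.
        anderson = sum(p_category_given_object[c] * p_feature_given_category[c]
                       for c in p_category_given_object)

        # Rival model: condition only on the most likely category.
        most_likely = max(p_category_given_object, key=p_category_given_object.get)
        single_category = p_feature_given_category[most_likely]

        print(f"Anderson (all categories): P(feature) = {anderson:.2f}")         # 0.56
        print(f"Most-likely-category only: P(feature) = {single_category:.2f}")  # 0.80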

  8. Development and evaluation of packet video schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Hadenfeldt, A. C.

    1990-01-01

    Results reflecting the two tasks proposed for the current year, namely a feasibility study of simulating the NASA network and a study of progressive transmission schemes, are presented. The view of the NASA network, gleaned from the various technical reports made available to us, is provided. Also included is a brief overview of how the current simulator could be modified to accomplish the goal of simulating the NASA network. As the material in this section would be the basis for the actual simulation, it is important to make sure that it is an accurate reflection of the requirements on the simulator. Brief descriptions of the set of progressive transmission algorithms selected for the study are also included. The results available in the literature were obtained under a variety of different assumptions, not all of which are stated. As such, the only way to compare the efficiency and the implementational complexity of the various algorithms is to simulate them.

  9. Description of bipolar charge transport in polyethylene using a fluid model with a constant mobility: model prediction

    NASA Astrophysics Data System (ADS)

    LeRoy, S.; Segur, P.; Teyssedre, G.; Laurent, C.

    2004-01-01

    We present a conduction model aimed at describing bipolar transport and space charge phenomena in low density polyethylene under dc stress. In the first part we recall the basic requirements for the description of charge transport and charge storage in disordered media with emphasis on the case of polyethylene. A quick review of available conduction models is presented and our approach is compared with these models. Then, the bases of the model are described and related assumptions are discussed. Finally, results on external current, trapped and free space charge distributions, field distribution and recombination rate are presented and discussed, considering a constant dc voltage, a step-increase of the voltage, and a polarization-depolarization protocol for the applied voltage. It is shown that the model is able to describe the general features reported for external current, electroluminescence and charge distribution in polyethylene.

  10. Characteristics and instabilities of mode-locked quantum-dot diode lasers.

    PubMed

    Li, Yan; Lester, Luke F; Chang, Derek; Langrock, Carsten; Fejer, M M; Kane, Daniel J

    2013-04-08

    Current pulse measurement methods have proven inadequate to fully understand the characteristics of passively mode-locked quantum-dot diode lasers. These devices are very difficult to characterize because of their low peak powers, high bandwidth, large time-bandwidth product, and large timing jitter. In this paper, we discuss the origin for the inadequacies of current pulse measurement techniques while presenting new ways of examining frequency-resolved optical gating (FROG) data to provide insight into the operation of these devices. Under the assumptions of a partial coherence model for the pulsed laser, it is shown that simultaneous time-frequency characterization is a necessary and sufficient condition for characterization of mode-locking. Full pulse characterization of quantum dot passively mode-locked lasers (QD MLLs) was done using FROG in a collinear configuration using an aperiodically poled lithium niobate waveguide-based FROG pulse measurement system.

  11. The emergence of Electronic Democracy as an auxiliary to representational democracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, R.E.

    1994-06-01

    Electronic democracy as a system is defined, and the ways in which it may affect current systems of government are addressed. Electronic democracy's achievements thus far in the United States at the community level are surveyed, and prospects for its expansion to state, national, and international systems are summarized. Central problems of electronic democracy are described, and its feasibility assessed (including safeguards against, and vulnerabilities to, sabotage and abuse); the ways in which new and ongoing methods for information dissemination pose risks to current systems of government are discussed. One of electronic democracy's underlying assumptions is challenged, namely that its direct, instant polling capability necessarily improves or refines governance. Further support is offered for the assertion that computer systems/networks should be used primarily to educate citizens and enhance awareness of issues, rather than as frameworks for direct decision making.

  12. Urban land teleconnections and sustainability

    PubMed Central

    Seto, Karen C.; Reenberg, Anette; Boone, Christopher G.; Fragkias, Michail; Haase, Dagmar; Langanke, Tobias; Marcotullio, Peter; Munroe, Darla K.; Olah, Branislav; Simon, David

    2012-01-01

    This paper introduces urban land teleconnections as a conceptual framework that explicitly links land changes to underlying urbanization dynamics. We illustrate how three key themes that are currently addressed separately in the urban sustainability and land change literatures can lead to incorrect conclusions and misleading results when they are not examined jointly: the traditional system of land classification that is based on discrete categories and reinforces the false idea of a rural–urban dichotomy; the spatial quantification of land change that is based on place-based relationships, ignoring the connections between distant places, especially between urban functions and rural land uses; and the implicit assumptions about path dependency and sequential land changes that underlie current conceptualizations of land transitions. We then examine several environmental “grand challenges” and discuss how urban land teleconnections could help research communities frame scientific inquiries. Finally, we point to existing analytical approaches that can be used to advance development and application of the concept. PMID:22550174

  13. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.

  14. Direct numerical simulations of temporally developing hydrocarbon shear flames at elevated pressure: effects of the equation of state and the unity Lewis number assumption

    NASA Astrophysics Data System (ADS)

    Korucu, Ayse; Miller, Richard

    2016-11-01

    Direct numerical simulations (DNS) of temporally developing shear flames are used to investigate both equation of state (EOS) and unity-Lewis (Le) number assumption effects in hydrocarbon flames at elevated pressure. A reduced Kerosene/Air mechanism including a semi-global soot formation/oxidation model is used to study soot formation/oxidation processes in a temporally developing hydrocarbon shear flame operating at both atmospheric and elevated pressures for the cubic Peng-Robinson real fluid EOS. Results are compared to simulations using the ideal gas law (IGL). The results show that while the unity-Le number assumption with the IGL EOS under-predicts the flame temperature for all pressures, with the real fluid EOS it under-predicts the flame temperature for 1 and 35 atm and over-predicts the rest. The soot mass fraction, Ys, is only under-predicted for the 1 atm flame for both IGL and real gas fluid EOS models. While Ys is over-predicted for elevated pressures with the IGL EOS, for the real gas EOS the Ys predictions are similar to results using a non-unity Le model derived from non-equilibrium thermodynamics and real diffusivities. Adopting the unity Le assumption is shown to cause misprediction of Ys, the flame temperature, and the mass fractions of CO, H and OH.

  15. Constructing inquiry: One school's journey to develop an inquiry-based school for teachers and students

    NASA Astrophysics Data System (ADS)

    Sisk-Hilton, Stephanie Lee

    This study examines the two way relationship between an inquiry-based professional development model and teacher enactors. The two year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact. The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.

  16. Temporal Overlap in the Linguistic Processing of Successive Words in Reading: Reply to Pollatsek, Reichle, and Rayner (2006a)

    ERIC Educational Resources Information Center

    Inhoff, Albrecht W.; Radach, Ralph; Eiter, Brianna

    2006-01-01

    A. Pollatsek, E. D. Reichle, and K. Rayner argue that the critical findings in A. W. Inhoff, B. M. Eiter, and R. Radach are in general agreement with core assumptions of sequential attention shift models if additional assumptions and facts are considered. The current authors critically discuss the hypothesized time line of processing and indicate…

  17. An assessment of the impact of FIA's default assumptions on the estimates of coarse woody debris volume and biomass

    Treesearch

    Vicente J. Monleon

    2009-01-01

    Currently, Forest Inventory and Analysis estimation procedures use Smalian's formula to compute coarse woody debris (CWD) volume and assume that logs lie horizontally on the ground. In this paper, the impact of those assumptions on volume and biomass estimates is assessed using 7 years of Oregon's Phase 2 data. Estimates of log volume computed using Smalian...

  18. Simulation of Wave and Current Processes Using Novel, Phase Resolving Models

    DTIC Science & Technology

    2013-09-30

    fundamental technical approach is to represent nearshore water wave systems by retaining Boussinesq scaling assumptions, but without any assumption of... Boussinesq approach that allows for much more freedom in determining the system properties. The resulting systems can have two forms: a classic... of a pressure-Poisson approach to Boussinesq systems. The wave generation-absorption system has now been shown to provide highly accurate results.

  19. 46 CFR 111.52-3 - Systems below 1500 kilowatts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The following short-circuit assumptions must be made for a system with an aggregate generating capacity below... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...

  20. Standardized Effect Sizes for Moderated Conditional Fixed Effects with Continuous Moderator Variables

    PubMed Central

    Bodner, Todd E.

    2017-01-01

    Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404

  1. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    ERIC Educational Resources Information Center

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  2. Accountability Policies and Teacher Decision Making: Barriers to the Use of Data to Improve Practice

    ERIC Educational Resources Information Center

    Ingram, Debra; Louis, Karen Seashore; Schroeder, Roger G.

    2004-01-01

    One assumption underlying accountability policies is that results from standardized tests and other sources will be used to make decisions about school and classroom practice. We explore this assumption using data from a longitudinal study of nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. We use the…

  3. Examining Assumptions in Second Language Research: A Postmodern View. CLCS Occasional Paper No. 45.

    ERIC Educational Resources Information Center

    Masny, Diana

    In a review of literature on second language learning, an opinion is put forth that certain assumptions underlying the theory and the research have influenced researchers' attitudes about second language development and diminished the objectivity of the research. Furthermore, the content of the research must then be examined within its…

  4. Reliability of Children's Testimony in the Era of Developmental Reversals

    ERIC Educational Resources Information Center

    Brainerd, C. J.; Reyna, V. F.

    2012-01-01

    A hoary assumption of the law is that children are more prone to false-memory reports than adults, and hence, their testimony is less reliable than adults'. Since the 1980s, that assumption has been buttressed by numerous studies that detected declines in false memory between early childhood and young adulthood under controlled conditions.…

  5. Chlamydia sequelae cost estimates used in current economic evaluations: does one-size-fit-all?

    PubMed

    Ong, Koh Jun; Soldan, Kate; Jit, Mark; Dunbar, J Kevin; Woodhall, Sarah C

    2017-02-01

    Current evidence suggests that chlamydia screening programmes can be cost-effective, conditional on assumptions within mathematical models. We explored differences in cost estimates used in published economic evaluations of chlamydia screening from seven countries (four papers each from UK and the Netherlands, two each from Sweden and Australia, and one each from Ireland, Canada and Denmark). From these studies, we extracted management cost estimates for seven major chlamydia sequelae. In order to compare the influence of different sequelae considered in each paper and their corresponding management costs on the total cost per case of untreated chlamydia, we applied reported unit sequelae management costs considered in each paper to a set of untreated infection to sequela progression probabilities. All costs were adjusted to 2013/2014 Great British Pound (GBP) values. Sequelae management costs ranged from £171 to £3635 (pelvic inflammatory disease); £953 to £3615 (ectopic pregnancy); £546 to £6752 (tubal factor infertility); £159 to £3341 (chronic pelvic pain); £22 to £1008 (epididymitis); £11 to £1459 (neonatal conjunctivitis) and £433 to £3992 (neonatal pneumonia). Total cost of sequelae per case of untreated chlamydia ranged from £37 to £412. There was substantial variation in cost per case of chlamydia sequelae used in published chlamydia screening economic evaluations, which likely arose from different assumptions about disease management pathways and the country perspectives taken. In light of this, when interpreting these studies, the reader should be satisfied that the cost estimates used sufficiently reflect the perspective taken and current disease management for their respective context.
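
    A hedged sketch of how the per-case figure is assembled (hypothetical progression probabilities and mid-range unit costs in GBP, not reproducing any one of the reviewed papers): the expected sequelae cost per untreated infection is the probability-weighted sum of the unit management costs, which is why differing assumptions about either component move the total so much.

        # Hypothetical progression probabilities and unit costs (GBP); neither column
        # reproduces any single study reviewed above.
        sequelae = {
            # name: (probability per untreated infection, unit management cost in GBP)
            "pelvic inflammatory disease": (0.10, 1500.0),
            "ectopic pregnancy":           (0.02, 2000.0),
            "tubal factor infertility":    (0.03, 3000.0),
            "chronic pelvic pain":         (0.04, 1000.0),
            "epididymitis":                (0.02, 500.0),
        }

        total = sum(p * cost for p, cost in sequelae.values())
        print(f"expected sequelae cost per untreated case: £{total:.0f}")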

  6. The Impact and Cost of Scaling up GeneXpert MTB/RIF in South Africa

    PubMed Central

    Meyer-Rath, Gesine; Schnippel, Kathryn; Long, Lawrence; MacLeod, William; Sanne, Ian; Stevens, Wendy; Pillay, Sagie; Pillay, Yogan; Rosen, Sydney

    2012-01-01

    Objective We estimated the incremental cost and impact on diagnosis and treatment uptake of national rollout of Xpert MTB/RIF technology (Xpert) for the diagnosis of pulmonary TB above the cost of current guidelines for the years 2011 to 2016 in South Africa. Methods We parameterised a population-level decision model with data from national-level TB databases (n = 199,511) and implementation studies. The model follows cohorts of TB suspects from diagnosis to treatment under current diagnostic guidelines or an algorithm that includes Xpert. Assumptions include the number of TB suspects, symptom prevalence of 5.5%, annual suspect growth rate of 10%, and 2010 public-sector salaries and drug and service delivery costs. Xpert test costs are based on data from an in-country pilot evaluation and assumptions about when global volumes allowing cartridge discounts will be reached. Results At full scale, Xpert will increase the number of TB cases diagnosed per year by 30%–37% and the number of MDR-TB cases diagnosed by 69%–71%. It will diagnose 81% of patients after the first visit, compared to 46% currently. The cost of TB diagnosis per suspect will increase by 55% to USD 60–61 and the cost of diagnosis and treatment per TB case treated by 8% to USD 797–873. The incremental capital cost of the Xpert scale-up will be USD 22 million and the incremental recurrent cost USD 287–316 million over six years. Conclusion Xpert will increase both the number of TB cases diagnosed and treated and the cost of TB diagnosis. These results do not include savings due to reduced transmission of TB as a result of earlier diagnosis and treatment initiation. PMID:22693561

  7. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  8. The Nonproliferation Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAJEN,GAURAV; BIRINGER,KENT L.

    2000-07-28

    The aim of this paper is to understand the numerous nuclear-related agreements that involve India and Pakistan, and in so doing identify starting points for future confidence-creating and confidence-building projects. Existing nuclear-related agreements provide a framework under which various projects can be proposed that foster greater nuclear transparency and cooperation in South Asia. The basic assumptions and arguments underlying this paper can be summarized as follows: (1) Increased nuclear transparency between India and Pakistan is a worthwhile objective, as it will lead to the irreversibility of extant nuclear agreements, the prospects of future agreements; and the balance of opacity and transparency required for stability in times of crises; (2) Given the current state of Indian and Pakistani relations, incremental progress in increased nuclear transparency is the most likely future outcome; and (3) Incremental progress can be achieved by enhancing the information exchange required by existing nuclear-related agreements.

  9. The Bottom Boundary Layer.

    PubMed

    Trowbridge, John H; Lentz, Steven J

    2018-01-03

    The oceanic bottom boundary layer extracts energy and momentum from the overlying flow, mediates the fate of near-bottom substances, and generates bedforms that retard the flow and affect benthic processes. The bottom boundary layer is forced by winds, waves, tides, and buoyancy and is influenced by surface waves, internal waves, and stratification by heat, salt, and suspended sediments. This review focuses on the coastal ocean. The main points are that (a) classical turbulence concepts and modern turbulence parameterizations provide accurate representations of the structure and turbulent fluxes under conditions in which the underlying assumptions hold, (b) modern sensors and analyses enable high-quality direct or near-direct measurements of the turbulent fluxes and dissipation rates, and (c) the remaining challenges include the interaction of waves and currents with the erodible seabed, the impact of layer-scale two- and three-dimensional instabilities, and the role of the bottom boundary layer in shelf-slope exchange.

  10. Dynamic Virtual Credit Card Numbers

    NASA Astrophysics Data System (ADS)

    Molloy, Ian; Li, Jiangtao; Li, Ninghui

    Theft of stored credit card information is an increasing threat to e-commerce. We propose a dynamic virtual credit card number scheme that reduces the damage caused by stolen credit card numbers. A user can use an existing credit card account to generate multiple virtual credit card numbers that are either usable for a single transaction or are tied with a particular merchant. We call the scheme dynamic because the virtual credit card numbers can be generated without online contact with the credit card issuers. These numbers can be processed without changing any of the infrastructure currently in place; the only changes will be at the end points, namely, the card users and the card issuers. We analyze the security requirements for dynamic virtual credit card numbers, discuss the design space, propose a scheme using HMAC, and prove its security under the assumption the underlying function is a PRF.
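
    The abstract does not give the exact construction, so the following is only an illustrative sketch of the general idea: derive a one-time card number from the real account number, a per-card secret shared with the issuer, and the transaction context (merchant and a usage counter) using HMAC-SHA256, then append a Luhn check digit so the number is syntactically valid.

        import hmac
        import hashlib

        def luhn_check_digit(digits: str) -> str:
            # Compute the Luhn check digit so the full number passes the standard check.
            total = 0
            for i, d in enumerate(reversed(digits)):
                n = int(d)
                if i % 2 == 0:  # these positions are doubled once the check digit is appended
                    n *= 2
                    if n > 9:
                        n -= 9
                total += n
            return str((10 - total % 10) % 10)

        def virtual_card_number(secret: bytes, real_pan: str, merchant: str, counter: int) -> str:
            # Derive a 16-digit virtual number bound to a merchant and a usage counter.
            msg = f"{real_pan}|{merchant}|{counter}".encode()
            digest = hmac.new(secret, msg, hashlib.sha256).hexdigest()
            body = str(int(digest, 16))[:15]  # 15 digits derived from the HMAC output
            return body + luhn_check_digit(body)

        # Hypothetical usage: the issuer holds the same secret and can recompute the number.
        print(virtual_card_number(b"per-card shared secret", "4111111111111111", "example-merchant", 7))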

  11. Attheya armata along the European Atlantic coast - The turn of the screw on the causes of "surf diatom"

    NASA Astrophysics Data System (ADS)

    Carballeira, R.; Leira, M.; López-Rodríguez, M. C.; Otero, X. L.

    2018-05-01

    Accumulations of the "surf diatom" species Attheya armata (West) Crawford have been detected on the coasts of Galicia (NW Spain) in recent years. However, unlike in other parts of the world, current knowledge of the phenomenon on European coasts remains dispersed and scarce. A combined approach has been used to monitor a sector of the Galician coast and to evaluate chemical and biological parameters in the environment, as well as under in vitro culture conditions, with the aim of studying the causes underlying these episodes. Contrary to the general assumption, our results indicate no direct relationship between the occurrence of ephemeral accumulation episodes and continental discharges or nutrient levels in beach waters. The isotopic reference values for the coastal food web in Galicia allow us to affirm with certainty that A. armata accumulation is dominated by sediment dynamics.

  12. A systematic tale of two differing reviews: evaluating the evidence on public and private sector quality of primary care in low and middle income countries.

    PubMed

    Coarasa, Jorge; Das, Jishnu; Gummerson, Elizabeth; Bitton, Asaf

    2017-04-12

    Systematic reviews are powerful tools for summarizing vast amounts of data in controversial areas; but their utility is limited by methodological choices and assumptions. Two systematic reviews of literature on the quality of private sector primary care in low and middle income countries (LMIC), published in the same journal within a year, reached conflicting conclusions. The difference in findings reflects different review methodologies, but more importantly, a weak underlying body of literature. A detailed examination of the literature cited in both reviews shows that only one of the underlying studies met the gold standard for methodological robustness. Given the current policy momentum on universal health coverage and primary health care reform across the globe, there is an urgent need for high quality empirical evidence on the quality of private versus public sector primary health care in LMIC.

  13. The Bottom Boundary Layer

    NASA Astrophysics Data System (ADS)

    Trowbridge, John H.; Lentz, Steven J.

    2018-01-01

    The oceanic bottom boundary layer extracts energy and momentum from the overlying flow, mediates the fate of near-bottom substances, and generates bedforms that retard the flow and affect benthic processes. The bottom boundary layer is forced by winds, waves, tides, and buoyancy and is influenced by surface waves, internal waves, and stratification by heat, salt, and suspended sediments. This review focuses on the coastal ocean. The main points are that (a) classical turbulence concepts and modern turbulence parameterizations provide accurate representations of the structure and turbulent fluxes under conditions in which the underlying assumptions hold, (b) modern sensors and analyses enable high-quality direct or near-direct measurements of the turbulent fluxes and dissipation rates, and (c) the remaining challenges include the interaction of waves and currents with the erodible seabed, the impact of layer-scale two- and three-dimensional instabilities, and the role of the bottom boundary layer in shelf-slope exchange.

  14. Change-in-ratio methods for estimating population size

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.

    2002-01-01

    Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940’s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
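
    For the simplest two-subclass case with equal encounter probabilities, the pre-removal population size can be estimated as N = (R_x - R * p2) / (p1 - p2), where p1 and p2 are the observed proportions of subclass x before and after the removal, R is the total number removed and R_x the number of subclass-x individuals removed. The sketch below uses hypothetical harvest figures.

        def cir_population_estimate(p1: float, p2: float, removed_x: int, removed_total: int) -> float:
            # Classic two-subclass change-in-ratio estimate of pre-removal population size.
            # p1, p2: proportions of subclass x observed before and after the removal;
            # removed_x: number of subclass-x individuals removed;
            # removed_total: total number of individuals removed.
            # Assumes equal encounter probabilities for the two subclasses.
            return (removed_x - removed_total * p2) / (p1 - p2)

        # Hypothetical harvest example: 60% males before a harvest of 300 animals
        # (250 of them males), 40% males observed afterwards.
        n_hat = cir_population_estimate(p1=0.60, p2=0.40, removed_x=250, removed_total=300)
        print(f"estimated pre-removal population size: {n_hat:.0f}")  # 650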

  15. Fractal nature of aluminum alloys substructures under creep and its implications

    NASA Astrophysics Data System (ADS)

    Fernández, R.; Bruno, G.; González-Doncel, G.

    2018-04-01

    The present work offers an explanation for the variation of the power-law stress exponent, n, with the stress σ normalized to the shear modulus G in aluminum alloys. The approach is based on the assumption that the dislocation structure generated with deformation has a fractal nature. It fully explains the evolution of n with σ/G even beyond the so-called power law breakdown region. Creep data from commercially pure Al99.8%, Al-3.85%Mg, and ingot AA6061 alloy tested at different temperatures and stresses are used to validate the proposed ideas. Finally, it is also shown that the fractal description of the dislocation structure agrees well with current knowledge.
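
    As an illustration of how the stress exponent is normally extracted from creep data, the sketch below estimates n as the log-log slope of the minimum strain rate against the normalized stress; the data points are invented for illustration and are not the Al99.8%, Al-Mg or AA6061 measurements used in the paper.

      import numpy as np

      # Hypothetical steady-state creep data: normalized stress sigma/G and
      # minimum strain rate (1/s).  The stress exponent is the log-log slope
      # n = d ln(rate) / d ln(sigma/G).
      sigma_over_G = np.array([2e-4, 4e-4, 8e-4, 1.6e-3, 3.2e-3])
      strain_rate = np.array([1e-9, 3e-8, 1e-6, 6e-5, 1e-2])

      log_s, log_e = np.log(sigma_over_G), np.log(strain_rate)

      n_global = np.polyfit(log_s, log_e, 1)[0]   # single power-law fit
      n_local = np.gradient(log_e, log_s)         # local slopes: n rises toward
                                                  # the power-law breakdown region
      print(f"global n = {n_global:.1f}")
      print("local n  =", np.round(n_local, 1))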

  16. Simulation of raw water and treatment parameters in support of the disinfection by-products regulatory impact analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regli, S.; Cromwell, J.; Mosher, J.

    The U.S. EPA has undertaken an effort to model how the water supply industry may respond to possible rules and how those responses may affect human health risk. The model is referred to as the Disinfection By-Product Regulatory Analysis Model (DBPRAM). This paper is concerned primarily with presenting and discussing the methods, underlying data, assumptions, limitations and results for the first part of the model. This part of the model covers the creation of sets of simulated water supplies that are representative of the conditions currently encountered by public water supplies with respect to certain raw water quality and water treatment characteristics.

  17. Single-Cell Genomics: Approaches and Utility in Immunology.

    PubMed

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-02-01

    Single-cell genomics offers powerful tools for studying immune cells, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population level. Advances in computer science and single-cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single-cell RNA-sequencing data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Practical quantum digital signature

    NASA Astrophysics Data System (ADS)

    Yin, Hua-Lei; Fu, Yao; Chen, Zeng-Bing

    2016-03-01

    Guaranteeing nonrepudiation, unforgeability as well as transferability of a signature is one of the most vital safeguards in today's e-commerce era. Based on fundamental laws of quantum physics, quantum digital signature (QDS) aims to provide information-theoretic security for this cryptographic task. However, to date, the previously proposed QDS protocols have been impractical due to various challenging problems, most importantly the requirement of authenticated (secure) quantum channels between participants. Here, we present the first quantum digital signature protocol that removes the assumption of authenticated quantum channels while remaining secure against collective attacks. Besides, our QDS protocol can be practically implemented over more than 100 km with current mature technology as used in quantum key distribution.

  19. National health expenditures, 1986-2000

    PubMed Central

    1987-01-01

    Patterns of spending for health during 1986 and beyond reflect a mixture of adherence to and change from historical trends. From a level of $458 billion in 1986—10.9 percent of the GNP—national health expenditures are projected to reach $1.5 trillion by the year 2000—15.0 percent of the GNP. This article presents a provisional estimate of spending in 1986 and projections of spending (under the assumption of current law) through the year 2000. Also discussed are the effects of the demographic composition of the population on spending for health, and how spending would increase in the future simply as a result of the evolution of that composition. PMID:10312184

  20. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews.

    PubMed

    Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G

    2009-04-03

    To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head-to-head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
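
    For readers unfamiliar with the adjusted approach counted above, the sketch below shows the usual Bucher-style calculation on the log odds-ratio scale: the indirect A-versus-B estimate is formed from the two direct comparisons against a common control C, and it is valid only under the similarity assumption discussed in the survey. The summary numbers are made up.

      import math

      # Adjusted indirect comparison (Bucher method) on the log odds-ratio scale.
      # Inputs are hypothetical trial-level summaries against a common control C.
      log_or_AC, se_AC = -0.35, 0.12   # treatment A vs control C
      log_or_BC, se_BC = -0.10, 0.15   # treatment B vs control C

      # Indirect estimate of A vs B; its validity rests on the similarity of the
      # two sets of trials (the key assumption flagged in the survey).
      log_or_AB = log_or_AC - log_or_BC
      se_AB = math.sqrt(se_AC**2 + se_BC**2)

      z = 1.96
      print("OR(A vs B) =", round(math.exp(log_or_AB), 2))
      print("95% CI     = (%.2f, %.2f)" % (math.exp(log_or_AB - z * se_AB),
                                           math.exp(log_or_AB + z * se_AB)))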

  1. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
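
    A minimal simulation sketch of the first-step case follows, under the global null and with unit-norm predictor columns; it assumes that sklearn's lars_path reports knots as alpha = lambda / n_samples (a convention of that library, not of the paper), so the knots are rescaled before forming the statistic.

      import numpy as np
      from sklearn.linear_model import lars_path

      # Covariance test statistic for the first variable to enter the lasso path,
      # simulated under the global null (all true coefficients zero).  For the
      # first step, T1 = lambda1 * (lambda1 - lambda2) / sigma^2, asymptotically Exp(1).
      rng = np.random.default_rng(0)
      n, p, sigma, reps = 100, 50, 1.0, 500
      stats = []

      for _ in range(reps):
          X = rng.standard_normal((n, p))
          X /= np.linalg.norm(X, axis=0)            # unit-norm columns
          y = sigma * rng.standard_normal(n)        # global null: beta = 0

          # Assumed convention: sklearn's alphas are the lasso knots divided by n.
          alphas, _, _ = lars_path(X, y, method="lasso")
          lam1, lam2 = n * alphas[0], n * alphas[1]
          stats.append(lam1 * (lam1 - lam2) / sigma**2)

      print("mean of T1 (close to 1 for Exp(1)):", round(np.mean(stats), 2))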

  2. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    PubMed

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
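
    As one concrete example of how an assumption enters a normalization method, the sketch below computes DESeq-style median-of-ratios size factors, which rest on the assumption that most genes are not differentially expressed between samples; the count matrix is invented for illustration.

      import numpy as np

      # DESeq-style median-of-ratios size factors.  Key assumption: most genes
      # are NOT differentially expressed, so the median ratio to a pseudo-
      # reference sample reflects only sequencing depth.  Counts are made up.
      counts = np.array([[100,  200,  50],      # rows = genes
                         [ 30,   60,  15],      # cols = samples
                         [500, 1000, 250],
                         [ 10,   25,   5]], dtype=float)

      log_counts = np.log(counts)
      log_ref = log_counts.mean(axis=1)         # geometric-mean pseudo-reference
      finite = np.isfinite(log_ref)             # genes with a zero count give
                                                # log_ref = -inf and are dropped
      size_factors = np.exp(np.median(log_counts[finite] - log_ref[finite, None],
                                      axis=0))
      normalized = counts / size_factors

      print("size factors:", np.round(size_factors, 2))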

  3. Ozone chemical equilibrium in the extended mesopause under the nighttime conditions

    NASA Astrophysics Data System (ADS)

    Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.

    2018-01-01

    For the retrieval of atomic oxygen and atomic hydrogen via ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, an assumption of photochemical equilibrium of ozone is often used. In this work, the assumption of chemical equilibrium of ozone near the mesopause region during nighttime is tested. We examine annual calculations from a 3D chemistry-transport model (CTM) and determine the ratio between the correct (modeled) O3 density distributions and their equilibrium values as a function of altitude, latitude, and season. The results show that retrieving atomic oxygen and atomic hydrogen distributions under the assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for locating, in practice, the lower boundary of the region where ozone is in chemical equilibrium near the mesopause.
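
    For orientation, the nighttime equilibrium being tested balances three-body ozone production against loss to atomic hydrogen; in schematic form (standard notation assumed here, not quoted from the paper),

      \[
      [\mathrm{O}_3]_{\mathrm{eq}} \;=\; \frac{k_{\mathrm{O+O_2+M}}\,[\mathrm{O}]\,[\mathrm{O}_2]\,[\mathrm{M}]}{k_{\mathrm{H+O_3}}\,[\mathrm{H}]},
      \]

    so wherever the observed O3 departs from this value, the retrieved [O] and [H] inherit the corresponding error.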

  4. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  5. Existence of Torsional Solitons in a Beam Model of Suspension Bridge

    NASA Astrophysics Data System (ADS)

    Benci, Vieri; Fortunato, Donato; Gazzola, Filippo

    2017-11-01

    This paper studies the existence of solitons, namely stable solitary waves, in an idealized suspension bridge. The bridge is modeled as an unbounded degenerate plate, that is, a central beam with cross sections, and displays two degrees of freedom: the vertical displacement of the beam and the torsional angles of the cross sections. Under fairly general assumptions, we prove the existence of solitons. Under the additional assumption of large tension in the sustaining cables, we prove that these solitons have a nontrivial torsional component. This appears relevant for security since several suspension bridges collapsed due to torsional oscillations.

  6. Comparison of Factor Simplicity Indices for Dichotomous Data: DETECT R, Bentler's Simplicity Index, and the Loading Simplicity Index

    ERIC Educational Resources Information Center

    Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick

    2008-01-01

    A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, test items tap into only one latent trait. This assumption can be assessed several ways, using nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…

  7. Idiographic versus Nomothetic Approaches to Research in Organizations.

    DTIC Science & Technology

    1981-07-01

    alternative methodologic assumption based on intensive examination of one or a few cases under the theoretic assumption of dynamic interactionism is, with...phenomenological studies the researcher may not enter the actual setting but instead examines symbolic meanings as they constitute themselves in...B. Interactionism in personality from a historical perspective. Psychological Bulletin, 1974, 81, 1026-1048. Elashoff, J.D.; & Thoresen, C.E

  8. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait does not follow a normal distribution may occur (Sass et al., 2008; Woods…

  9. Consequences of Assumption Violations Revisited: A Quantitative Review of Alternatives to the One-Way Analysis of Variance "F" Test.

    ERIC Educational Resources Information Center

    Lix, Lisa M.; And Others

    1996-01-01

    Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)

  10. A general method for handling missing binary outcome data in randomized controlled trials.

    PubMed

    Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen

    2014-12-01

    The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials. A total of 489 and 1758 participants from two smoking cessation trials. The abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
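
    To illustrate the general idea of embedding standard analyses within a wider class, the sketch below sweeps an informative-missingness sensitivity parameter between the 'missing = smoking' extreme (delta = 0) and an ignorable-missingness analysis (delta = 1); the arm counts and the exact parameterisation are hypothetical and are not the authors' model.

      import numpy as np

      # Sensitivity analysis for missing binary (abstinence) outcomes.
      # delta is the assumed odds ratio of abstinence for missing vs observed
      # participants within each arm.  All counts are hypothetical.
      def arm_rate(n_abstinent, n_observed, n_missing, delta):
          """Overall abstinence rate under an assumed missingness odds ratio."""
          p_obs = n_abstinent / n_observed
          odds_mis = delta * p_obs / (1.0 - p_obs)
          p_mis = odds_mis / (1.0 + odds_mis)
          return (n_observed * p_obs + n_missing * p_mis) / (n_observed + n_missing)

      # Hypothetical arm summaries: (abstinent, observed, missing)
      intervention = (120, 200, 50)
      control      = ( 80, 200, 60)

      for delta in [0.0, 0.25, 0.5, 1.0]:
          rd = arm_rate(*intervention, delta) - arm_rate(*control, delta)
          print(f"delta = {delta:4.2f}  risk difference = {rd:5.3f}")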

  11. Order information and free recall: evaluating the item-order hypothesis.

    PubMed

    Mulligan, Neil W; Lozito, Jeffrey P

    2007-05-01

    The item-order hypothesis proposes that order information plays an important role in recall from long-term memory, and it is commonly used to account for the moderating effects of experimental design in memory research. Recent research (Engelkamp, Jahn, & Seiler, 2003; McDaniel, DeLosh, & Merritt, 2000) raises questions about the assumptions underlying the item-order hypothesis. Four experiments tested these assumptions by examining the relationship between free recall and order memory for lists of varying length (8, 16, or 24 unrelated words or pictures). Some groups were given standard free-recall instructions, other groups were explicitly instructed to use order information in free recall, and other groups were given free-recall tests intermixed with tests of order memory (order reconstruction). The results for short lists were consistent with the assumptions of the item-order account. For intermediate-length lists, explicit order instructions and intermixed order tests made recall more reliant on order information, but under standard conditions, order information played little role in recall. For long lists, there was little evidence that order information contributed to recall. In sum, the assumptions of the item-order account held for short lists, received mixed support with intermediate lists, and received no support for longer lists.

  12. Farms, Families, and Markets: New Evidence on Completeness of Markets in Agricultural Settings

    PubMed Central

    LaFave, Daniel; Thomas, Duncan

    2016-01-01

    The farm household model has played a central role in improving the understanding of small-scale agricultural households and non-farm enterprises. Under the assumptions that all current and future markets exist and that farmers treat all prices as given, the model simplifies households’ simultaneous production and consumption decisions into a recursive form in which production can be treated as independent of preferences of household members. These assumptions, which are the foundation of a large literature in labor and development, have been tested and not rejected in several important studies including Benjamin (1992). Using multiple waves of longitudinal survey data from Central Java, Indonesia, this paper tests a key prediction of the recursive model: demand for farm labor is unrelated to the demographic composition of the farm household. The prediction is unambiguously rejected. The rejection cannot be explained by contamination due to unobserved heterogeneity that is fixed at the farm level, local area shocks or farm-specific shocks that affect changes in household composition and farm labor demand. We conclude that the recursive form of the farm household model is not consistent with the data. Developing empirically tractable models of farm households when markets are incomplete remains an important challenge. PMID:27688430
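
    The separation test described here has a simple regression form: under recursion, household demographic composition should have no explanatory power for farm labor demand. The sketch below runs a first-differenced version of that regression on simulated data; the variable names, the control, and the data-generating process are purely illustrative and do not reproduce the authors' specification.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Separation test in first differences: under the recursive farm household
      # model, changes in household composition should not predict changes in
      # total farm labor use.  Data below are simulated for illustration only.
      rng = np.random.default_rng(1)
      n_farms = 500

      d = pd.DataFrame({
          "d_adult_males":   rng.integers(-1, 2, n_farms),
          "d_adult_females": rng.integers(-1, 2, n_farms),
          "d_children":      rng.integers(-2, 3, n_farms),
          "d_land":          rng.normal(0, 0.2, n_farms),   # farm-level control
      })
      # Simulate a violation: labor demand responds to adult males in the household.
      d["d_log_labor"] = (0.15 * d["d_adult_males"] + 0.5 * d["d_land"]
                          + rng.normal(0, 0.3, n_farms))

      fit = smf.ols("d_log_labor ~ d_adult_males + d_adult_females + d_children"
                    " + d_land", data=d).fit(cov_type="HC1")
      # Joint test of the demographic terms: rejection is evidence against
      # complete markets / the recursive form of the model.
      print(fit.f_test("(d_adult_males = 0), (d_adult_females = 0), (d_children = 0)"))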

  13. Useful global-change scenarios: current issues and challenges

    NASA Astrophysics Data System (ADS)

    Parson, E. A.

    2008-10-01

    Scenarios are increasingly used to inform global-change debates, but their connection to decisions has been weak and indirect. This reflects the greater number and variety of potential users and scenario needs, relative to other decision domains where scenario use is more established. Global-change scenario needs include common elements, e.g., model-generated projections of emissions and climate change, needed by many users but in different ways and with different assumptions. For these common elements, the limited ability to engage diverse global-change users in scenario development requires extreme transparency in communicating underlying reasoning and assumptions, including probability judgments. Other scenario needs are specific to users, requiring a decentralized network of scenario and assessment organizations to disseminate and interpret common elements and add elements requiring local context or expertise. Such an approach will make global-change scenarios more useful for decisions, but not less controversial. Despite predictable attacks, scenario-based reasoning is necessary for responsible global-change decisions because decision-relevant uncertainties cannot be specified scientifically. The purpose of scenarios is not to avoid speculation, but to make the required speculation more disciplined, more anchored in relevant scientific knowledge when available, and more transparent.

  14. Redundancy and divergence in the amyloid precursor protein family.

    PubMed

    Shariati, S Ali M; De Strooper, Bart

    2013-06-27

    Gene duplication provides genetic material required for functional diversification. An interesting example is the amyloid precursor protein (APP) protein family. The APP gene family has experienced both expansion and contraction during evolution. The three mammalian members have been studied quite extensively in combined knock out models. The underlying assumption is that APP, amyloid precursor like protein 1 and 2 (APLP1, APLP2) are functionally redundant. This assumption is primarily supported by the similarities in biochemical processing of APP and APLPs and on the fact that the different APP genes appear to genetically interact at the level of the phenotype in combined knockout mice. However, unique features in each member of the APP family possibly contribute to specification of their function. In the current review, we discuss the evolution and the biology of the APP protein family with special attention to the distinct properties of each homologue. We propose that the functions of APP, APLP1 and APLP2 have diverged after duplication to contribute distinctly to different neuronal events. Our analysis reveals that APLP2 is significantly diverged from APP and APLP1. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  15. Early myeloid lineage choice is not initiated by random PU.1 to GATA1 protein ratios.

    PubMed

    Hoppe, Philipp S; Schwarzfischer, Michael; Loeffler, Dirk; Kokkaliaris, Konstantinos D; Hilsenbeck, Oliver; Moritz, Nadine; Endele, Max; Filipczyk, Adam; Gambardella, Adriana; Ahmed, Nouraiz; Etzrodt, Martin; Coutu, Daniel L; Rieger, Michael A; Marr, Carsten; Strasser, Michael K; Schauberger, Bernhard; Burtscher, Ingo; Ermakova, Olga; Bürger, Antje; Lickert, Heiko; Nerlov, Claus; Theis, Fabian J; Schroeder, Timm

    2016-07-14

    The mechanisms underlying haematopoietic lineage decisions remain disputed. Lineage-affiliated transcription factors with the capacity for lineage reprogramming, positive auto-regulation and mutual inhibition have been described as being expressed in uncommitted cell populations. This led to the assumption that lineage choice is cell-intrinsically initiated and determined by stochastic switches of randomly fluctuating cross-antagonistic transcription factors. However, this hypothesis was developed on the basis of RNA expression data from snapshot and/or population-averaged analyses. Alternative models of lineage choice therefore cannot be excluded. Here we use novel reporter mouse lines and live imaging for continuous single-cell long-term quantification of the transcription factors GATA1 and PU.1 (also known as SPI1). We analyse individual haematopoietic stem cells throughout differentiation into megakaryocytic-erythroid and granulocytic-monocytic lineages. The observed expression dynamics are incompatible with the assumption that stochastic switching between PU.1 and GATA1 precedes and initiates megakaryocytic-erythroid versus granulocytic-monocytic lineage decision-making. Rather, our findings suggest that these transcription factors are only executing and reinforcing lineage choice once made. These results challenge the current prevailing model of early myeloid lineage choice.

  16. Linking assumptions in amblyopia

    PubMed Central

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  17. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    PubMed Central

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
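
    A minimal sketch of the single time-point idea follows, assuming a generic dataset with follow-up time, an event indicator and covariates: for each horizon t, patients censored before t are dropped and a binary classifier predicts death by t. The simulated data and the network settings are illustrative only, not the configuration used in the study.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      # Single time-point ANN models: one binary classifier per follow-up horizon.
      # X: covariates, time: follow-up in years, event: 1 = died, 0 = censored.
      # Simulated data stand in for a real gastric cancer cohort.
      rng = np.random.default_rng(7)
      n = 1000
      X = rng.normal(size=(n, 6))
      risk = X[:, 0] + 0.5 * X[:, 1]
      time = rng.exponential(scale=np.exp(-0.5 * risk) * 4)
      event = (rng.uniform(size=n) < 0.8).astype(int)      # ~20% censored

      for t in [1, 2, 3, 4, 5]:
          # Keep patients who died by t or were followed past t; patients
          # censored before t carry no label for this horizon.
          usable = (event == 1) | (time >= t)
          y = ((time <= t) & (event == 1)).astype(int)[usable]
          clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0)
          auc = cross_val_score(clf, X[usable], y, cv=5, scoring="roc_auc")
          print(f"{t}-year model: n = {usable.sum():4d}, AUC = {auc.mean():.2f}")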

  18. The Vocational Turn in Adult Literacy Education and the Impact of the International Adult Literacy Survey

    NASA Astrophysics Data System (ADS)

    Druine, Nathalie; Wildemeersch, Danny

    2000-09-01

    The authors critically examine some of the underlying epistemological and theoretical assumptions of the IALS. In doing so, they distinguish between two basic orientations towards literacy. First, the standard approach (of which the IALS is an example) subscribes to the possibility of measuring literacy as abstract, cognitive skills, and endorses the claim that there is an important relationship between literacy skills and economic success in the so-called 'knowledge society.' The second, called a socio-cultural approach, insists on the contextual and power-related character of people's literacy practices. The authors further illustrate that the assumptions of the IALS are rooted in a neo-liberal ideology that forces all members of society to adjust to the exigencies of the globalised economy. In the current, contingent conditions of the risk society, however, it does not seem very wise to limit the learning of adults to enhancing labour-market competencies. Adult education should relate to the concrete literacy practices people already have in their lives. It should make its learners co-responsible actors in their own learning process and participants in a democratic debate on defining the kind of society people want to build.

  19. Are Prescription Opioids Driving the Opioid Crisis? Assumptions vs Facts.

    PubMed

    Rose, Mark Edmund

    2018-04-01

    Sharp increases in opioid prescriptions, and associated increases in overdose deaths in the 2000s, evoked widespread calls to change perceptions of opioid analgesics. Medical literature discussions of opioid analgesics began emphasizing patient and public health hazards. Repetitive exposure to this information may influence physician assumptions. While highly consequential to patients with pain whose function and quality of life may benefit from opioid analgesics, current assumptions about prescription opioid analgesics, including their role in the ongoing opioid overdose epidemic, have not been scrutinized. Information was obtained by searching PubMed, governmental agency websites, and conference proceedings. Opioid analgesic prescribing and associated overdose deaths both peaked around 2011 and are in long-term decline; the sharp overdose increase recorded in 2014 was driven by illicit fentanyl and heroin. Nonmethadone prescription opioid analgesic deaths, in the absence of co-ingested benzodiazepines, alcohol, or other central nervous system/respiratory depressants, are infrequent. Within five years of initial prescription opioid misuse, 3.6% initiate heroin use. The United States consumes 80% of the world opioid supply, but opioid access is nonexistent for 80% and severely restricted for 4.1% of the global population. Many current assumptions about opioid analgesics are ill-founded. Illicit fentanyl and heroin, not opioid prescribing, now fuel the current opioid overdose epidemic. National discussion has often neglected the potentially devastating effects of uncontrolled chronic pain. Opioid analgesic prescribing and related overdoses are in decline, at great cost to patients with pain who have benefited or may benefit from, but cannot access, opioid analgesic therapy.

  20. Testing electrostatic equilibrium in the ionosphere by detailed comparison of ground magnetic deflection and incoherent scatter radar.

    NASA Astrophysics Data System (ADS)

    Cosgrove, R. B.; Schultz, A.; Imamura, N.

    2016-12-01

    Although electrostatic equilibrium is always assumed in the ionosphere, there is no good theoretical or experimental justification for the assumption. In fact, recent theoretical investigations suggest that the electrostatic assumption may be grossly in error. If true, many commonly used modeling methods are placed in doubt. For example, the accepted method for calculating ionospheric conductance, field line integration, may be invalid. In this talk we briefly outline the theoretical research that places the electrostatic assumption in doubt, and then describe how comparison of ground magnetic field data with incoherent scatter radar (ISR) data can be used to test the electrostatic assumption in the ionosphere. We describe a recent experiment conducted for this purpose, in which an array of magnetometers was temporarily installed under the Poker Flat AMISR.

  1. Cut-off characterisation of energy spectra of bright Fermi sources: Current instrument limits and future possibilities

    NASA Astrophysics Data System (ADS)

    Romoli, C.; Taylor, A. M.; Aharonian, F.

    2017-02-01

    In this paper some of the brightest GeV sources observed by the Fermi-LAT were analysed, focusing on their spectral cut-off region. The sources chosen for this investigation were the brightest blazar flares of 3C 454.3 and 3C 279, and the Vela pulsar, reanalysed with the latest Fermi-LAT software. For the study of the spectral cut-off we first explored the Vela pulsar spectrum, whose statistics over the time interval of the 3FGL catalog allowed strong constraints to be obtained on the parameters. We subsequently performed a new analysis of the flaring blazar SEDs. For these sources we obtained constraints on the cut-off parameters under the assumption that their underlying spectral distribution is described by a power law with a stretched exponential cut-off. We then highlighted the significant potential improvements on such constraints from observations with next-generation ground-based Cherenkov telescopes, represented in our study by the Cherenkov Telescope Array (CTA). Adopting currently available simulations for this future observatory, we demonstrate the considerable improvement in cut-off constraints achievable by observations with this new instrument when compared with that achievable by satellite observations.
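
    The assumed spectral shape is a power law with a stretched exponential cut-off; the sketch below fits that form to a synthetic flux spectrum with scipy, simply to show how the cut-off energy and stretching index enter. None of the numbers correspond to the real 3C 454.3, 3C 279 or Vela data.

      import numpy as np
      from scipy.optimize import curve_fit

      # Power law with a stretched exponential cut-off:
      #   dN/dE = N0 * (E/E0)**(-gamma) * exp(-(E/Ecut)**beta)
      # The data points below are synthetic, generated only to illustrate the fit.
      def sed(E, N0, gamma, Ecut, beta):
          E0 = 1.0  # GeV, fixed reference energy
          return N0 * (E / E0) ** (-gamma) * np.exp(-((E / Ecut) ** beta))

      E = np.logspace(-1, 1.5, 25)                      # 0.1 to ~30 GeV
      true = sed(E, 1e-5, 2.0, 3.0, 0.7)
      rng = np.random.default_rng(3)
      flux = true * rng.normal(1.0, 0.1, E.size)        # 10% scatter

      popt, pcov = curve_fit(sed, E, flux, p0=[1e-5, 2.0, 2.0, 1.0],
                             sigma=0.1 * true, absolute_sigma=True,
                             bounds=([0, 0, 0.1, 0.1], [np.inf, 5, 100, 3]))
      errs = np.sqrt(np.diag(pcov))
      for name, v, e in zip(["N0", "gamma", "Ecut", "beta"], popt, errs):
          print(f"{name:5s} = {v:.3g} +/- {e:.2g}")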

  2. The US-DOE ARM/ASR Effort in Quantifying Uncertainty in Ground-Based Cloud Property Retrievals (Invited)

    NASA Astrophysics Data System (ADS)

    Xie, S.; Protat, A.; Zhao, C.

    2013-12-01

    One primary goal of the US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program is to obtain and retrieve cloud microphysical properties from detailed cloud observations using ground-based active and passive remote sensors. However, there is large uncertainty in the retrieved cloud property products. Studies have shown that the uncertainty could arise from instrument limitations, measurement errors, sampling errors, and deficiencies in retrieval algorithm assumptions, as well as from inconsistent input data and constraints used by different algorithms. To quantify the uncertainty in cloud retrievals, a scientific focus group, Quantification of Uncertainties In Cloud Retrievals (QUICR), was recently created by the DOE Atmospheric System Research (ASR) program. This talk will provide an overview of the recent research activities conducted within QUICR and discuss its current collaborations with the European cloud retrieval community and future plans. The goal of QUICR is to develop a methodology for characterizing and quantifying uncertainties in current and future ARM cloud retrievals. The work at LLNL was performed under the auspices of the U.S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344. LLNL-ABS-641258.

  3. pyBadlands: A framework to simulate sediment transport, landscape dynamics and basin stratigraphic evolution through space and time

    PubMed Central

    2018-01-01

    Understanding Earth surface responses, in terms of sediment dynamics, to climatic variability and tectonic forcing is hindered by the limited ability of current models to simulate the long-term evolution of sediment transfer and associated morphological changes. This paper presents pyBadlands, an open-source Python-based framework which computes, over geological time, (1) sediment transport from landmasses to coasts, (2) reworking of marine sediments by longshore currents and (3) development of coral reef systems. pyBadlands is cross-platform, distributed under the GPLv3 license and available on GitHub (http://github.com/badlands-model). Here, we describe the underlying physical assumptions behind the simulated processes and the main options already available in the numerical framework. Along with the source code, a list of hands-on examples is provided that illustrates the model capabilities. In addition, pre- and post-processing classes have been built and are accessible as a companion toolbox which comprises a series of workflows to efficiently build, quantify and explore simulation input and output files. While the framework has been primarily designed for research, its simplicity of use and portability make it a great tool for teaching purposes. PMID:29649301

  4. Best practices for evaluating the capability of nondestructive evaluation (NDE) and structural health monitoring (SHM) techniques for damage characterization

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Lindgren, Eric A.

    2016-02-01

    A comprehensive approach to NDE and SHM characterization error (CE) evaluation is presented that follows the framework of the 'ahat-versus-a' regression analysis for POD assessment. Characterization capability evaluation is typically more complex than current POD evaluations and thus requires engineering and statistical expertise in the model-building process to ensure all key effects and interactions are addressed. Justifying the statistical model choice and its underlying assumptions is key. Several sizing case studies are presented with detailed evaluations of the most appropriate statistical model for each data set. The use of a model-assisted approach is introduced to help assess the reliability of NDE and SHM characterization capability under a wide range of part, environmental and damage conditions. Best practices for using models are presented for both an eddy current NDE sizing case study and a vibration-based SHM case study. The results of these studies highlight the general protocol feasibility, emphasize the importance of evaluating key application characteristics prior to the study, and demonstrate an approach to quantify the role of varying SHM sensor durability and environmental conditions on characterization performance.
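
    At the core of the 'ahat-versus-a' framework is a regression of measured size on true size whose fitted parameters and residual spread feed the characterization-error statements; the sketch below fits that regression and a 95% prediction interval on synthetic flaw sizes (all values are hypothetical).

      import numpy as np
      import statsmodels.api as sm

      # 'ahat-versus-a' style sizing analysis: regress measured size on true size
      # and report the residual standard error, which drives characterization-
      # error (and POD-like) statements.  Flaw sizes below are synthetic.
      rng = np.random.default_rng(11)
      a = rng.uniform(0.5, 5.0, 60)                   # true sizes (mm)
      ahat = 0.2 + 0.9 * a + rng.normal(0, 0.25, 60)  # measured sizes (mm)

      fit = sm.OLS(ahat, sm.add_constant(a)).fit()
      print(fit.params)            # intercept and slope
      print(fit.scale ** 0.5)      # residual standard error

      # 95% prediction interval at a new true size of 2.5 mm
      new = np.column_stack([[1.0], [2.5]])           # [const, a]
      pred = fit.get_prediction(new)
      print(pred.summary_frame(alpha=0.05)[["obs_ci_lower", "obs_ci_upper"]])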

  5. Reduced-Order Direct Numerical Simulation of Solute Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Tchelepi, Hamdi

    2017-11-01

    Pore-scale models are an important tool for analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Current direct numerical simulation (DNS) techniques, while very accurate, are computationally prohibitive for sample sizes that are statistically representative of the porous structure. Reduced-order approaches such as pore-network models (PNM) aim to approximate the pore-space geometry and physics to remedy this problem. Predictions from current techniques, however, have not always been successful. This work focuses on single-phase transport of a passive solute under advection-dominated regimes and delineates the minimum set of approximations that consistently produce accurate PNM predictions. Novel network extraction (discretization) and particle simulation techniques are developed and compared to high-fidelity DNS simulations for a wide range of micromodel heterogeneities and a single sphere pack. Moreover, common modeling assumptions in the literature are analyzed and shown that they can lead to first-order errors under advection-dominated regimes. This work has implications for optimizing material design and operations in manufactured (electrodes) and natural (rocks) porous media pertaining to energy systems. This work was supported by the Stanford University Petroleum Research Institute for Reservoir Simulation (SUPRI-B).

  6. Trends in Mediation Analysis in Nursing Research: Improving Current Practice.

    PubMed

    Hertzog, Melody

    2018-06-01

    The purpose of this study was to describe common approaches used by nursing researchers to test mediation models and evaluate them within the context of current methodological advances. MEDLINE was used to locate studies testing a mediation model and published from 2004 to 2015 in nursing journals. Design (experimental/correlation, cross-sectional/longitudinal, model complexity) and analysis (method, inclusion of test of mediated effect, violations/discussion of assumptions, sample size/power) characteristics were coded for 456 studies. General trends were identified using descriptive statistics. Consistent with findings of reviews in other disciplines, evidence was found that nursing researchers may not be aware of the strong assumptions and serious limitations of their analyses. Suggestions for strengthening the rigor of such studies and an overview of current methods for testing more complex models, including longitudinal mediation processes, are presented.
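
    As a reference point for the practices surveyed above, the sketch below estimates a simple single-mediator indirect effect (the a*b product) with a percentile bootstrap confidence interval on simulated data; it is a minimal illustration of one currently recommended approach, not a substitute for the fuller modelling frameworks discussed in the article.

      import numpy as np
      import statsmodels.api as sm

      # Single-mediator model: X -> M -> Y.  Indirect effect = a * b, with a
      # percentile bootstrap CI.  Data are simulated for illustration.
      rng = np.random.default_rng(42)
      n = 300
      x = rng.normal(size=n)
      m = 0.5 * x + rng.normal(size=n)                 # a = 0.5
      y = 0.4 * m + 0.2 * x + rng.normal(size=n)       # b = 0.4, direct effect 0.2

      def indirect(idx):
          xb, mb, yb = x[idx], m[idx], y[idx]
          a = sm.OLS(mb, sm.add_constant(xb)).fit().params[1]
          b = sm.OLS(yb, sm.add_constant(np.column_stack([mb, xb]))).fit().params[1]
          return a * b

      boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
      est = indirect(np.arange(n))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect = {est:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")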

  7. The Ptolemaic Approach to Ionospheric Electrodynamics

    NASA Astrophysics Data System (ADS)

    Vasyliunas, V. M.

    2010-12-01

    The conventional treatment of ionospheric electrodynamics (as expounded in standard textbooks and tutorial publications) consists of a set of equations, plus verbal descriptions of the physical processes supposedly represented by the equations. Key assumptions underlying the equations are: electric field equal to the gradient of a potential, electric current driven by an Ohm's law (with both electric-field and neutral-wind terms), continuity of current then giving a second-order elliptic differential equation for calculating the potential; as a separate assumption, ion and electron bulk flows are determined by ExB drifts plus collision effects. The verbal descriptions are in several respects inconsistent with the equations; furthermore, both the descriptions and the equations are not compatible with the more rigorous physical understanding derived from the complete plasma and Maxwell's equations. The conventional ionospheric equations are applicable under restricted conditions, corresponding to a quasi-steady-state equilibrium limit, and are thus intrinsically incapable of answering questions about causal relations or dynamic developments. Within their limited range of applicability, however, the equations are in most cases adequate to explain the observations, despite the deficient treatment of plasma physics. (A historical precedent that comes to mind is that of astronomical theory at the time of Copernicus and for some decades afterwards, when the Ptolemaic scheme could explain the observations at least as well if not better than the Copernican. Some of the verbal descriptions in conventional ionospheric electrodynamics might be considered Ptolemaic also in the more literal sense of being formulated exclusively in terms of a fixed Earth.) I review the principal differences between the two approaches, point out some questions where the conventional ionospheric theory does not provide unambiguous answers even within its range of validity (e.g., topside and bottomside boundary conditions on electrodynamics), and illustrate with some simple examples of how a neutral-wind dynamo really develops.
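
    For readers who want the equations that the verbal description refers to, the conventional electrostatic set can be summarized schematically (a hedged sketch in standard notation, not a quotation from the talk) as

      \[
      \mathbf{E} = -\nabla\Phi, \qquad
      \mathbf{J} = \boldsymbol{\sigma}\,(\mathbf{E} + \mathbf{u}\times\mathbf{B}), \qquad
      \nabla\cdot\mathbf{J} = 0
      \;\;\Rightarrow\;\;
      \nabla\cdot\big[\boldsymbol{\Sigma}\,\nabla\Phi\big] = \nabla\cdot\big[\boldsymbol{\Sigma}\,(\mathbf{u}\times\mathbf{B})\big],
      \]

    with Σ the height-integrated conductivity tensor; the plasma bulk flow is then taken to be the E×B drift plus collisional corrections.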

  8. Quantifying Current and Future Groundwater Storage in Snowmelt Dominated High Elevation Meadows of the Sierra Nevada Mountains, CA

    NASA Astrophysics Data System (ADS)

    Lowry, C.; Ciruzzi, D. M.

    2016-12-01

    In a warming climate, snowmelt-dominated mountain systems such as the Sierra Nevada Mountains of California have limited water storage potential. Receding glaciers and recent drought in the Sierra Nevada Mountains have resulted in reduced stream flow, restricting water availability for mountain vegetation. These geologic settings provide limited opportunities for groundwater storage due to a thin soil layer overlying expansive granitic bedrock. Yet high-elevation meadows, which have formed in small depressions within the granitic bedrock, represent the only long-term storage reservoirs for water within the region. Through field observations and numerical modeling, this research investigates the roles of meadow geometry, sediment properties, and topographic gradient in retaining snowmelt-derived groundwater recharge. These controlling factors affecting groundwater storage dynamics and surface-water outflows are evaluated under both current and drier climatic conditions. Results show differential changes in seasonal storage of snowmelt and surface-water outflow under varying climate scenarios. The magnitude and timing of water storage and release are highly dependent on bedrock geometry and position within the watershed. Results show a decrease of up to 20% in groundwater storage under drier future climates, resulting in a shift from long-term storage to steady release of water from these meadows. Testing of prior assumptions, such as uniform thickness, shows that they overestimate meadow groundwater storage, resulting in higher volumes of water being released to streams earlier than observed in previous simulations. These results have implications for predicting water availability for downstream users as well as providing water for root water uptake of meadow vegetation under both current and future conditions.

  9. An experimental study of nonlinear dynamic system identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1990-01-01

    A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  10. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for statistical inference on the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined in terms of the ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This can make an NI trial impractical, particularly when using a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test (that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1) is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  11. Avoided economic impacts of energy demand changes by 1.5 and 2 °C climate stabilization

    NASA Astrophysics Data System (ADS)

    Park, Chan; Fujimori, Shinichiro; Hasegawa, Tomoko; Takakura, Jun’ya; Takahashi, Kiyoshi; Hijioka, Yasuaki

    2018-04-01

    Energy demand associated with space heating and cooling is expected to be affected by climate change. There are several global projections of space heating and cooling use that take climate change into consideration, but a comprehensive assessment of the uncertainty in socioeconomic and climate conditions, including a 1.5 °C global mean temperature change, has never been carried out. This paper shows the economic impact of changes in energy demand for space heating and cooling under multiple socioeconomic and climatic conditions. We use three shared socioeconomic pathways as socioeconomic conditions. For climate conditions, we use two representative concentration pathways that correspond to 4.0 °C and 2.0 °C scenarios, and a 1.5 °C scenario derived from the 2.0 °C scenario by assumption, in conjunction with five general circulation models. We find that the economic impacts of climate change are largely affected by socioeconomic assumptions, and global GDP change rates range from +0.21% to -2.01% in 2100 under the 4.0 °C scenario, depending on the socioeconomic condition. A sensitivity analysis that varies the thresholds of heating and cooling degree days shows that the threshold is a strong factor behind these differences. Meanwhile, the impact under the 1.5 °C scenario is small regardless of socioeconomic assumptions (-0.02% to -0.06%). The economic loss caused by differences in socioeconomic assumptions under the 1.5 °C scenario is much smaller than that under the 2 °C scenario, which implies that stringent climate mitigation can act as a hedge against the risks arising from diverse socioeconomic development.

  12. The Army’s Institutional Values: Current Doctrine and the Army’s Values Training Strategy

    DTIC Science & Technology

    2001-06-01

    laws to the military over the course of 20 years, at first to combat racism and sexism, had opened the door to endless litigation... Writers of... consistent with organizational values construction theory? 7. Are the Army's values training initiatives consistent with Army training doctrine? The... doctrine attempts to provide the systematic framework for good leader theory. Assumptions: This study begins with several assumptions. These

  13. A simple theory of back surface field (BSF) solar cells

    NASA Technical Reports Server (NTRS)

    Von Roos, O.

    1978-01-01

    A theory of an n-p-p+ junction is developed, based entirely on Shockley's depletion layer approximation. Under the further assumption of uniform doping, the electrical characteristics of solar cells as a function of all relevant parameters (cell thickness, diffusion lengths, etc.) can quickly be ascertained with a minimum of computer time. Two effects contribute to the superior performance of a BSF cell (n-p-p+ junction) as compared to an ordinary solar cell (n-p junction). The sharing of the applied voltage between the two junctions (the n-p and the p-p+ junction) decreases the dark current, and the reflection of minority carriers by the built-in electric field of the p-p+ junction increases the short-circuit current. The theory predicts an increase in the open-circuit voltage (Voc) with a decrease in cell thickness. Although the short-circuit current decreases at the same time, the efficiency of the cell is virtually unaltered in going from a thickness of 200 microns to a thickness of 50 microns. The importance of this fact for space missions where large power-to-weight ratios are required is obvious.
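
    To make the stated mechanism concrete, the sketch below evaluates the ideal-diode relation Voc = (kT/q) ln(Jsc/J0 + 1) for a nominal cell and for a cell whose dark saturation current is reduced and whose short-circuit current is slightly increased, as the BSF argument describes; the numerical values are illustrative and are not taken from the paper.

      import numpy as np

      # Ideal-diode open-circuit voltage: Voc = (kT/q) * ln(Jsc/J0 + 1).
      # Illustrative values only; they are not the paper's cell parameters.
      kT_over_q = 0.02585          # volts at ~300 K

      def voc(jsc, j0):
          return kT_over_q * np.log(jsc / j0 + 1.0)

      jsc, j0 = 30e-3, 1e-12                     # A/cm^2, ordinary n-p cell
      jsc_bsf, j0_bsf = 31e-3, 1e-13             # BSF: lower dark current, higher Jsc

      print(f"ordinary cell Voc: {voc(jsc, j0)*1e3:.0f} mV")
      print(f"BSF cell Voc:      {voc(jsc_bsf, j0_bsf)*1e3:.0f} mV")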

  14. Development of an automated ammunition processing system for battlefield use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Speaks, D.M.; Chesser, J.B.; Lloyd, P.D.

    1995-03-01

    The Future Armored Resupply Vehicle (FARV) will be the companion ammunition resupply vehicle to the Advanced Field Artillery System (AFAS). These systems are currently being investigated by the US Army for future acquisition. The FARV will sustain the AFAS with ammunition and fuel and will significantly increase capabilities over current resupply vehicles. Currently, ammunition is transferred to field artillery almost entirely by hand. The level of automation to be included in the FARV is still under consideration. At the request of the US Army's Project Manager, AFAS/FARV, Oak Ridge National Laboratory (ORNL) identified and evaluated various concepts for the automated upload, processing, storage, and delivery equipment for the FARV. ORNL, working with the sponsor, established basic requirements and assumptions for concept development and the methodology for concept selection. A preliminary concept has been selected, and the associated critical technologies have been identified. ORNL has provided technology demonstrations of many of these critical technologies. A technology demonstrator which incorporates all individual components into a total process demonstration is planned for late FY 1995.

  15. Evaluation of wet-line depth-correction methods for cable-suspended current meters

    USGS Publications Warehouse

    Coon, W.F.; Futrell, James C.

    1986-01-01

    Wet-line depth corrections for cable-suspended current meter and weight not perpendicular to the water surface have been evaluated using cable-suspended weights towed by a boat in still water. A fathometer was used to track a Columbus sounding weight and to record its actual depth for several apparent depths, weight sizes, and towed velocities. Cable strumming, tension, and weight veer are noted. Results of this study suggest possible differences between observed depth corrections and corrections obtained from the wet-line correction table currently in use. These differences may have resulted from test conditions which deviated from the inherent assumptions of the wet-line table: (1) drag on the weight in the sounding position at the bottom of a stream can be neglected; and (2) the distribution of horizontal drag on the sounding line is in accordance with the variation of velocity with depth. Observed depth corrections were compared to wet-line table values used for determining the 0.8-depth position of the sounding weight under these conditions; the results indicate that questionable differences exist. (Lantz-PTT)

  16. Intonation in neurogenic foreign accent syndrome.

    PubMed

    Kuschmann, Anja; Lowit, Anja; Miller, Nick; Mennen, Ineke

    2012-01-01

    Foreign accent syndrome (FAS) is a motor speech disorder in which changes to segmental as well as suprasegmental aspects lead to the perception of a foreign accent in speech. This paper focuses on one suprasegmental aspect, namely that of intonation. It provides an in-depth analysis of the intonation system of four speakers with FAS with the aim of establishing the intonational changes that have taken place as well as their underlying origin. Using the autosegmental-metrical framework of intonational analysis, four different levels of intonation, i.e., inventory, distribution, realisation and function, were examined in short sentences. Results revealed that the speakers with FAS had the same structural inventory at their disposal as the control speakers, but that they differed from the latter in relation to the distribution, implementation and functional use of their inventory. The current results suggest that these intonational changes cannot be entirely attributed to an underlying intonation deficit but reflect secondary manifestations of physiological constraints affecting speech support systems and compensatory strategies. These findings have implications for the debate surrounding intonational deficits in FAS, advocating a reconsideration of current assumptions regarding the underlying nature of intonation impairment in FAS. The reader will be able to (1) explain the relevance of intonation in defining foreign accent syndrome; (2) describe the process of intonation analysis within the autosegmental-metrical (AM) framework; and (3) discuss the manifestation of intonation changes in FAS at the different levels of intonation and their potential underlying nature. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.
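
    To illustrate how the reported range of benefit-cost ratios arises, the sketch below mirrors the uncertain inputs named above (spillover, duration of effect, risk reduction). Every number except the 37.4% behaviour-change rate is an invented assumption, not EFNEP data.

```python
# Hedged benefit-cost sensitivity sketch: the ratio swings widely as the
# unobserved inputs move across plausible ranges.
def benefit_cost_ratio(participants, change_rate, cost_per_illness,
                       baseline_risk, risk_reduction, spillover,
                       years_effective, program_cost):
    protected = participants * change_rate * spillover
    annual_benefit = protected * baseline_risk * risk_reduction * cost_per_illness
    return annual_benefit * years_effective / program_cost

low = benefit_cost_ratio(5000, 0.374, 1500, 0.25, 0.05, 1.0, 1, 250_000)
high = benefit_cost_ratio(5000, 0.374, 1500, 0.25, 0.20, 3.0, 5, 250_000)
print(low, high)   # the spread shows how strongly results hinge on assumptions
```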

  18. Determining the Impact of Steady-State PV Fault Current Injections on Distribution Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    This report investigates the fault current contribution from a single large PV system and the impact it has on existing distribution overcurrent protection devices. Assumptions are made about the modeling of the PV system under fault to perform exhaustive steady-state fault analyses throughout distribution feeder models. Each PV interconnection location is tested to determine how the size of the PV system affects the fault current measured by each protection device. This data is then searched for logical conditions that indicate whether a protection device has operated in a manner that will cause more customer outages due to the addition of the PV system. This is referred to as a protection issue, and there are four unique types of issues that have been identified in the study. The PV system size at which any issue occurs is recorded to determine the feeder's PV hosting capacity limitations due to interference with protection settings. The analysis is carried out on six feeder models. The report concludes with a discussion of the prevalence and cause of each protection issue caused by PV system fault current.

  19. A dynamic analysis of the radiation excitation from the activation of a current collecting system in space

    NASA Technical Reports Server (NTRS)

    Wang, J.; Hastings, D. E.

    1991-01-01

    Current collecting systems moving in the ionosphere will induce electromagnetic wave radiation. The commonly used static analysis is incapable of studying the situation when such systems undergo transient processes. A dynamic analysis has been developed, and the radiation excitation processes are studied. This dynamic analysis is applied to study the temporal wave radiation from the activation of current collecting systems in space. The global scale electrodynamic interactions between a space-station-like structure and the ionospheric plasma are studied. The temporal evolution and spatial propagation of the electric wave field after the activation are described. The wave excitations by tethered systems are also studied. The dependencies of the temporal Alfven wave and lower hybrid wave radiation on the activation time and the space system structure are discussed. It is shown that the characteristics of wave radiation are determined by the matching of two sets of characteristic frequencies, and a rapid change in the current collection can give rise to substantial transient radiation interference. The limitations of the static and linear analysis are examined, and the condition under which the static assumption is valid is obtained.

  20. An Experimental Evaluation of Blockage Corrections for Current Turbines

    NASA Astrophysics Data System (ADS)

    Ross, Hannah; Polagye, Brian

    2017-11-01

    Flow confinement has been shown to significantly alter the performance of turbines that extract power from water currents. These performance effects are related to the degree of constraint, defined by the ratio of turbine projected area to channel cross-sectional area. This quantity is referred to as the blockage ratio. Because it is often desirable to adjust experimental observations in water channels to unconfined conditions, analytical corrections for both wind and current turbines have been derived. These are generally based on linear momentum actuator disk theory but have been applied to turbines without experimental validation. This work tests multiple blockage corrections on performance and thrust data from a cross-flow turbine and porous plates (experimental analogues to actuator disks) collected in laboratory flumes at blockage ratios ranging between 10 and 35%. To isolate the effects of blockage, the Reynolds number, Froude number, and submergence depth were held constant while the channel width was varied. Corrected performance data are compared to performance in a towing tank at a blockage ratio of less than 5%. In addition to examining the accuracy of each correction, underlying assumptions are assessed to determine why some corrections perform better than others. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082 and the Naval Facilities Engineering Command (NAVFAC).

  1. International Education: Putting Up or Shutting Up.

    ERIC Educational Resources Information Center

    Hayden, Rose L.

    The current status, problems, and future trends of education for global awareness are outlined. Currently, global realities and interdependencies are such that traditional assumptions about international affairs and education are no longer operative. Nor is international education as a discipline conceptually or structurally responding to…

  2. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
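
    A brief simulation makes the closed-form relationship concrete. This is a sketch under the equal-variance binormal assumption described above; the parameter values are illustrative, not those of the paper.

```python
import numpy as np
from scipy.stats import norm

# Binormal sketch: X ~ N(mu0, sigma^2) in non-cases and N(mu1, sigma^2) in cases.
# Then c = Phi(d / sqrt(2)) with d = (mu1 - mu0) / sigma, i.e. a function of the
# product of the standard deviation and the per-unit log-odds ratio (mu1 - mu0) / sigma^2.
rng = np.random.default_rng(0)
mu0, mu1, sigma, n = 0.0, 1.0, 1.0, 5000

x0 = rng.normal(mu0, sigma, n)   # explanatory variable, non-cases
x1 = rng.normal(mu1, sigma, n)   # explanatory variable, cases

# Empirical c-statistic: probability a random case scores above a random non-case.
emp_c = (x1[:, None] > x0[None, :]).mean()
theo_c = norm.cdf((mu1 - mu0) / (sigma * np.sqrt(2.0)))
print(emp_c, theo_c)             # both close to 0.76
```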

  3. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  4. Cognitive neuroenhancement: false assumptions in the ethical debate.

    PubMed

    Heinz, Andreas; Kipke, Roland; Heimann, Hannah; Wiesing, Urban

    2012-06-01

    The present work critically examines two assumptions frequently stated by supporters of cognitive neuroenhancement. The first, explicitly methodological, assumption is the supposition of effective and side effect-free neuroenhancers. However, there is an evidence-based concern that the most promising drugs currently used for cognitive enhancement can be addictive. Furthermore, this work describes why the neuronal correlates of key cognitive concepts, such as learning and memory, are so deeply connected with mechanisms implicated in the development and maintenance of addictive behaviour that modification of these systems may inevitably run the risk of addiction to the enhancing drugs. Such a potential risk of addiction could only be falsified by in-depth empirical research. The second, implicit, assumption is that research on neuroenhancement does not pose a serious moral problem. However, the potential for addiction, along with arguments related to research ethics and the potential social impact of neuroenhancement, could invalidate this assumption. It is suggested that ethical evaluation needs to consider the empirical data as well as the question of whether and how such empirical knowledge can be obtained.

  5. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Slipping Anchor? Testing the Vignettes Approach to Identification and Correction of Reporting Heterogeneity

    PubMed Central

    d’Uva, Teresa Bago; Lindeboom, Maarten; O’Donnell, Owen; van Doorslaer, Eddy

    2011-01-01

    We propose tests of the two assumptions under which anchoring vignettes identify heterogeneity in reporting of categorical evaluations. Systematic variation in the perceived difference between any two vignette states is sufficient to reject vignette equivalence. Response consistency (the respondent uses the same response scale to evaluate the vignette and herself) is testable given sufficiently comprehensive objective indicators that independently identify response scales. Both assumptions are rejected for reporting of cognitive and physical functioning in a sample of older English individuals, although a weaker test resting on less stringent assumptions does not reject response consistency for cognition. PMID:22184479

  7. In Pursuit of Improving Airburst and Ground Damage Predictions: Recent Advances in Multi-Body Aerodynamic Testing and Computational Tools Validation

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian

    2017-01-01

    An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds and the spread of fragments is needed for a true predictive capability. Current models use heuristic arguments and assumptions such as pancaking or point source explosive energy release at pre-determined altitude or an assumed fragmentation spread rate to predict airburst damage. A multi-year collaboration between German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.

  8. Mass Function of Galaxy Clusters in Relativistic Inhomogeneous Cosmology

    NASA Astrophysics Data System (ADS)

    Ostrowski, Jan J.; Buchert, Thomas; Roukema, Boudewijn F.

    The current cosmological model (ΛCDM) with the underlying FLRW metric relies on the assumption of local isotropy, hence homogeneity of the Universe. Difficulties arise when one attempts to justify this model as an average description of the Universe from first principles of general relativity, since in general, the Einstein tensor built from the averaged metric is not equal to the averaged stress-energy tensor. In this context, the discrepancy between these quantities is called "cosmological backreaction" and has been the subject of scientific debate among cosmologists and relativists for more than 20 years. Here we present one of the methods to tackle this problem, i.e. averaging the scalar parts of the Einstein equations, together with its application, the cosmological mass function of galaxy clusters.

  9. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  10. A differential equation for the Generalized Born radii.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2013-06-28

    The Generalized Born (GB) model offers a convenient way of representing electrostatics in complex macromolecules like proteins or nucleic acids. The computation of atomic GB radii is currently performed by different non-local approaches involving volume or surface integrals. Here we obtain a non-linear second-order partial differential equation for the Generalized Born radius, which may be solved using local iterative algorithms. The equation is derived under the assumption that the usual GB approximation to the reaction field obeys Laplace's equation. The equation admits as particular solutions the correct GB radii for the sphere and the plane. The tests performed on a set of 55 different proteins show an overall agreement with other reference GB models and "perfect" Poisson-Boltzmann based values.

  11. Evolution of regulatory targets for drinking water quality.

    PubMed

    Sinclair, Martha; O'Toole, Joanne; Gibney, Katherine; Leder, Karin

    2015-06-01

    The last century has been marked by major advances in the understanding of microbial disease risks from water supplies and significant changes in expectations of drinking water safety. The focus of drinking water quality regulation has moved progressively from simple prevention of detectable waterborne outbreaks towards adoption of health-based targets that aim to reduce infection and disease to a level well below detection limits at the community level. This review outlines the changes in understanding of community disease and waterborne risks that prompted development of these targets, and also describes their underlying assumptions and current context. Issues regarding the appropriateness of selected target values, and how continuing changes in knowledge and practice may influence their evolution, are also discussed.

  12. Portable Life Support Subsystem Thermal Hydraulic Performance Analysis

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce; Pinckney, John; Conger, Bruce

    2010-01-01

    This paper presents the current state of the thermal hydraulic modeling efforts being conducted for the Constellation Space Suit Element (CSSE) Portable Life Support Subsystem (PLSS). The goal of these efforts is to provide realistic simulations of the PLSS under various modes of operation. The PLSS thermal hydraulic model simulates the thermal, pressure, flow characteristics, and human thermal comfort related to the PLSS performance. This paper presents modeling approaches and assumptions as well as component model descriptions. Results from the models are presented that show PLSS operations at steady-state and transient conditions. Finally, conclusions and recommendations are offered that summarize results, identify PLSS design weaknesses uncovered during review of the analysis results, and propose areas for improvement to increase model fidelity and accuracy.

  13. Modularity of logarithmic parafermion vertex algebras

    NASA Astrophysics Data System (ADS)

    Auger, Jean; Creutzig, Thomas; Ridout, David

    2018-05-01

    The parafermionic cosets Ck = Com(H, Lk(sl2)) are studied for negative admissible levels k, as are certain infinite-order simple current extensions Bk of Ck. Under the assumption that the tensor theory considerations of Huang, Lepowsky and Zhang apply to Ck, irreducible Ck- and Bk-modules are obtained from those of Lk(sl2). Assuming the validity of a certain Verlinde-type formula likewise gives the Grothendieck fusion rules of these irreducible modules. Notably, there are only finitely many irreducible Bk-modules. The irreducible Ck- and Bk-characters are computed and the latter are shown, when supplemented by pseudotraces, to carry a finite-dimensional representation of the modular group. The natural conjecture then is that the Bk are C2-cofinite vertex operator algebras.

  14. Sensitivity to imputation models and assumptions in receiver operating characteristic analysis with incomplete data

    PubMed Central

    Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.

    2015-01-01

    Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316

  15. Speed-of-light limitations in passive linear media

    NASA Astrophysics Data System (ADS)

    Welters, Aaron; Avniel, Yehuda; Johnson, Steven G.

    2014-08-01

    We prove that well-known speed-of-light restrictions on electromagnetic energy velocity can be extended to a new level of generality, encompassing even nonlocal chiral media in periodic geometries, while at the same time weakening the underlying assumptions to only passivity and linearity of the medium (either with a transparency window or with dissipation). As was also shown by other authors under more limiting assumptions, passivity alone is sufficient to guarantee causality and positivity of the energy density (with no thermodynamic assumptions). Our proof is general enough to include a very broad range of material properties, including anisotropy, bianisotropy (chirality), nonlocality, dispersion, periodicity, and even delta functions or similar generalized functions. We also show that the "dynamical energy density" used by some previous authors in dissipative media reduces to the standard Brillouin formula for dispersive energy density in a transparency window. The results in this paper are proved by exploiting deep results from linear-response theory, harmonic analysis, and functional analysis that had previously not been brought together in the context of electrodynamics.

  16. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
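
    As a point of reference, the core expected-casualty bookkeeping these tools perform can be sketched as follows; the fragment areas, footprint probabilities, and population densities below are invented placeholders, not outputs of any NASA model.

```python
import numpy as np

# Expected-casualty sketch: E_c = (total casualty area of surviving debris)
# summed against population density, weighted by where the footprint may land.
casualty_areas = np.array([0.5, 1.2, 8.0])          # m^2, per surviving fragment
cell_prob = np.array([0.10, 0.25, 0.40, 0.25])      # P(footprint lands in cell)
cell_density = np.array([0.0, 5e-6, 5e-5, 1e-6])    # people per m^2 in each cell

expected_casualties = casualty_areas.sum() * np.sum(cell_prob * cell_density)
print(expected_casualties)   # ~2e-4; often compared against a 1-in-10,000 requirement
```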

  17. Analysis of the value of battery storage with wind and photovoltaic generation to the Sacramento Municipal Utility District

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaininger, H.W.

    1998-08-01

    This report describes the results of an analysis to determine the economic and operational value of battery storage to wind and photovoltaic (PV) generation technologies to the Sacramento Municipal Utility District (SMUD) system. The analysis approach consisted of performing a benefit-cost economic assessment using established SMUD financial parameters, system expansion plans, and current system operating procedures. This report presents the results of the analysis. Section 2 describes expected wind and PV plant performance. Section 3 describes expected benefits to SMUD associated with employing battery storage. Section 4 presents preliminary benefit-cost results for battery storage added at the Solano wind plant and the Hedge PV plant. Section 5 presents conclusions and recommendations resulting from this analysis. The results of this analysis should be reviewed subject to the following caveat. The assumptions and data used in developing these results were based on reports available from and interaction with appropriate SMUD operating, planning, and design personnel in 1994 and early 1995 and are compatible with financial assumptions and system expansion plans as of that time. Assumptions and SMUD expansion plans have changed since then. In particular, SMUD did not install the additional 45 MW of wind that was planned for 1996. Current SMUD expansion plans and assumptions should be obtained from appropriate SMUD personnel.

  18. Questioning the "big assumptions". Part I: addressing personal contradictions that impede professional development.

    PubMed

    Bowe, Constance M; Lahey, Lisa; Armstrong, Elizabeth; Kegan, Robert

    2003-08-01

    The ultimate success of recent medical curriculum reforms is, in large part, dependent upon the faculty's ability to adopt and sustain new attitudes and behaviors. However, like many New Year's resolutions, sincere intent to change may be short lived and followed by a discouraging return to old behaviors. Failure to sustain the initial resolve to change can be misinterpreted as a lack of commitment to one's original goals and eventually lead to greater effort expended in rationalizing the status quo rather than changing it. The present article outlines how a transformative process that has proven to be effective in managing personal change, Questioning the Big Assumptions, was successfully used in an international faculty development program for medical educators to enhance individual personal satisfaction and professional effectiveness. This process systematically encouraged participants to explore and proactively address currently operative mechanisms that could stall their attempts to change at the professional level. The applications of the Big Assumptions process in faculty development helped individuals to recognize and subsequently utilize unchallenged and deep rooted personal beliefs to overcome unconscious resistance to change. This approach systematically led participants away from circular griping about what was not right in their current situation to identifying the actions that they needed to take to realize their individual goals. By thoughtful testing of personal Big Assumptions, participants designed behavioral changes that could be broadly supported and, most importantly, sustained.

  19. Contemporary Inventional Theory: An Aristotelian Model.

    ERIC Educational Resources Information Center

    Skopec, Eric W.

    Contemporary rhetoricians are concerned with the re-examination of classical doctrines in the hope of finding solutions to current problems. In this study, the author presents a methodological perspective consistent with current interests, by re-examining the assumptions that underlie each classical precept. He outlines an inventional system based…

  20. Single neuron computation: from dynamical system to feature detector.

    PubMed

    Hong, Sungho; Agüera y Arcas, Blaise; Fairhall, Adrienne L

    2007-12-01

    White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input-output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular, the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive order time derivatives.
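
    A toy example of the white-noise (reverse-correlation) approach described above: compute the spike-triggered average of a threshold unit driven by Gaussian noise. The "neuron" here is a stand-in for illustration, not the dynamical model analysed in the paper, and all parameters are assumed.

```python
import numpy as np

# Reverse-correlation sketch: recover the spike-triggered average (STA) of a toy
# threshold unit driven by white noise.
rng = np.random.default_rng(1)
n_steps, window = 200_000, 100

stimulus = rng.normal(0.0, 1.0, n_steps)            # white-noise input current
kernel = np.exp(-np.arange(window) / 20.0)          # toy causal filter (lag -> weight)
drive = np.convolve(stimulus, kernel, mode='full')[:n_steps]
spike_times = np.flatnonzero(drive > 3.0 * drive.std())
spike_times = spike_times[spike_times >= window]

# STA: mean stimulus segment preceding each spike (last entry = the spike bin).
sta = np.mean([stimulus[t - window + 1:t + 1] for t in spike_times], axis=0)
print(len(spike_times), sta.shape)   # the STA approximates the (time-reversed) filter
```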

  1. A numerical study of diurnally varying surface temperature on flow patterns and pollutant dispersion in street canyons

    NASA Astrophysics Data System (ADS)

    Tan, Zijing; Dong, Jingliang; Xiao, Yimin; Tu, Jiyuan

    2015-03-01

    The impacts of the diurnal variation of surface temperature on street canyon flow patterns and pollutant dispersion are investigated using a two-dimensional street canyon model under different thermal stratifications. Unevenly distributed street temperature conditions and a user-defined wall function representing the heat transfer between the air and the street canyon are integrated into the numerical model. The prediction accuracy of this model is successfully validated against a published wind tunnel experiment. A series of numerical simulations representing four time scenarios (Morning, Afternoon, Noon, and Night) are then performed at different bulk Richardson numbers (Rb). The results demonstrate that unevenly distributed street temperature conditions significantly alter street canyon flow structure and pollutant dispersion characteristics compared with the conventional uniform street temperature assumption, especially for the morning event. Moreover, air flow patterns and pollutant dispersion are greatly influenced by the diurnal variation of surface temperature under unstable stratification conditions. Furthermore, the residual pollutant in the near-ground zone decreases as Rb increases in the noon, afternoon, and night events under all studied stability conditions.
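
    For orientation, one common form of the bulk Richardson number used to characterize thermal stratification in street-canyon studies is sketched below. The exact definition, reference height, and temperatures here are assumptions, since the record does not reproduce the paper's formulation; all values are illustrative.

```python
# Sketch of a bulk Richardson number: ratio of thermal buoyancy to inertial
# forcing across the canyon (one common convention, assumed here).
def bulk_richardson(g, canyon_height, t_air, t_surface, u_ref):
    """Rb < 0: heated surface (unstable); Rb > 0: stable stratification."""
    return g * canyon_height * (t_air - t_surface) / (t_air * u_ref**2)

print(bulk_richardson(g=9.81, canyon_height=20.0, t_air=300.0,
                      t_surface=310.0, u_ref=3.0))   # about -0.73, i.e. unstable
```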

  2. Steady-state heat conduction in quiescent fluids: Incompleteness of the Navier-Stokes-Fourier equations

    NASA Astrophysics Data System (ADS)

    Brenner, Howard

    2011-10-01

    Linear irreversible thermodynamic principles are used to demonstrate, by counterexample, the existence of a fundamental incompleteness in the basic pre-constitutive mass, momentum, and energy equations governing fluid mechanics and transport phenomena in continua. The demonstration is effected by addressing the elementary case of steady-state heat conduction (and transport processes in general) occurring in quiescent fluids. The counterexample questions the universal assumption of equality of the four physically different velocities entering into the basic pre-constitutive mass, momentum, and energy conservation equations. Explicitly, it is argued that such equality is an implicit constitutive assumption rather than an established empirical fact of unquestioned authority. Such equality, if indeed true, would require formal proof of its validity, currently absent from the literature. In fact, our counterexample shows the assumption of equality to be false. As the current set of pre-constitutive conservation equations appearing in textbooks are regarded as applicable both to continua and noncontinua (e.g., rarefied gases), our elementary counterexample negating belief in the equality of all four velocities impacts on all aspects of fluid mechanics and transport processes, continua and noncontinua alike.

  3. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry

    PubMed Central

    Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.

    2016-01-01

    The electric characteristics of electric-double layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and from the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element (CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient). In particular, we show, with the derived (Rs, Q, α)-based expressions, that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs. PMID:27934904

  4. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry

    NASA Astrophysics Data System (ADS)

    Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.

    2016-12-01

    The electric characteristics of electric-double layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and from the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element (CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient). In particular, we show, with the derived (Rs, Q, α)-based expressions, that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs.

  5. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry.

    PubMed

    Allagui, Anis; Freeborn, Todd J; Elwakil, Ahmed S; Maundy, Brent J

    2016-12-09

    The electric characteristics of electric-double layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and from the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics [corrected]. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element (CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient). In particular, we show, with the derived (Rs, Q, α)-based expressions, that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs.
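
    For readers who want the shape of the fractional-order response discussed in these three records, the following is a minimal sketch under the stated Rs + CPE assumption. The closed-form constant-current step response v(t) = I·Rs + I·t^α / (Q·Γ(α+1)) follows from Laplace inversion of the CPE impedance 1/(Q·s^α); the parameter values below are illustrative, not the papers' measurements.

```python
import numpy as np
from scipy.special import gamma

# Sketch of the Rs + CPE constant-current step response; alpha = 1 recovers the
# ideal RsC ramp (with Q playing the role of C). All values are assumed.
def cpe_voltage(t, current, r_s, q_pseudo, alpha):
    return current * r_s + current * t**alpha / (q_pseudo * gamma(alpha + 1.0))

t = np.linspace(0.0, 60.0, 601)           # s
v_ideal = cpe_voltage(t, current=0.1, r_s=0.05, q_pseudo=10.0, alpha=1.0)
v_cpe = cpe_voltage(t, current=0.1, r_s=0.05, q_pseudo=10.0, alpha=0.85)
# The alpha < 1 curve bends, which is the nonlinearity that biases capacitance
# estimates when an ideal RsC model is assumed.
```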

  6. A Study of Crowd Ability and its Influence on Crowdsourced Evaluation of Design Concepts

    DTIC Science & Technology

    2014-05-01

    ... identifies the experts from the crowd, under the assumptions that (1) experts do exist and (2) only experts have consistent evaluations. These assumptions ... for design evaluation tasks. Keywords: crowdsourcing, design evaluation, sparse evaluation ability, machine learning. ... intelligence" of a much larger crowd of people with diverse backgrounds [1]. Crowdsourced evaluation, or the delegation of an evaluation task to a ...

  7. A utility-theoretic model for QALYs and willingness to pay.

    PubMed

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALY) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY-model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY-weights depend solely on aspects of health-related quality of life, wealth-standardized QALY-weights might vary with the wealth level in the presented extension of the original QALY-model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which also can be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY-gain as well as on the disease burden, when health influences the marginal utility of wealth. Further research seems to be indicated on this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.

  8. Unsaturation of vapour pressure inside leaves of two conifer species

    DOE PAGES

    Cernusak, Lucas A.; Ubierna, Nerea; Jenkins, Michael W.; ...

    2018-05-16

    Stomatal conductance (gs) impacts both photosynthesis and transpiration, and is therefore fundamental to the global carbon and water cycles, food production, and ecosystem services. Mathematical models provide the primary means of analysing this important leaf gas exchange parameter. A nearly universal assumption in such models is that the vapour pressure inside leaves (ei) remains saturated under all conditions. The validity of this assumption has not been well tested, because so far ei cannot be measured directly. Here, we test this assumption using a novel technique, based on coupled measurements of leaf gas exchange and the stable isotope compositions of CO2 and water vapour passing over the leaf. We applied this technique to mature individuals of two semiarid conifer species. In both species, ei routinely dropped below saturation when leaves were exposed to moderate to high air vapour pressure deficits. Typical values of relative humidity in the intercellular air spaces were as low as 0.9 in Juniperus monosperma and 0.8 in Pinus edulis. These departures of ei from saturation caused significant biases in calculations of gs and the intercellular CO2 concentration. Thus, our results refute the longstanding assumption of saturated vapour pressure in plant leaves under all conditions.
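
    The bias described above can be sketched with the simplified gas-exchange relation E = gs(ei − ea)/P and a Tetens saturation formula; both are standard simplifications taken as assumptions here (the paper's full calculations are more involved), and all numbers are illustrative, with the internal humidity chosen within the 0.8-0.9 range reported.

```python
import numpy as np

# If gs is back-calculated from measured transpiration E assuming ei is saturated,
# but the true internal relative humidity is h_i < 1, gs is underestimated.
def e_sat(t_leaf_c):
    """Saturation vapour pressure (kPa), Tetens approximation (an assumption)."""
    return 0.6108 * np.exp(17.27 * t_leaf_c / (t_leaf_c + 237.3))

P, t_leaf, e_air, E = 101.3, 30.0, 1.2, 0.003   # kPa, degC, kPa, mol m-2 s-1
gs_assumed = E * P / (e_sat(t_leaf) - e_air)            # ei taken as saturated
gs_actual = E * P / (0.85 * e_sat(t_leaf) - e_air)      # h_i = 0.85 inside the leaf
print(gs_assumed, gs_actual)   # the assumed-saturation value is noticeably lower
```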

  9. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it can not be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could impact on the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently the impact is likely to be minor. PMID:23935949
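
    One practical way to probe embeddability is sketched below, under the assumption that the principal matrix logarithm is the relevant branch; this is a common heuristic rather than the full analysis used in the study, and the matrices are illustrative.

```python
import numpy as np
from scipy.linalg import logm, expm

# A transition matrix P is embeddable if P = expm(Q) for some rate matrix Q
# (non-negative off-diagonals, rows summing to zero). Test the principal log.
def is_embeddable(P, tol=1e-8):
    Q = logm(P)
    if np.max(np.abs(Q.imag)) > tol:          # complex log: fails this check
        return False
    Q = Q.real
    off_diag_ok = np.all(Q - np.diag(np.diag(Q)) >= -tol)
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
    return bool(off_diag_ok and rows_ok)

P_good = expm(np.array([[-0.3, 0.2, 0.1],
                        [0.1, -0.2, 0.1],
                        [0.2, 0.2, -0.4]]))    # built from a rate matrix
P_bad = np.array([[0.0, 1.0], [1.0, 0.0]])     # periodic chain, not embeddable
print(is_embeddable(P_good), is_embeddable(P_bad))
```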

  10. Maths for medications: an analytical exemplar of the social organization of nurses' knowledge.

    PubMed

    Dyjur, Louise; Rankin, Janet; Lane, Annette

    2011-07-01

    Within the literature that circulates in the discourses organizing nursing education, there are embedded assumptions that link student performance on maths examinations to safe medication practices. These assumptions are rooted historically. They fundamentally shape educational approaches assumed to support safe practice and protect patients from nursing error. Here, we apply an institutional ethnographic lens to the body of literature that both supports and critiques the emphasis on numeracy skills and medication safety. We use this form of inquiry to open an alternate interrogation of these practices. Our main argument posits that numeracy skills serve as powerful distraction for both students and teachers. We suggest that they operate under specious claims of safety and objectivity. As nurse educators, we are captured by taken-for-granted understandings of practices intended to produce safety. We contend that some of these practices are not congruent with how competency actually unfolds in the everyday world of nursing practice. Ontologically grounded in the materiality of work processes, we suggest there is a serious disjuncture between educators' assessment and evaluation work where it links into broad nursing assumptions about medication work. These underlying assumptions and work processes produce contradictory tensions for students, teachers and nurses in direct practice. © 2011 Blackwell Publishing Ltd.

  11. Unsaturation of vapour pressure inside leaves of two conifer species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cernusak, Lucas A.; Ubierna, Nerea; Jenkins, Michael W.

    Stomatal conductance (g s) impacts both photosynthesis and transpiration, and is therefore fundamental to the global carbon and water cycles, food production, and ecosystem services. Mathematical models provide the primary means of analysing this important leaf gas exchange parameter. A nearly universal assumption in such models is that the vapour pressure inside leaves (e i) remains saturated under all conditions. The validity of this assumption has not been well tested, because so far e i cannot be measured directly. Here, we test this assumption using a novel technique, based on coupled measurements of leaf gas exchange and the stable isotopemore » compositions of CO 2 and water vapour passing over the leaf. We applied this technique to mature individuals of two semiarid conifer species. In both species, e i routinely dropped below saturation when leaves were exposed to moderate to high air vapour pressure deficits. Typical values of relative humidity in the intercellular air spaces were as low 0.9 in Juniperus monosperma and 0.8 in Pinus edulis. These departures of e i from saturation caused significant biases in calculations of g s and the intercellular CO 2 concentration. Thus, our results refute the longstanding assumption of saturated vapour pressure in plant leaves under all conditions.« less

  12. Latent class instrumental variables: A clinical and biostatistical perspective

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.

    2015-01-01

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
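
    The headline estimator is simple enough to sketch. Under the exclusion restriction and monotonicity assumptions, the complier average causal effect is the ratio of two intention-to-treat contrasts; the simulated data below are purely illustrative.

```python
import numpy as np

# Latent class IV (complier average causal effect) sketch:
# CACE = (effect of assignment on outcome) / (effect of assignment on receipt).
rng = np.random.default_rng(2)
n = 10_000
z = rng.integers(0, 2, n)                     # randomized assignment
complier = rng.random(n) < 0.6                # latent compliance class
d = np.where(complier, z, 0)                  # treatment actually received
y = 1.0 * d + rng.normal(0.0, 1.0, n)         # true effect of receipt = 1.0

itt_y = y[z == 1].mean() - y[z == 0].mean()   # intention-to-treat effect on outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()   # compliance difference
print(itt_y / itt_d)                          # close to 1.0
```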

  13. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
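
    For readers unfamiliar with the MacCallum, Browne, and Sugawara (1996) procedure referenced above, here is a hedged sketch of the power computation it implies: RMSEA values are mapped to noncentrality parameters and power is a noncentral chi-square tail probability. The sample size and degrees of freedom are illustrative, not those of the study.

```python
from scipy.stats import ncx2, chi2

# lambda = (N - 1) * df * rmsea**2; compare a null RMSEA against an alternative.
def rmsea_power(n, df, rmsea_null, rmsea_alt, alpha=0.05):
    lam0 = (n - 1) * df * rmsea_null**2
    lam1 = (n - 1) * df * rmsea_alt**2
    if rmsea_null == 0.0:
        crit = chi2.ppf(1.0 - alpha, df)         # test of exact fit
    else:
        crit = ncx2.ppf(1.0 - alpha, df, lam0)   # test of close fit
    return ncx2.sf(crit, df, lam1) if lam1 > 0 else chi2.sf(crit, df)

print(rmsea_power(n=200, df=50, rmsea_null=0.05, rmsea_alt=0.08))  # close fit
print(rmsea_power(n=200, df=50, rmsea_null=0.0, rmsea_alt=0.05))   # exact fit
```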

  14. A quantitative evaluation of a qualitative risk assessment framework: Examining the assumptions and predictions of the Productivity Susceptibility Analysis (PSA)

    PubMed Central

    2018-01-01

    Qualitative risk assessment frameworks, such as the Productivity Susceptibility Analysis (PSA), have been developed to rapidly evaluate the risks of fishing to marine populations and prioritize management and research among species. Despite being applied to over 1,000 fish populations, and an ongoing debate about the most appropriate method to convert biological and fishery characteristics into an overall measure of risk, the assumptions and predictive capacity of these approaches have not been evaluated. Several interpretations of the PSA were mapped to a conventional age-structured fisheries dynamics model to evaluate the performance of the approach under a range of assumptions regarding exploitation rates and measures of biological risk. The results demonstrate that the underlying assumptions of these qualitative risk-based approaches are inappropriate, and the expected performance is poor for a wide range of conditions. The information required to score a fishery using a PSA-type approach is comparable to that required to populate an operating model and evaluating the population dynamics within a simulation framework. In addition to providing a more credible characterization of complex system dynamics, the operating model approach is transparent, reproducible and can evaluate alternative management strategies over a range of plausible hypotheses for the system. PMID:29856869

  15. Impact and cost-effectiveness of chlamydia testing in Scotland: a mathematical modelling study.

    PubMed

    Looker, Katharine J; Wallace, Lesley A; Turner, Katherine M E

    2015-01-15

    Chlamydia is the most common sexually transmitted bacterial infection in Scotland, and is associated with potentially serious reproductive outcomes, including pelvic inflammatory disease (PID) and tubal factor infertility (TFI) in women. Chlamydia testing in Scotland is currently targeted towards symptomatic individuals, individuals at high risk of existing undetected infection, and young people. The cost-effectiveness of testing and treatment to prevent PID and TFI in Scotland is uncertain. A compartmental deterministic dynamic model of chlamydia infection in 15-24 year olds in Scotland was developed. The model was used to estimate the impact of a change in testing strategy from baseline (16.8% overall testing coverage; 0.4 partners notified and tested/treated per treated positive index) on PID and TFI cases. Cost-effectiveness calculations informed by best-available estimates of the quality-adjusted life years (QALYs) lost due to PID and TFI were also performed. Increasing overall testing coverage by 50% from baseline to 25.2% is estimated to result in 21% fewer cases in young women each year (PID: 703 fewer; TFI: 88 fewer). A 50% decrease to 8.4% would result in 20% more PID (669 additional) and TFI (84 additional) cases occurring annually. The cost per QALY gained of current testing activities compared to no testing is £40,034, which is above the £20,000-£30,000 cost-effectiveness threshold. However, calculations are hampered by lack of reliable data. Any increase in partner notification from baseline would be cost-effective (incremental cost per QALY gained for a partner notification efficacy of 1 compared to baseline: £5,119), and would increase the cost-effectiveness of current testing strategy compared to no testing, with threshold cost-effectiveness reached at a partner notification efficacy of 1.5. However, there is uncertainty in the extent to which partner notification is currently done, and hence the amount by which it could potentially be increased. Current chlamydia testing strategy in Scotland is not cost-effective under the conservative model assumptions applied. However, with better data enabling some of these assumptions to be relaxed, current coverage could be cost-effective. Meanwhile, increasing partner notification efficacy on its own would be a cost-effective way of preventing PID and TFI from current strategy.
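
    The cost-effectiveness figures quoted above come down to an incremental cost-effectiveness ratio compared against a willingness-to-pay threshold; a minimal sketch with invented numbers (not the study's model outputs) follows.

```python
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old), compared to a threshold.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=2_000_000, cost_old=1_500_000, qaly_new=1_025, qaly_old=1_000)
threshold = 20_000          # lower end of the GBP 20,000-30,000 per QALY range
print(ratio, ratio <= threshold)   # 20000.0 True: borderline cost-effective
```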

  16. Obesity--a neuropsychological disease? Systematic review and neuropsychological model.

    PubMed

    Jauch-Chara, Kamila; Oltmanns, Kerstin M

    2014-03-01

    Obesity is a global epidemic associated with a series of secondary complications and comorbid diseases such as diabetes mellitus, cardiovascular disease, sleep-breathing disorders, and certain forms of cancer. On the surface, it seems that obesity is simply the phenotypic manifestation of deliberately flawed food intake behavior with the consequence of dysbalanced energy uptake and expenditure and can easily be reversed by caloric restriction and exercise. Notwithstanding this assumption, the disappointing outcomes of long-term clinical studies based on this assumption show that the problem is much more complex. Obviously, recent studies render that specific neurocircuits involved in appetite regulation are etiologically integrated in the pathomechanism, suggesting obesity should be regarded as a neurobiological disease rather than the consequence of detrimental food intake habits. Moreover, apart from the physical manifestation of overeating, a growing body of evidence suggests a close relationship with psychological components comprising mood disturbances, altered reward perception and motivation, or addictive behavior. Given that current dietary and pharmacological strategies to overcome the burgeoning threat of the obesity problem are of limited efficacy, bear the risk of adverse side-effects, and in most cases are not curative, new concepts integratively focusing on the fundamental neurobiological and psychological mechanisms underlying overeating are urgently required. This new approach to develop preventive and therapeutic strategies would justify assigning obesity to the spectrum of neuropsychological diseases. Our objective is to give an overview on the current literature that argues for this view and, on the basis of this knowledge, to deduce an integrative model for the development of obesity originating from disturbed neuropsychological functioning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Ring Current-Electromagnetic Ion Cyclotron Waves Coupling

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.

    2005-01-01

    The effect of Electromagnetic Ion Cyclotron (EMIC) waves, generated by ion temperature anisotropy in Earth's ring current (RC), is the best known example of wave-particle interaction in the magnetosphere. Also, there is much controversy over the importance of EMIC waves for RC depletion. Under certain conditions, relativistic electrons, with energies ≥1 MeV, can be removed from the outer radiation belt (RB) by EMIC wave scattering during a magnetic storm. That is why the calculation of EMIC waves must be a critical part of space weather studies. The new RC model that we have developed and present for the first time has several new features that we have combined in a single model: (a) several lower-frequency cold plasma wave modes are taken into account; (b) ray tracing of these waves has been incorporated in the EMIC wave energy equation; (c) no assumptions regarding the wave spectral shape have been made; (d) no assumptions regarding the shape of the particle distribution have been made to calculate the growth rate; (e) pitch-angle, energy, and mixed diffusion are taken into account together for the first time; (f) the exact loss-cone RC analytical solution has been found and coupled with a bounce-averaged numerical solution of the kinetic equation; (g) the saturation of EMIC waves due to their modulational instability and lower hybrid wave (LHW) generation is included as an additional factor that contributes to this process; and (h) hot ions are included in the real part of the dielectric permittivity tensor. We compare our theoretical results with different EMIC wave models as well as RC experimental data.

  18. Embedded binaries and their dense cores

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Stahler, Steven W.

    2017-08-01

    We explore the relationship between young, embedded binaries and their parent cores, using observations within the Perseus Molecular Cloud. We combine recently published Very Large Array observations of young stars with core properties obtained from Submillimetre Common-User Bolometer Array 2 observations at 850 μm. Most embedded binary systems are found towards the centres of their parent cores, although several systems have components closer to the core edge. Wide binaries, defined as those systems with physical separations greater than 500 au, show a tendency to be aligned with the long axes of their parent cores, whereas tight binaries show no preferred orientation. We test a number of simple, evolutionary models to account for the observed populations of Class 0 and I sources, both single and binary. In the model that best explains the observations, all stars form initially as wide binaries. These binaries either break up into separate stars or else shrink into tighter orbits. Under the assumption that both stars remain embedded following binary break-up, we find a total star formation rate of 168 Myr-1. Alternatively, one star may be ejected from the dense core due to binary break-up. This latter assumption results in a star formation rate of 247 Myr-1. Both production rates are in satisfactory agreement with current estimates from other studies of Perseus. Future observations should be able to distinguish between these two possibilities. If our model continues to provide a good fit to other star-forming regions, then the mass fraction of dense cores that becomes stars is double what is currently believed.

  19. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    NASA Astrophysics Data System (ADS)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to the overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in the UGB under the nonstationary model condition is found to reduce in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them for future streamflow projections and to segregate the contribution of various sources to the uncertainty.
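
    As a toy illustration of the variance-partitioning step, the sketch below applies a two-way ANOVA decomposition to a hypothetical matrix of projected streamflow means (GCM x emission scenario). The factor levels and numbers are invented; the study's full segregation additionally covers land use scenarios, the stationarity assumption, and internal variability.

```python
# Minimal sketch of partitioning projection uncertainty with a two-way ANOVA
# (GCM x emission scenario); the numbers are hypothetical, not the UGB results.
import numpy as np

# flow[g, s]: mean projected streamflow for GCM g under emission scenario s
flow = np.array([[410., 395., 380.],
                 [450., 430., 415.],
                 [400., 390., 370.]])

grand = flow.mean()
ss_gcm = flow.shape[1] * ((flow.mean(axis=1) - grand) ** 2).sum()
ss_scen = flow.shape[0] * ((flow.mean(axis=0) - grand) ** 2).sum()
ss_total = ((flow - grand) ** 2).sum()
ss_inter = ss_total - ss_gcm - ss_scen   # interaction/residual term

for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction", ss_inter)]:
    print(f"{name}: {100 * ss / ss_total:.1f}% of total variance")
```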

  20. Data needs for X-ray astronomy satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, T.

    I review the current status of atomic data for X-ray astronomy satellites. This includes some of the astrophysical issues which can be addressed, current modeling and analysis techniques, computational tools, the limitations imposed by currently available atomic data, and the validity of standard assumptions. I also discuss the future: challenges associated with future missions and goals for atomic data collection.

  1. Cost effectiveness of self-monitoring of blood glucose (SMBG) for patients with type 2 diabetes and not on insulin: impact of modelling assumptions on recent Canadian findings.

    PubMed

    Tunis, Sandra L

    2011-11-01

    Canadian patients, healthcare providers and payers share interest in assessing the value of self-monitoring of blood glucose (SMBG) for individuals with type 2 diabetes but not on insulin. Using the UKPDS (UK Prospective Diabetes Study) model, the Canadian Optimal Prescribing and Utilization Service (COMPUS) conducted an SMBG cost-effectiveness analysis. Based on the results, COMPUS does not recommend routine strip use for most adults with type 2 diabetes who are not on insulin. Cost-effectiveness studies require many assumptions regarding cohort, clinical effect, complication costs, etc. The COMPUS evaluation included several conservative assumptions that negatively impacted SMBG cost effectiveness. Current objectives were to (i) review key, impactful COMPUS assumptions; (ii) illustrate how alternative inputs can lead to more favourable results for SMBG cost effectiveness; and (iii) provide recommendations for assessing its long-term value. A summary of COMPUS methods and results was followed by a review of assumptions (for trial-based glycosylated haemoglobin [HbA(1c)] effect, patient characteristics, costs, simulation pathway) and their potential impact. The UKPDS model was used for a 40-year cost-effectiveness analysis of SMBG (1.29 strips per day) versus no SMBG in the Canadian payer setting. COMPUS assumptions for patient characteristics (e.g. HbA(1c) 8.4%), SMBG HbA(1c) advantage (-0.25%) and costs were retained. As with the COMPUS analysis, UKPDS HbA(1c) decay curves were incorporated into SMBG and no-SMBG pathways. An important difference was that SMBG HbA(1c) benefits in the current study could extend beyond the initial simulation period. Sensitivity analyses examined SMBG HbA(1c) advantage, adherence, complication history and cost inputs. Outcomes (discounted at 5%) included QALYs, complication rates, total costs (year 2008 values) and incremental cost-effectiveness ratios (ICERs). The base-case ICER was $Can63 664 per QALY gained; approximately 56% of the COMPUS base-case ICER. SMBG was associated with modest risk reductions (0.10-0.70%) for six of seven complications. Assuming an SMBG advantage of -0.30% decreased the current base-case ICER by over $Can10 000 per QALY gained. With adherence of 66% and 87%, ICERs were (respectively) $Can39 231 and $Can54 349 per QALY gained. Incorporating a more representative complication history and 15% complication cost increase resulted in an ICER of $Can49 743 per QALY gained. These results underscore the importance of modelling assumptions regarding the duration of HbA(1c) effect. The current study shares several COMPUS limitations relating to the UKPDS model being designed for newly diagnosed patients, and to randomized controlled trial monitoring rates. Neither study explicitly examined the impact of varying the duration of initial HbA(1c) effects, or of medication or other treatment changes. Because the COMPUS research will potentially influence clinical practice and reimbursement policy in Canada, understanding the impact of assumptions on cost-effectiveness results seems especially important. Demonstrating that COMPUS ICERs were greatly reduced through variations in a small number of inputs may encourage additional clinical research designed to measure SMBG effects within the context of optimal disease management. It may also encourage additional economic evaluations that incorporate lessons learned and best practices for assessing the overall value of SMBG for type 2 diabetes in insulin-naive patients.
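
    The sketch below illustrates the arithmetic of an incremental cost-effectiveness ratio together with a one-way sensitivity sweep over the assumed HbA1c advantage. The incremental cost and the linear QALY-per-HbA1c scaling are invented placeholders, not outputs of the UKPDS simulation used in the study.

```python
# Minimal sketch of an ICER with a one-way sensitivity sweep over the assumed
# HbA1c advantage of SMBG. The incremental cost and the QALY-per-HbA1c scaling
# are hypothetical assumptions, not UKPDS model outputs.
def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

base_delta_cost = 1800.0        # hypothetical incremental lifetime cost of SMBG ($Can)
qaly_per_hba1c_point = 0.12     # hypothetical QALY gain per 1% absolute HbA1c reduction

for hba1c_advantage in (0.20, 0.25, 0.30):
    delta_qaly = hba1c_advantage * qaly_per_hba1c_point
    print(f"HbA1c advantage -{hba1c_advantage:.2f}%: "
          f"ICER = $Can{icer(base_delta_cost, delta_qaly):,.0f} per QALY gained")
```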

  2. Is animal cruelty a "red flag" for family violence? Investigating co-occurring violence toward children, partners, and pets.

    PubMed

    Degue, Sarah; Dilillo, David

    2009-06-01

    Cross-reporting legislation, which permits child and animal welfare investigators to refer families with substantiated child maltreatment or animal cruelty for investigation by parallel agencies, has recently been adopted in several U.S. jurisdictions. The current study sheds light on the underlying assumption of these policies: that animal cruelty and family violence commonly co-occur. Exposure to family violence and animal cruelty is retrospectively assessed using a sample of 860 college students. Results suggest that animal abuse may be a red flag indicative of family violence in the home. Specifically, about 60% of participants who have witnessed or perpetrated animal cruelty as a child also report experiences with child maltreatment or domestic violence. Differential patterns of association were revealed between childhood victimization experiences and the type of animal cruelty exposure reported. This study extends current knowledge of the links between animal- and human-directed violence and provides initial support for the premise of cross-reporting legislation.

  3. Transforming medical imaging applications into collaborative PACS-based telemedical systems

    NASA Astrophysics Data System (ADS)

    Maani, Rouzbeh; Camorlinga, Sergio; Arnason, Neil

    2011-03-01

    Telemedical systems are not practical for use in a clinical workflow unless they are able to communicate with the Picture Archiving and Communications System (PACS). On the other hand, there are many medical imaging applications that are not developed as telemedical systems. Some medical imaging applications do not support collaboration and some do not communicate with the PACS and therefore limit their usability in clinical workflows. This paper presents a general architecture based on a three-tier architecture model. The architecture and the components developed within it, transform medical imaging applications into collaborative PACS-based telemedical systems. As a result, current medical imaging applications that are not telemedical, not supporting collaboration, and not communicating with PACS, can be enhanced to support collaboration among a group of physicians, be accessed remotely, and be clinically useful. The main advantage of the proposed architecture is that it does not impose any modification to the current medical imaging applications and does not make any assumptions about the underlying architecture or operating system.

  4. Converging technologies: a critical analysis of cognitive enhancement for public policy application.

    PubMed

    Makridis, Christos

    2013-09-01

    This paper investigates cognitive enhancement, specifically biological cognitive enhancement (BCE), as a converging technology, and its implications for public policy. With an increasing rate of technological advancements, the legal, social, and economic frameworks lag behind the scientific advancements that they support. This lag poses significant challenges for policymakers if it is not dealt with sufficiently within the right analytical context. Therefore, the driving question behind this paper is, "What contingencies inform the advancement of biological cognitive enhancement, and what would society look like under this set of assumptions?" The paper is divided into five components: (1) defining the current policy context for BCEs, (2) analyzing the current social and economic outcomes to BCEs, (3) investigating the context of cost-benefit arguments in relation to BCEs, (4) proposing an analytical model for evaluating contingencies for BCE development, and (5) evaluating a simulated policy, social, technological, and economic context given the contingencies. In order to manage the risk and uncertainty inherent in technological change, BCEs' drivers must be scrutinized and evaluated.

  5. Overview of current TEFs as it relates to current PCB exposures: What is needed?

    EPA Science Inventory

    The toxic equivalency factor (TEF) approach is one of the ways to assess the risk associated with exposure to complex mixture of polychlorinated biphenyls (PCBs) and structurally related chemicals. This method is based on mode of action with the assumption that all chemicals in ...

  6. Hidden Curriculum as One of Current Issue of Curriculum

    ERIC Educational Resources Information Center

    Alsubaie, Merfat Ayesh

    2015-01-01

    There are several issues in the education system, especially in the curriculum field that affect education. Hidden curriculum is one of current controversial curriculum issues. Many hidden curricular issues are the result of assumptions and expectations that are not formally communicated, established, or conveyed within the learning environment.…

  7. Verification of GCM-generated regional seasonal precipitation for current climate and of statistical downscaling estimates under changing climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busuioc, A.; Storch, H. von; Schnur, R.

    Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2xCO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
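
    A minimal sketch of the calibration step of such a regression/CCA downscaling model is given below, using random placeholder arrays in place of the observed SLP and Romanian station precipitation; in a scenario application the fitted link would be applied to GCM-simulated SLP fields under the stationarity assumption discussed above.

```python
# Minimal sketch of CCA-based statistical downscaling; the arrays are random
# placeholders standing in for seasonal-mean SLP anomalies and station
# precipitation, not the data used in the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
slp = rng.normal(size=(40, 20))       # 40 winters x 20 large-scale SLP grid points
precip = rng.normal(size=(40, 8))     # 40 winters x 8 stations

cca = CCA(n_components=3)
cca.fit(slp, precip)                  # calibrate the empirical large-scale/local link
precip_hat = cca.predict(slp)         # reconstruct station precipitation from SLP
print(precip_hat.shape)               # (40, 8)

# For a 2xCO2 scenario one would call cca.predict(gcm_slp_2xco2), relying on the
# assumption that the calibrated link remains valid under a changed climate.
```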

  8. Economic Implications of Widespread Expansion of Frozen Section Margin Analysis to Guide Surgical Resection in Women With Breast Cancer Undergoing Breast-Conserving Surgery.

    PubMed

    Boughey, Judy C; Keeney, Gary L; Radensky, Paul; Song, Christine P; Habermann, Elizabeth B

    2016-04-01

    In the current health care environment, cost effectiveness is critically important in policy setting and care of patients. This study performed a health economic analysis to assess the implications to providers and payers of expanding the use of frozen section margin analysis to minimize reoperations for patients undergoing breast cancer lumpectomy. A health care economic impact model was built to assess annual costs associated with breast lumpectomy procedures with and without frozen section margin analysis to avoid reoperation. If frozen section margin analysis is used in 20% of breast lumpectomies and under a baseline assumption that 35% of initial lumpectomies without frozen section analysis result in reoperations, the potential annual cost savings are $18.2 million to payers and $0.4 million to providers. Under the same baseline assumption, if 100% of all health care facilities adopted the use of frozen section margin analysis for breast lumpectomy procedures, the potential annual cost savings are $90.9 million to payers and $1.8 million to providers. On the basis of 10,000 simulations, use of intraoperative frozen section margin analysis yields cost saving for payers and is cost neutral to slightly cost saving for providers. This economic analysis indicates that widespread use of frozen section margin evaluation intraoperatively to guide surgical resection in breast lumpectomy cases and minimize reoperations would be beneficial to cost savings not only for the patient but also for payers and, in most cases, for providers. Copyright © 2016 by American Society of Clinical Oncology.
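
    The underlying arithmetic of the payer-savings estimate can be sketched as below; every input (case volume, adoption share, reoperation rates, unit costs) is a hypothetical stand-in rather than a value from the published model.

```python
# Minimal sketch of the reoperation-avoidance cost arithmetic; all inputs are
# hypothetical placeholders, not the published model's volumes or unit costs.
annual_lumpectomies = 170_000
adoption_share = 0.20              # fraction of lumpectomies using frozen-section margins
reop_rate_without = 0.35           # baseline reoperation rate without frozen section
reop_rate_with = 0.05              # assumed residual reoperation rate with frozen section
payer_cost_per_reop = 3_000        # hypothetical payer cost of one reoperation
frozen_section_cost = 500          # hypothetical added cost of the intraoperative analysis

cases = annual_lumpectomies * adoption_share
reoperations_avoided = cases * (reop_rate_without - reop_rate_with)
net_payer_savings = reoperations_avoided * payer_cost_per_reop - cases * frozen_section_cost
print(f"net annual payer savings: ${net_payer_savings / 1e6:.1f} million")
```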

  9. Why would we use the Sediment Isotope Tomography (SIT) model to establish a 210Pb-based chronology in recent-sediment cores?

    PubMed

    Abril Hernández, José-María

    2015-05-01

    After half a century, the use of unsupported (210)Pb ((210)Pbexc) is still far from being a well-established dating tool for recent sediments with widespread applicability. Recent results from the statistical analysis of time series of fluxes, mass sediment accumulation rates (SAR), and initial activities, derived from varved sediments, place serious constraints on the assumption of constant fluxes, which is widely used in dating models. The Sediment Isotope Tomography (SIT) model, under the assumption of no post-depositional redistribution, is used for dating recent sediments in scenarios in which fluxes and SAR are uncorrelated and both vary with time. Using a simple graphical analysis, this paper shows that under the above assumptions, any given (210)Pbexc profile, even with the restriction of a discrete set of reference points, is compatible with an infinite number of chronological lines, thus generating an infinite number of mathematically exact solutions for histories of initial activity concentrations, SAR and fluxes onto the sediment-water interface (SWI), with the last two ranging from zero up to infinity. In particular, SIT results, without additional assumptions, cannot contain any statistically significant difference with respect to the exact solutions consisting of intervals of constant SAR or constant fluxes (both being consistent with the reference points). Therefore, there is no benefit in its use as a dating tool without the explicit introduction of additional restrictive assumptions about fluxes, SAR and/or their interrelationship. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Economics in "Global Health 2035": a sensitivity analysis of the value of a life year estimates.

    PubMed

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-06-01

    In "Global health 2035: a world converging within a generation," The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  11. The vulnerabilities of teenage mothers: challenging prevailing assumptions.

    PubMed

    SmithBattle, L

    2000-09-01

    The belief that early childbearing leads to poverty permeates our collective understanding. However, recent findings reveal that for many teens, mothering makes sense of the limited life options that precede their pregnancies. The author challenges several assumptions about teenage mothers and offers an alternative to the modern view of the unencumbered self that drives current responses to teen childbearing. This alternative perspective entails a situated view of the self and a broader notion of parenting and citizenship that supports teen mothers and affirms our mutual interdependence.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauböck, Michi; Psaltis, Dimitrios; Özel, Feryal, E-mail: mbaubock@email.arizona.edu

    We calculate the effects of spot size on pulse profiles of moderately rotating neutron stars. Specifically, we quantify the bias introduced in radius measurements from the common assumption that spots are infinitesimally small. We find that this assumption is reasonable for spots smaller than 10°–18° and leads to errors that are ≤10% in the radius measurement, depending on the location of the spot and the inclination of the observer. We consider the implications of our results for neutron star radius measurements with the upcoming and planned X-ray missions NICER and LOFT. We calculate the expected spot size for different classes of sources and investigate the circumstances under which the assumption of a small spot is justified.

  13. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of the dynamic dependence of a multivariate signal. Commonly used approaches are often based on a locality assumption (e.g., a sliding window), which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content, and 2) a sparse signal representation, which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-state fMRI data sets highlight the potential of these approaches.
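
    The locality assumption under discussion is easiest to see in the standard sliding-window estimator, sketched below on synthetic multivariate signals; the wavelet-space and sparse-representation alternatives described in the paper replace this fixed window.

```python
# Minimal sketch of sliding-window functional connectivity versus a single
# static estimate; the signals are synthetic, not resting-state fMRI data.
import numpy as np

rng = np.random.default_rng(1)
ts = rng.normal(size=(300, 5))        # 300 time points, 5 regions

window, step = 50, 10
dyn_fc = []
for start in range(0, ts.shape[0] - window + 1, step):
    seg = ts[start:start + window]
    dyn_fc.append(np.corrcoef(seg, rowvar=False))   # one connectivity matrix per window
dyn_fc = np.array(dyn_fc)

static_fc = np.corrcoef(ts, rowvar=False)
print(dyn_fc.shape, static_fc.shape)   # (26, 5, 5) and (5, 5)
```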

  14. What do we know about the role and regulation of stored non-structural carbon compounds in trees?

    NASA Astrophysics Data System (ADS)

    Sala, A.; Martinez-Vilalta, J.; Lloret, F.

    2012-12-01

    Despite the critical role of forests on the global C cycle and recent increases in drought-induced forest mortality, remarkable knowledge gaps exist to accurately predict tree growth and survival under climate change. In particular, storage of non-structural carbon compounds (NSCC) is thought to be critical for tree survival under drought but its regulation and function is the least understood of the tree's C budget components. Our current understanding of the role and regulation of stored NSCC relies on several assumptions. First, stored NSCC is generally assumed to be a passive buffer between source and sink demand for growth and respiration and, therefore, is an integrator of the tree C balance. Second, most process-based models commonly assume that C availability drives growth and ignore storage and environmental regulation of sink activity. Third, trees under C deficits are assumed to rely on stored C until normal conditions are restored or reserves are exhausted, whichever comes first. Implicit is this is that stored NSCC increases survival under drought, and that access to stored NSCC is unlimited. For the most part, these assumptions have not been experimentally tested, and increasing evidence suggests that some of them are not necessarily correct. Here we assess the validity of some of the assumptions above from a review of the published data. Several studies so far are consistent with the notion that stored NSCC serve as a passive buffer between C assimilation and C demand for growth and respiration. In contrast, other studies indicate that C may be partitioned to storage at the expense of growth. In any case, unequivocal evidence of whether and when C is or is not partitioned to storage at the expense of growth in woody plants is lacking, leaving a critical void in our knowledge. Many studies in woody plants indicate that growth is more sensitive to water availability than photosynthesis, and that NSCC accumulate as a result. This indicates that growth is not solely driven by C assimilation as most process-based models currently assume, and that sink activity is directly affected by environmental conditions. These results also suggest that fluctuations in NSCC storage also arise as a passive response to imbalances between C assimilation and growth. Differences in relative sensitivities of growth and photosynthesis to N availability are much less clear than those to water availability. Data also suggests that woody plants rarely fully deplete their pool of stored NSCC, tentatively suggesting that they regulate their storage pool to maintain certain minimums that are well above those needed to maintain cell turgor. Most often, studies reporting nearly exhausted storage pools as a consequence of drought also report imminent death of trees. This suggests that most of the stored NSCC pool remains available for use, particularly under extreme conditions. Whether the turning point for death is when C availability is insufficient to sustain cell metabolism or when stored NSCC pools are depleted below minimum thresholds is unclear. In the latter case, the point of no recovery could occur well before their NSCC pools are exhausted, although visual death symptoms may lag behind and occur at the point of NSCC exhaustion.

  15. An analysis and implications of alternative methods of deriving the density (WPL) terms for eddy covariance flux measurements

    Treesearch

    W. J. Massman; J. -P. Tuovinen

    2006-01-01

    We explore some of the underlying assumptions used to derive the density or WPL terms (Webb et al. (1980) Quart J Roy Meteorol Soc 106:85-100) required for estimating the surface exchange fluxes by eddy covariance. As part of this effort we recast the origin of the density terms as an assumption regarding the density fluctuations rather than as a (dry air) flux...

  16. Key rate for calibration robust entanglement based BB84 quantum key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gittsovich, O.; Moroder, T.

    2014-12-04

    We apply the approach of verifying entanglement based solely on knowledge of the dimension of the underlying physical system to the entanglement-based version of the BB84 quantum key distribution protocol. We show that the familiar one-way key rate formula already holds if one assumes that one of the parties is measuring a qubit; no further assumptions about the measurement are needed.
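
    For reference, the "familiar one-way key rate formula" referred to is usually written in its asymptotic form as below; the notation is assumed here rather than taken from the paper.

```latex
\[
  r \;=\; 1 - h(e_x) - h(e_z),
  \qquad
  h(p) = -p\log_2 p - (1-p)\log_2(1-p),
\]
```

    Here e_x and e_z denote the error rates estimated in the two conjugate measurement bases, and h is the binary entropy function.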

  17. A hybrid Dantzig-Wolfe, Benders decomposition and column generation procedure for multiple diet production planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Udomsungworagul, A.; Charnsethikul, P.

    2018-03-01

    This article introduces a methodology for solving large-scale two-phase linear programming problems, with a case study of multiple-time-period animal diet problems under uncertainty in both raw-material nutrient content and finished-product demand. The model adds the assumptions that multiple product formulas may be manufactured in the same time period and that raw-material and finished-product inventory may be held. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the approach in terms of efficiency and effectiveness trade-offs.
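
    The deterministic single-period core of such a diet problem is an ordinary linear program; the decomposition and column-generation machinery of the paper is built around many coupled copies of it. A minimal sketch with invented nutrient data follows.

```python
# Minimal sketch of a single-period diet LP (the deterministic core that the
# decomposition methods build on); nutrients, costs and requirements are made up.
from scipy.optimize import linprog

cost = [0.8, 1.2, 0.5]                       # cost per kg of three raw materials
# nutrient content per kg (rows: protein, energy), written as <= constraints
A_ub = [[-0.30, -0.45, -0.10],               # -protein content <= -requirement
        [-12.0, -14.0, -9.0]]                # -energy  content <= -requirement
b_ub = [-180.0, -11000.0]                    # at least 180 kg protein, 11000 MJ per tonne
A_eq = [[1.0, 1.0, 1.0]]                     # mass balance: ingredients sum to 1000 kg
b_eq = [1000.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)                        # optimal blend and its cost
```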

  18. Recollective performance advantages for implicit memory tasks.

    PubMed

    Sheldon, Signy A M; Moscovitch, Morris

    2010-10-01

    A commonly held assumption is that processes underlying explicit and implicit memory are distinct. Recent evidence, however, suggests that they may interact more than previously believed. Using the remember-know procedure the current study examines the relation between recollection, a process thought to be exclusive to explicit memory, and performance on two implicit memory tasks, lexical decision and word stem completion. We found that, for both implicit tasks, words that were recollected were associated with greater priming effects than were words given a subsequent familiarity rating or words that had been studied but were not recognised (misses). Broadly, our results suggest that non-voluntary processes underlying explicit memory also benefit priming, a measure of implicit memory. More specifically, given that this benefit was due to a particular aspect of explicit memory (recollection), these results are consistent with some strength models of memory and with Moscovitch's (2008) proposal that recollection is a two-stage process, one rapid and unconscious and the other more effortful and conscious.

  19. Modeling of a self-healing process in blast furnace slag cement exposed to accelerated carbonation

    NASA Astrophysics Data System (ADS)

    Zemskov, Serguey V.; Ahmad, Bilal; Copuroglu, Oguzhan; Vermolen, Fred J.

    2013-02-01

    In the current research, a mathematical model for the post-damage improvement of the carbonated blast furnace slag cement (BFSC) exposed to accelerated carbonation is constructed. The study is embedded within the framework of investigating the effect of using lightweight expanded clay aggregate, which is incorporated into the impregnation of the sodium mono-fluorophosphate (Na-MFP) solution. The model of the self-healing process is built under the assumption that the position of the carbonation front changes in time where the rate of diffusion of Na-MFP into the carbonated cement matrix and the reaction rates of the free phosphate and fluorophosphate with the components of the cement are comparable to the speed of the carbonation front under accelerated carbonation conditions. The model is based on an initial-boundary value problem for a system of partial differential equations which is solved using a Galerkin finite element method. The results obtained are discussed and generalized to a three-dimensional case.
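
    A drastically reduced version of the transport-reaction part of such a model is sketched below with an explicit 1D finite-difference scheme; the actual study uses a Galerkin finite element method, a moving carbonation front and coupled species, and all parameter values here are invented.

```python
# Very small 1D finite-difference sketch of coupled diffusion and first-order
# reaction of an impregnation agent in a carbonated layer (explicit scheme; the
# FEM and moving front of the actual model are omitted). Parameters are invented.
import numpy as np

nx, L = 101, 0.01                 # grid points, 1 cm carbonated depth
dx = L / (nx - 1)
D, k = 1e-10, 1e-5                # diffusivity (m^2/s), reaction rate (1/s)
dt = 0.4 * dx**2 / D              # stable explicit time step
c = np.zeros(nx); c[0] = 1.0      # solution held at the exposed surface

for _ in range(20000):
    lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (D * lap - k * c[1:-1])
    c[0], c[-1] = 1.0, c[-2]      # fixed concentration at surface, zero-flux at depth
print(c[::20])                    # concentration profile after ~ 20000*dt seconds
```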

  20. Genome-Wide Association Analysis of Adaptation Using Environmentally Predicted Traits.

    PubMed

    van Heerwaarden, Joost; van Zanten, Martijn; Kruijer, Willem

    2015-10-01

    Current methods for studying the genetic basis of adaptation evaluate genetic associations with ecologically relevant traits or single environmental variables, under the implicit assumption that natural selection imposes correlations between phenotypes, environments and genotypes. In practice, observed trait and environmental data are manifestations of unknown selective forces and are only indirectly associated with adaptive genetic variation. In theory, improved estimation of these forces could enable more powerful detection of loci under selection. Here we present an approach in which we approximate adaptive variation by modeling phenotypes as a function of the environment and using the predicted trait in multivariate and univariate genome-wide association analysis (GWAS). Based on computer simulations and published flowering time data from the model plant Arabidopsis thaliana, we find that environmentally predicted traits lead to higher recovery of functional loci in multivariate GWAS and are more strongly correlated to allele frequencies at adaptive loci than individual environmental variables. Our results provide an example of the use of environmental data to obtain independent and meaningful information on adaptive genetic variation.
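
    The core idea, modelling the phenotype as a function of the environment and then testing markers against the environmentally predicted trait, can be sketched as below on simulated data; this simplified univariate version omits the multivariate GWAS machinery and population-structure correction used in the paper.

```python
# Minimal sketch: regress a phenotype on environmental variables, then test a SNP
# against the environmentally predicted trait. Data are simulated; the real
# analysis is multivariate and corrects for population structure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 500
env = rng.normal(size=(n, 3))                          # e.g. temperature, precipitation, latitude
allele_freq = 1.0 / (1.0 + np.exp(-env[:, 0]))         # allele frequency tracks one environmental axis
snp = rng.binomial(2, allele_freq)                     # genotypes 0/1/2 at a putatively adaptive locus
pheno = env @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), env])
coef, *_ = np.linalg.lstsq(X, pheno, rcond=None)       # step 1: phenotype ~ environment
pred_trait = X @ coef                                  # environmentally predicted trait

res = stats.linregress(snp, pred_trait)                # step 2: GWAS-style test of the SNP
print(f"slope = {res.slope:.3f}, p = {res.pvalue:.2e}")
```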

  1. Extended Huygens-Fresnel principle and optical waves propagation in turbulence: discussion.

    PubMed

    Charnotskii, Mikhail

    2015-07-01

    Extended Huygens-Fresnel principle (EHF) currently is the most common technique used in theoretical studies of the optical propagation in turbulence. A recent review paper [J. Opt. Soc. Am. A 31, 2038 (2014), doi:10.1364/JOSAA.31.002038] cites several dozens of papers that are exclusively based on the EHF principle. We revisit the foundations of the EHF, and show that it is burdened by very restrictive assumptions that make it valid only under weak scintillation conditions. We compare the EHF to the less-restrictive Markov approximation and show that both theories deliver identical results for the second moment of the field, rendering the EHF essentially worthless. For the fourth moment of the field, the EHF principle is accurate under weak scintillation conditions, but is known to provide erroneous results for strong scintillation conditions. In addition, since the EHF does not obey the energy conservation principle, its results cannot be accurate for scintillations of partially coherent beam waves.

  2. The Promise of the Internet of Things in Healthcare: How Hard Is It to Keep?

    PubMed

    Marques, Rita; Gregório, João; Mira Da Silva, Miguel; Lapão, Luís Velez

    2016-01-01

    Internet of Things is starting to be implemented in healthcare. An example is the automated monitoring systems that are currently being used to provide healthcare workers with feedback regarding their hand hygiene compliance. These solutions seem effective in promoting healthcare workers self-awareness and action regarding their hand hygiene performance, which is still far from desired. Underlying these systems, an indoor positioning component (following Internet of Things paradigm) is used to collect data from the ward regarding healthcare workers' position, which will be later used to make some assumptions about the usage of alcohol-based handrub dispensers and sinks. We found that building such a system under the scope of the healthcare field is not a trivial task and it must be subject to several considerations, which are presented, analyzed and discussed in this paper. The limitations of present Internet of Things technologies are not yet ready to address the demanding field of healthcare.

  3. Ultrafast Chemistry under Nonequilibrium Conditions and the Shock to Deflagration Transition at the Nanoscale

    DOE PAGES

    Wood, Mitchell A.; Cherukara, Mathew J.; Kober, Edward M.; ...

    2015-06-13

    We use molecular dynamics simulations to describe the chemical reactions following shock-induced collapse of cylindrical pores in the high-energy density material RDX. For shocks with particle velocities of 2 km/s we find that the collapse of a 40 nm diameter pore leads to a deflagration wave. Molecular collisions during the collapse lead to ultrafast, multistep chemical reactions that occur under nonequilibrium conditions. We found that exothermic products formed during these first few picoseconds prevent the nanoscale hotspot from quenching. Within 30 ps, a local deflagration wave develops. It propagates at 0.25 km/s and consists of an ultrathin reaction zone of only ~5 nm, thus involving large temperature and composition gradients. Contrary to the assumptions in current models, a static thermal hotspot matching the dynamical one in size and thermodynamic conditions fails to produce a deflagration wave, indicating the importance of nonequilibrium loading in the criticality of nanoscale hot spots. These results provide insight into the initiation of reactive decomposition.

  4. Shortage of cardiothoracic surgeons is likely by 2020.

    PubMed

    Grover, Atul; Gorman, Karyn; Dall, Timothy M; Jonas, Richard; Lytle, Bruce; Shemin, Richard; Wood, Douglas; Kron, Irving

    2009-08-11

    Even as the burden of cardiovascular disease in the United States is increasing as the population grows and ages, the number of active cardiothoracic surgeons has fallen for the first time in 20 years. Meanwhile, the treatment of patients with coronary artery disease continues to evolve amid uncertain changes in technology. This study evaluates current and future requirements for cardiothoracic surgeons in light of decreasing rates of coronary artery bypass grafting procedures. Projections of supply and demand for cardiothoracic surgeons are based on analysis of population, physician office, hospital, and physician data sets to estimate current patterns of healthcare use and delivery. Using a simulation model, we project the future supply of cardiothoracic surgeons under alternative assumptions about the number of new fellows trained each year. Future demand is modeled, taking into account patient demographics, under current and alternative use rates that include the elimination of open revascularization. By 2025, the demand for cardiothoracic surgeons could increase by 46% on the basis of population growth and aging if current healthcare use and service delivery patterns continue. Even with complete elimination of coronary artery bypass grafting, there is a projected shortfall of cardiothoracic surgeons because the active supply is projected to decrease 21% over the same time period as a result of retirement and declining entrants. The United States is facing a shortage of cardiothoracic surgeons within the next 10 years, which could diminish quality of care if non-board-certified physicians expand their role in cardiothoracic surgery or if patients must delay appropriate care because of a shortage of well-trained surgeons.
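
    The projection logic described (a supply cohort evolving through entrants and retirements against demographically driven demand) can be sketched in a few lines; all numbers below are invented placeholders rather than the study's calibrated inputs.

```python
# Minimal sketch of a workforce supply/demand projection of the kind described;
# entrant, attrition and demand-growth numbers are invented placeholders.
supply, demand = 4000.0, 4000.0     # hypothetical active surgeons and required surgeons in year 0
entrants_per_year = 100             # new surgeons entering practice each year
attrition_rate = 0.035              # retirements/exits as a fraction of active supply
demand_growth = 0.019               # annual demand growth from population growth and aging

for year in range(2010, 2026):
    supply += entrants_per_year - attrition_rate * supply
    demand *= 1.0 + demand_growth
print(f"projected gap in 2025: {demand - supply:,.0f} surgeons")
```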

  5. The future prospects of supply and demand for urologists in Korea

    PubMed Central

    2017-01-01

    Purpose The purpose of this study was to forecast the future supply and demand for urologists and to discuss the possible policy implications. Materials and Methods A demographic utilization-based model was used to calculate the total urologist requirements for Korea. Utilization rates for ambulatory and inpatient genitourinary specialty services were estimated according to age, sex, and insurance status. These rates were used to estimate genitourinary specialty-specific total service utilization expressed in patient care minutes for future populations and converted to genitourinary physician requirements by applying per-genitourinary-physician productivity estimates. An in-and-out movement model for urologist supply was used. Results Depending on assumptions about data at each step in the method, the supply of urologic surgeons is expected to exceed the demand by 2025 under the current enrollment rate of specialists (43.5% in 2012) when comparing the results of the projections under demand scenarios 3 and 4. However, if the current enrollment rate persists, the imbalance in supply and demand will not be severe by 2030. The degree of imbalance can be alleviated by 2030 by maintaining the current occupancy rate of urologic residents of 43.5%. Conclusions This study shows that the number of residents needs to be reduced according to the supply and demand for urologic surgeons. Moreover, a policy should be established to maintain the current occupancy rate of residents. The factors affecting the supply and demand of urologic surgeons are complicated. Thus, comprehensive policies encompassing these factors should be established with appropriate solutions. PMID:29124238

  6. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    PubMed

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
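
    The mechanism by which ignoring heterogeneity biases a burden estimate can be sketched with a toy two-stage natural history in which both transitions share the same frailty; all probabilities, weights and durations below are invented and far simpler than the models used in the study.

```python
# Minimal sketch of a DALY computation (YLD + YLL) for a cohort simulated with
# and without individual frailty; all inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
dw, dur, yll = 0.2, 0.5, 30.0   # disability weight, duration (yrs), life-years lost per death

def total_daly(frailty):
    # two-stage natural history: infection -> severe -> death, both scaled by frailty
    p_severe = np.clip(0.05 * frailty, 0, 1)
    p_death_given_severe = np.clip(0.01 * frailty, 0, 1)
    severe = rng.random(n) < p_severe
    deaths = severe & (rng.random(n) < p_death_given_severe)
    return severe.sum() * dw * dur + deaths.sum() * yll

homog = total_daly(np.ones(n))                       # population-averaged probabilities
heterog = total_daly(rng.gamma(2.0, 0.5, size=n))    # gamma frailty with mean 1
print(homog, heterog)   # correlated stage risks make the two estimates differ
```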

  7. Managing technology licensing for stochastic R&D: from the perspective of an enterprise information system

    NASA Astrophysics Data System (ADS)

    Hong, Xianpei; Zhao, Dan; Wang, Zongjun

    2016-10-01

    Enterprise information technology (IT) plays an important role in technology innovation management for high-tech enterprises. However, to date most studies on enterprise technology innovation have assumed that the research and development (R&D) outcome is certain. This assumption does not always hold in practice. Motivated by the current practice of some IT industries, we establish a three-stage duopoly game model, including the R&D stage, the licensing stage and the output stage, to investigate the influence of bargaining power and technology spillover on the optimal licensing policy for the innovating enterprise when the outcome of R&D is uncertain. Our results demonstrate that (1) if the licensor has low (high) bargaining power, fixed-fee (royalty) licensing is always superior to royalty (fixed-fee) licensing to the licensor regardless of technology spillover; (2) if the licensor has moderate bargaining power and technology spillover is low (high) as well, fixed-fee (royalty) licensing is superior to royalty (fixed-fee) licensing; (3) under two-part tariff licensing and the assumption of licensors with full bargaining power, if a negative prepaid fixed fee is not allowed, two-part tariff licensing is equivalent to royalty licensing which is the optimal licensing policy; if negative prepaid fixed fee is allowed, the optimal policy is two-part tariff licensing.

  8. IQ of four-year-olds who go on to develop dyslexia.

    PubMed

    van Bergen, Elsje; de Jong, Peter F; Maassen, Ben; Krikhaar, Evelien; Plakas, Anna; van der Leij, Aryan

    2014-01-01

    Do children who go on to develop dyslexia show normal verbal and nonverbal development before reading onset? According to the aptitude-achievement discrepancy model, dyslexia is defined as a discrepancy between intelligence and reading achievement. One of the underlying assumptions is that the general cognitive development of children who fail to learn to read has been normal. The current study tests this assumption. In addition, we investigated whether possible IQ deficits are uniquely related to later reading or are also related to arithmetic. Four-year-olds (N = 212) with and without familial risk for dyslexia were assessed on 10 IQ subtests. Reading and arithmetic skills were measured 4 years later, at the end of Grade 2. Relative to the controls, the at-risk group without dyslexia had subtle impairments only in the verbal domain, whereas the at-risk group with dyslexia lagged behind across IQ tasks. Nonverbal IQ was associated with both reading and arithmetic, whereas verbal IQ was uniquely related to later reading. The children who went on to develop dyslexia performed relatively poorly in both verbal and nonverbal abilities at age 4, which challenges the discrepancy model. Furthermore, we discuss possible causal and epiphenomenal models explaining the links between early IQ and later reading. © Hammill Institute on Disabilities 2013.

  9. Detecting associated single-nucleotide polymorphisms on the X chromosome in case control genome-wide association studies.

    PubMed

    Chen, Zhongxue; Ng, Hon Keung Tony; Li, Jing; Liu, Qingzhong; Huang, Hanwen

    2017-04-01

    In the past decade, hundreds of genome-wide association studies have been conducted to detect the significant single-nucleotide polymorphisms that are associated with certain diseases. However, most of the data from the X chromosome were not analyzed and only a few significant associated single-nucleotide polymorphisms from the X chromosome have been identified from genome-wide association studies. This is mainly due to the lack of powerful statistical tests. In this paper, we propose a novel statistical approach that combines the information of single-nucleotide polymorphisms on the X chromosome from both males and females in an efficient way. The proposed approach avoids the need to make strong assumptions about the underlying genetic models. Our proposed statistical test is a robust method that only makes the assumption that the risk allele is the same for both females and males if the single-nucleotide polymorphism is associated with the disease for both genders. Through a simulation study and a real data application, we show that the proposed procedure is robust and has excellent performance compared to existing methods. We expect that many more associated single-nucleotide polymorphisms on the X chromosome will be identified if the proposed approach is applied to currently available genome-wide association study data.

  10. Drugs in space: Pharmacokinetics and pharmacodynamics in astronauts.

    PubMed

    Kast, Johannes; Yu, Yichao; Seubert, Christoph N; Wotring, Virginia E; Derendorf, Hartmut

    2017-11-15

    Space agencies are working intensely to push the current boundaries of human spaceflight by sending astronauts deeper into space than ever before, including missions to Mars and asteroids. Spaceflight alters human physiology due to fluid shifts, muscle and bone loss, immune system dysregulation, and changes in the gastrointestinal tract and metabolic enzymes. These alterations may change the pharmacokinetics and/or pharmacodynamics of medications used by astronauts and subsequently might impact drug efficacy and safety. Most commonly, medications are administered during space missions to treat sleep disturbances, allergies, space motion sickness, pain, and sinus congestion. These medications are administered under the assumption that they act in the same way as on Earth, an assumption that has not yet been investigated systematically. Few inflight pharmacokinetic data have been published, and pharmacodynamic and pharmacokinetic/pharmacodynamic studies during spaceflight are also lacking. Therefore, bed-rest models are often used to simulate physiological changes observed during microgravity. In addition to pharmacokinetic/pharmacodynamic changes, decreased drug and formulation stability in space could also influence the efficacy and safety of medications. These alterations, along with physiological changes and their resulting pharmacokinetic and pharmacodynamic effects, must be considered to determine their ultimate impact on medication efficacy and safety during spaceflight. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Effects of sorption kinetics on the fate and transport of pharmaceuticals in estuaries.

    PubMed

    Liu, Dong; Lung, Wu-Seng; Colosi, Lisa M

    2013-08-01

    Many current fate and transport models based on the assumption of instantaneous sorption equilibrium of contaminants in the water column may not be valid for certain pharmaceuticals with long times to reach sorption equilibrium. In this study, a sorption kinetics model was developed and incorporated into a water quality model for the Patuxent River Estuary to evaluate the effect of sorption kinetics. Model results indicate that the assumption of instantaneous sorption equilibrium results in significant under-prediction of water column concentrations for some pharmaceuticals. The relative difference between predicted concentrations for the instantaneous versus kinetic approach is as large as 150% at upstream locations in the Patuxent Estuary. At downstream locations, where sorption processes have had sufficient time to reach equilibrium, the relative difference decreases to roughly 25%. This indicates that sorption kinetics affect a model's ability to capture accumulation of pharmaceuticals into riverbeds and the transport of pharmaceuticals in estuaries. These results offer strong evidence that chemicals are not removed from the water column as rapidly as has been assumed on the basis of equilibrium-based analyses. The findings are applicable not only for pharmaceutical compounds, but also for diverse contaminants that reach sorption equilibrium slowly. Copyright © 2013 Elsevier Ltd. All rights reserved.
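
    The contrast at issue, instantaneous equilibrium partitioning versus first-order sorption kinetics, is sketched below for a single well-mixed water parcel; the partition coefficient, solids concentration and rate constant are arbitrary and not the Patuxent model's values.

```python
# Minimal sketch contrasting instantaneous-equilibrium partitioning with
# first-order sorption kinetics for a dissolved pharmaceutical; parameter
# values are arbitrary and not tied to the Patuxent model.
import numpy as np
from scipy.integrate import odeint

Kd, solids = 2.0, 0.05          # L/kg partition coefficient, kg/L suspended solids
k_sorb = 0.1                    # 1/day first-order rate toward sorption equilibrium

def kinetic(y, t):
    c_diss, c_sorb = y
    c_sorb_eq = Kd * c_diss * solids          # sorbed mass in equilibrium with dissolved phase
    rate = k_sorb * (c_sorb_eq - c_sorb)      # relaxation toward equilibrium
    return [-rate, rate]

t = np.linspace(0, 30, 301)                   # days of transport downstream
c_kin = odeint(kinetic, [1.0, 0.0], t)

c_total = 1.0
c_diss_equilibrium = c_total / (1.0 + Kd * solids)   # instantaneous-equilibrium assumption
print(c_kin[-1, 0], c_diss_equilibrium)              # kinetic result approaches the equilibrium value
```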

  12. Determining Electrical Properties Based on B1 Fields Measured in an MR Scanner Using a Multi-channel Transmit/Receive Coil: a General Approach

    PubMed Central

    Liu, Jiaen; Zhang, Xiaotong; Van de Moortele, Pierre-Francois; Schmitter, Sebastian

    2013-01-01

    Electrical Property Tomography (EPT) is a recently developed noninvasive technology to image the electrical conductivity and permittivity of biological tissues at the Larmor frequency in Magnetic Resonance (MR) scanners. The absolute phase of the complex radio-frequency (RF) magnetic field (B1) is necessary for electrical property calculation. However, due to the lack of practical methods to directly measure the absolute B1 phases, current EPT techniques have been achieved with B1 phase estimation based on certain assumptions about object anatomy, coil structure and/or electromagnetic wave behavior associated with the main magnetic field, limiting the range of applications of EPT. In this study, using a multi-channel transmit/receive coil, the framework of a new general approach for EPT has been introduced, which is independent of the assumptions utilized in previous studies. Using a human head model with realistic geometry, a series of computer simulations at 7T were conducted to evaluate the proposed method under different noise levels. Results showed that the proposed method can be used to reconstruct the conductivity and permittivity images with noticeable accuracy and stability. The feasibility of this approach was further evaluated in a phantom experiment at 7T. PMID:23743673

  13. Intermediate Band Gap Solar Cells: The Effect of Resonant Tunneling on Delocalization

    NASA Astrophysics Data System (ADS)

    William, Reid; Mathew, Doty; Sanwli, Shilpa; Gammon, Dan; Bracker, Allan

    2011-03-01

    Quantum dots (QDs) have many unique properties, including tunable discrete energy levels, that make them suitable for a variety of next generation photovoltaic applications. One application is an intermediate band solar cell (IBSC), in which QDs are incorporated into the bulk material. The QDs are tuned to absorb low energy photons that would otherwise be wasted because their energy is less than the solar cell's bulk band gap. Current theory concludes that identical QDs should be arranged in a superlattice to form a completely delocalized intermediate band, maximizing absorption of low energy photons while minimizing the decrease in the efficiency of the bulk material. We use a T-matrix model to assess the feasibility of forming a delocalized band given that real QD ensembles have an inhomogeneous distribution of energy levels. Our results suggest that formation of a band delocalized through a large QD superlattice is challenging, suggesting that the assumptions underlying present IBSC theory require reexamination. We use time-resolved photoluminescence of coupled QDs to probe the effect of delocalized states on the dynamics of absorption, energy transport, and nonradiative relaxation. These results will allow us to reexamine the theoretical assumptions and determine the degree of delocalization necessary to create an efficient quantum dot-based IBSC.

  14. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
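
    For orientation, the two heterogeneity structures being contrasted can be written schematically as below; the notation is assumed for this sketch and is not taken from the paper.

```latex
\begin{align*}
  y_{i,kb} &\sim \mathcal{N}(\delta_{i,kb},\, s_{i,kb}^{2})
    && \text{observed relative effect of $k$ vs.\ $b$ in trial $i$}\\
  \delta_{i,kb} &\sim \mathcal{N}(d_{k}-d_{b},\, \tau^{2})
    && \text{common (`homogeneous') between-trial variance}\\
  \delta_{i,kb} &\sim \mathcal{N}(d_{k}-d_{b},\, \tau_{kb}^{2})
    && \text{comparison-specific variances (e.g.\ exchangeable or with informative priors on $\tau_{kb}$)}
\end{align*}
```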

  15. Artifacts, assumptions, and ambiguity: Pitfalls in comparing experimental results to numerical simulations when studying electrical stimulation of the heart.

    PubMed

    Roth, Bradley J.

    2002-09-01

    Insidious experimental artifacts and invalid theoretical assumptions complicate the comparison of numerical predictions and observed data. Such difficulties are particularly troublesome when studying electrical stimulation of the heart. During unipolar stimulation of cardiac tissue, the artifacts include nonlinearity of membrane dyes, optical signals blocked by the stimulating electrode, averaging of optical signals with depth, lateral averaging of optical signals, limitations of the current source, and the use of excitation-contraction uncouplers. The assumptions involve electroporation, membrane models, electrode size, the perfusing bath, incorrect model parameters, the applicability of a continuum model, and tissue damage. Comparisons of theory and experiment during far-field stimulation are limited by many of these same factors, plus artifacts from plunge and epicardial recording electrodes and assumptions about the fiber angle at an insulating boundary. These pitfalls must be overcome in order to understand quantitatively how the heart responds to an electrical stimulus. (c) 2002 American Institute of Physics.

  16. Extending and expanding the Darwinian synthesis: the role of complex systems dynamics.

    PubMed

    Weber, Bruce H

    2011-03-01

    Darwinism is defined here as an evolving research tradition based upon the concepts of natural selection acting upon heritable variation articulated via background assumptions about systems dynamics. Darwin's theory of evolution was developed within a context of the background assumptions of Newtonian systems dynamics. The Modern Evolutionary Synthesis, or neo-Darwinism, successfully joined Darwinian selection and Mendelian genetics by developing population genetics informed by background assumptions of Boltzmannian systems dynamics. Currently the Darwinian Research Tradition is changing as it incorporates new information and ideas from molecular biology, paleontology, developmental biology, and systems ecology. This putative expanded and extended synthesis is most perspicuously deployed using background assumptions from complex systems dynamics. Such attempts seek to not only broaden the range of phenomena encompassed by the Darwinian Research Tradition, such as neutral molecular evolution, punctuated equilibrium, as well as developmental biology, and systems ecology more generally, but to also address issues of the emergence of evolutionary novelties as well as of life itself. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Development of a Liner Design Methodology and Relevant Results of Acoustic Suppression in the Farfield for Mixer-Ejector Nozzles

    NASA Technical Reports Server (NTRS)

    Salikuddin, M.

    2006-01-01

    We have developed a process to predict the noise field interior to the ejector and in the farfield for any liner design for a mixer-ejector of arbitrary scale factor. However, a number of assumptions utilized in this process, not verified for the current application, introduce uncertainties in the final result, especially on a quantitative basis. The normal impedance model for bulk with perforated facesheet is based on homogeneous foam materials of low resistivity, and the impact of flow conditions for the HSCT application, as well as the impact of the perforated facesheet, on predicted impedance is not properly accounted for. Based on the measured normal impedance for deeper bulk samples (i.e., 2.0 in.), the predicted reactance is much higher than the data at frequencies above 2 kHz for T-foam and 200 ppi SiC, and the resistance is underpredicted at lower frequencies (below 4 kHz) for these samples. Thus, the use of such predicted data in acoustic suppression is likely to introduce inaccuracies. It should be noted that the impedance prediction methods developed recently under the liner technology program are not utilized in the studies described in this report due to the program closeout. Acoustic suppression prediction is based on uniform flow and temperature conditions in a two-sided treated, constant-area rectangular duct. In addition, the assumptions of an equal-energy-per-mode noise field and of interaction of all frequencies with the treated surface over the entire ejector length may not be accurate. While the use of an acoustic transfer factor minimizes the inaccuracies associated with the prediction for a known test case, assuming the same factor for other liner designs and for ejectors of different linear scale factor seems very optimistic. The fact, illustrated in appendix D, that the predicted noise suppression for LSM-1 is lower than the measured data supports this argument. However, the process seems to be more reliable when applied to the same scale model with different liner designs, as demonstrated for the Gen. 1 mixer-ejectors.

  18. Low-Cost Avoidance Behaviors are Resistant to Fear Extinction in Humans

    PubMed Central

    Vervliet, Bram; Indekeu, Ellen

    2015-01-01

    Elevated levels of fear and avoidance are core symptoms across the anxiety disorders. It has long been known that fear serves to motivate avoidance. Consequently, fear extinction has been the primary focus in pre-clinical anxiety research for decades, under the implicit assumption that removing the motivator of avoidance (fear) would automatically mitigate the avoidance behaviors as well. Although this assumption has intuitive appeal, it has received little scientific scrutiny. The scarce evidence from animal studies is mixed, while the assumption remains untested in humans. The current study applied an avoidance conditioning protocol in humans to investigate the effects of fear extinction on the persistence of low-cost avoidance. Online danger-safety ratings and skin conductance responses documented the dynamics of conditioned fear across avoidance and extinction phases. Anxiety- and avoidance-related questionnaires explored individual differences in rates of avoidance. Participants first learned to click a button during a predictive danger signal, in order to cancel an upcoming aversive electrical shock (avoidance conditioning). Next, fear extinction was induced by presenting the signal in the absence of shocks while button-clicks were prevented (by removing the button in Experiment 1, or by instructing not to click the button in Experiment 2). Most importantly, post-extinction availability of the button caused a significant return of avoidant button-clicks. In addition, trait-anxiety levels correlated positively with rates of avoidance during a predictive safety signal, and with the rate of pre- to post-extinction decrease during this signal. Fear measures gradually decreased during avoidance conditioning, as participants learned that button-clicks effectively canceled the shock. Preventing button-clicks elicited a sharp increase in fear, which subsequently extinguished. Fear remained low during avoidance testing, but danger-safety ratings increased again when button-clicks were subsequently prevented. Together, these results show that low-cost avoidance behaviors can persist following fear extinction and induce increased threat appraisal. On the other hand, fear extinction did reduce augmented rates of unnecessary avoidance during safety in trait-anxious individuals, and instruction-based response prevention was more effective than removal of response cues. More research is needed to characterize the conditions under which fear extinction might mitigate avoidance. PMID:26733837

  19. Testing Surrogacy Assumptions: Can Threatened and Endangered Plants Be Grouped by Biological Similarity and Abundances?

    PubMed Central

    Che-Castaldo, Judy P.; Neel, Maile C.

    2012-01-01

    There is renewed interest in implementing surrogate species approaches in conservation planning due to the large number of species in need of management but limited resources and data. One type of surrogate approach involves selection of one or a few species to represent a larger group of species requiring similar management actions, so that protection and persistence of the selected species would result in conservation of the group of species. However, among the criticisms of surrogate approaches is the need to test underlying assumptions, which remain rarely examined. In this study, we tested one of the fundamental assumptions underlying use of surrogate species in recovery planning: that there exist groups of threatened and endangered species that are sufficiently similar to warrant similar management or recovery criteria. Using a comprehensive database of all plant species listed under the U.S. Endangered Species Act and tree-based random forest analysis, we found no evidence of species groups based on a set of distributional and biological traits or by abundances and patterns of decline. Our results suggested that application of surrogate approaches for endangered species recovery would be unjustified. Thus, conservation planning focused on individual species and their patterns of decline will likely be required to recover listed species. PMID:23240051

  20. Testing surrogacy assumptions: can threatened and endangered plants be grouped by biological similarity and abundances?

    PubMed

    Che-Castaldo, Judy P; Neel, Maile C

    2012-01-01

    There is renewed interest in implementing surrogate species approaches in conservation planning due to the large number of species in need of management but limited resources and data. One type of surrogate approach involves selection of one or a few species to represent a larger group of species requiring similar management actions, so that protection and persistence of the selected species would result in conservation of the group of species. However, among the criticisms of surrogate approaches is the need to test underlying assumptions, which remain rarely examined. In this study, we tested one of the fundamental assumptions underlying use of surrogate species in recovery planning: that there exist groups of threatened and endangered species that are sufficiently similar to warrant similar management or recovery criteria. Using a comprehensive database of all plant species listed under the U.S. Endangered Species Act and tree-based random forest analysis, we found no evidence of species groups based on a set of distributional and biological traits or by abundances and patterns of decline. Our results suggested that application of surrogate approaches for endangered species recovery would be unjustified. Thus, conservation planning focused on individual species and their patterns of decline will likely be required to recover listed species.

  1. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
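
    A minimal simulation sketch in this spirit (not the authors' two-stage models, parameter ranges, or error structures): draw subject-level parameters for a one-compartment oral-absorption model with log-normal variability, compute AUC by the trapezoidal rule and Cmax from each noisy profile, and inspect the skewness and excess kurtosis of log(AUC) and log(Cmax).

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
n_subj, dose = 5000, 100.0
t = np.linspace(0.0, 48.0, 481)                    # sampling grid (h)

# Hypothetical population parameters with log-normal between-subject variability.
ka = rng.lognormal(np.log(1.0), 0.3, n_subj)       # absorption rate (1/h)
ke = rng.lognormal(np.log(0.1), 0.3, n_subj)       # elimination rate (1/h)
vd = rng.lognormal(np.log(30.0), 0.2, n_subj)      # volume of distribution (L)

log_auc, log_cmax = [], []
for i in range(n_subj):
    conc = (dose * ka[i] / (vd[i] * (ka[i] - ke[i]))) * (
        np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
    conc = conc * np.exp(rng.normal(0.0, 0.1, conc.shape))   # assumed 10% CV error
    auc = np.sum((conc[1:] + conc[:-1]) * np.diff(t)) / 2.0  # trapezoidal rule
    log_auc.append(np.log(auc))
    log_cmax.append(np.log(conc.max()))

for name, x in (("log(AUC)", log_auc), ("log(Cmax)", log_cmax)):
    x = np.asarray(x)
    print(f"{name}: skew = {skew(x):.2f}, excess kurtosis = {kurtosis(x):.2f}")
```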

  2. Ocean Tidal Dynamics and Dissipation in the Thick Shell Worlds

    NASA Astrophysics Data System (ADS)

    Hay, H.; Matsuyama, I.

    2017-12-01

    Tidal dissipation in the subsurface oceans of icy satellites has so far only been explored in the limit of a free-surface ocean or under the assumption of a thin ice shell. Here we consider ocean tides in the opposite limit, under the assumption of an infinitely rigid, immovable, ice shell. This assumption forces the surface displacement of the ocean to remain zero, and requires the solution of a pressure correction to ensure that the ocean is mass conserving (divergence-free) at all times. This work investigates the effect of an infinitely rigid lid on ocean dynamics and dissipation, focusing on implications for the thick shell worlds Ganymede and Callisto. We perform simulations using a modified version of the numerical model Ocean Dissipation in Icy Satellites (ODIS), solving the momentum equations for incompressible shallow water flow under a degree-2 tidal forcing. The velocity solution to the momentum equations is updated iteratively at each time-step using a pressure correction to guarantee mass conservation everywhere, following a standard solution procedure originally used in solving the incompressible Navier-Stokes equations. We reason that any model that investigates ocean dynamics beneath a global ice layer should be tested in the limit of an immovable ice shell and must yield solutions that exhibit divergence-free flow at all times.
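
    ODIS itself is not reproduced here, but the pressure-correction idea can be illustrated in a much simpler setting: on a doubly periodic Cartesian grid the pressure Poisson problem can be solved spectrally, and subtracting the resulting gradient leaves a divergence-free velocity field, which is the rigid-lid constraint the abstract describes. All grid sizes and fields below are made up.

```python
import numpy as np

def project_divergence_free(u, v, dx, dy):
    """Remove the divergent part of (u, v) on a doubly periodic grid:
    solve laplacian(phi) = div(u, v) spectrally, then subtract grad(phi)."""
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)                   # shapes (ny, nx)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                 # avoid 0/0 for the mean mode
    div_hat = 1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)
    phi_hat = -div_hat / k2                        # laplacian -> -k^2 in Fourier space
    phi_hat[0, 0] = 0.0
    u_corr = u - np.real(np.fft.ifft2(1j * KX * phi_hat))
    v_corr = v - np.real(np.fft.ifft2(1j * KY * phi_hat))
    return u_corr, v_corr

# Check on a random (strongly divergent) velocity field.
rng = np.random.default_rng(1)
u, v = rng.standard_normal((2, 64, 64))
uc, vc = project_divergence_free(u, v, dx=1.0, dy=1.0)
k = 2.0 * np.pi * np.fft.fftfreq(64)
div_hat = 1j * k[None, :] * np.fft.fft2(uc) + 1j * k[:, None] * np.fft.fft2(vc)
print("max |divergence| after projection:", np.abs(np.fft.ifft2(div_hat)).max())
```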

  3. Food supply and bioenergy production within the global cropland planetary boundary.

    PubMed

    Henry, R C; Engström, K; Olin, S; Alexander, P; Arneth, A; Rounsevell, M D A

    2018-01-01

    Supplying food for the anticipated global population of over 9 billion in 2050 under changing climate conditions is one of the major challenges of the 21st century. Agricultural expansion and intensification contribute to global environmental change and risk the long-term sustainability of the planet. It has been proposed that no more than 15% of the global ice-free land surface should be converted to cropland. Bioenergy production for land-based climate mitigation places additional pressure on limited land resources. Here we test normative targets of food supply and bioenergy production within the cropland planetary boundary using a global land-use model. The results suggest supplying the global population with adequate food is possible without cropland expansion exceeding the planetary boundary. Yet this requires an increase in food production, especially in developing countries, as well as a decrease in global crop yield gaps. However, under current assumptions of future food requirements, it was not possible to also produce significant amounts of first generation bioenergy without cropland expansion. These results suggest that meeting food and bioenergy demands within the planetary boundaries would need a shift away from current trends, for example, requiring major change in the demand-side of the food system or advancing biotechnologies.

  4. Food supply and bioenergy production within the global cropland planetary boundary

    PubMed Central

    Olin, S.; Alexander, P.; Arneth, A.; Rounsevell, M. D. A.

    2018-01-01

    Supplying food for the anticipated global population of over 9 billion in 2050 under changing climate conditions is one of the major challenges of the 21st century. Agricultural expansion and intensification contribute to global environmental change and risk the long-term sustainability of the planet. It has been proposed that no more than 15% of the global ice-free land surface should be converted to cropland. Bioenergy production for land-based climate mitigation places additional pressure on limited land resources. Here we test normative targets of food supply and bioenergy production within the cropland planetary boundary using a global land-use model. The results suggest supplying the global population with adequate food is possible without cropland expansion exceeding the planetary boundary. Yet this requires an increase in food production, especially in developing countries, as well as a decrease in global crop yield gaps. However, under current assumptions of future food requirements, it was not possible to also produce significant amounts of first generation bioenergy without cropland expansion. These results suggest that meeting food and bioenergy demands within the planetary boundaries would need a shift away from current trends, for example, requiring major change in the demand-side of the food system or advancing biotechnologies. PMID:29566091

  5. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
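
    The paper's exact Fourier-domain statistic is not given in this record. As a reference point, the sketch below implements the familiar pixel-wise matched-filter (likelihood-ratio) detector for a known PSF in white Gaussian noise, computed with FFTs; the scene, PSF width, and noise level are all assumed values.

```python
import numpy as np
from scipy.stats import norm

def matched_filter_detect(image, psf, noise_sigma, p_fa=1e-3):
    """Pixel-wise detection of a known profile in white Gaussian noise via an
    FFT-based correlator. Under the noise-only hypothesis each output pixel is
    N(0, noise_sigma^2 * sum(psf^2)), which fixes the false-alarm threshold."""
    score = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                 np.conj(np.fft.fft2(psf, s=image.shape))))
    sigma_score = noise_sigma * np.sqrt(np.sum(psf ** 2))
    threshold = norm.isf(p_fa) * sigma_score
    return score > threshold, score, threshold

# Simulated frame: one dim point source plus sensor noise (values assumed).
rng = np.random.default_rng(2)
n = 128
x, y = np.meshgrid(np.arange(n), np.arange(n))
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
scene = 50.0 * np.roll(np.roll(psf, 20, axis=0), -15, axis=1)
frame = scene + rng.normal(0.0, 0.05, scene.shape)

hits, score, thr = matched_filter_detect(frame, psf, noise_sigma=0.05)
print("pixels above threshold:", int(hits.sum()), "| threshold:", round(thr, 4))
```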

  6. Hydroxyl radical-PLIF measurements and accuracy investigation in high pressure gaseous hydrogen/gaseous oxygen combustion

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Aravind

    In-flow species concentration measurements in reacting flows at high pressures are needed both to improve the current understanding of the physical processes taking place and to validate predictive tools that are under development for application to the design and optimization of a range of power plants from diesel to rocket engines. To date, non-intrusive measurements have been based on calibrations determined from assumptions that were not sufficiently quantified to provide a clear understanding of the range of uncertainty associated with these measurements. The purpose of this work is to quantify the uncertainties associated with OH measurement in an oxygen-hydrogen system produced by a shear coaxial injector typical of those used in rocket engines. Planar OH distributions are obtained, providing the instantaneous and averaged distributions that are required for both LES and RANS codes currently under development. This study evaluated the uncertainties associated with OH measurement at 10, 27, 37 and 53 bar. The total rms error for OH-PLIF measurements from eighteen different parameters was quantified and found to be 21.9, 22.8, 22.5, and 22.9% at 10, 27, 37 and 53 bar, respectively. These results are used by collaborators at Georgia Institute of Technology (LES), Pennsylvania State University (LES), University of Michigan (RANS) and NASA Marshall (RANS).

  7. A curative regimen would decrease HIV prevalence but not HIV incidence unless targeted to an ART-naïve population.

    PubMed

    Dimitrov, Dobromir T; Kiem, Hans-Peter; Jerome, Keith R; Johnston, Christine; Schiffer, Joshua T

    2016-02-24

    HIV curative strategies currently under development aim to eradicate latent provirus, or prevent viral replication, progression to AIDS, and transmission. The impact of implementing curative programs on HIV epidemics has not been considered. We developed a mathematical model of heterosexual HIV transmission to evaluate the independent and synergistic impact of ART, HIV prevention interventions and cure on HIV prevalence and incidence. The basic reproduction number was calculated to study the potential for the epidemic to be eliminated. We explored scenarios with and without the assumption that patients enrolled into HIV cure programs need to be on antiretroviral treatment (ART). In our simulations, curative regimens had limited impact on HIV incidence if only ART patients were eligible for cure. Cure implementation had a significant impact on HIV incidence if ART-untreated patients were enrolled directly into cure programs. Concurrent HIV prevention programs moderately decreased the percentage of ART-treated or cured patients needed to achieve elimination. We project that widespread implementation of HIV cure would decrease HIV prevalence under all scenarios but would only lower the rate of new infections if ART-untreated patients were targeted. Current efforts to identify untreated HIV patients will gain even further relevance upon availability of an HIV cure.
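
    The authors' transmission model is not reproduced in this record. As a much cruder illustration of the elimination logic, the toy calculation below treats a fraction of infected people as being on ART or cured, each reducing onward transmission by a given efficacy, and asks what coverage pushes the effective reproduction number below one. The numbers are assumptions, not values from the paper.

```python
def effective_r(r0, coverage, efficacy):
    """Toy effective reproduction number when a fraction `coverage` of infected
    people are on ART or cured, each reducing onward transmission by `efficacy`."""
    return r0 * (1.0 - coverage * efficacy)

def coverage_for_elimination(r0, efficacy):
    """Coverage needed to push the effective reproduction number below 1."""
    return (1.0 - 1.0 / r0) / efficacy

r0 = 1.8                                  # assumed basic reproduction number
for eff in (0.8, 0.96, 1.0):              # partial suppression vs. a complete cure
    cov = coverage_for_elimination(r0, eff)
    print(f"efficacy {eff:.2f}: R_eff = 1 at coverage {cov:.1%}; "
          f"R_eff at 60% coverage = {effective_r(r0, 0.60, eff):.2f}")
```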

  8. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
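
    The paper's closed-form power formula is not reproduced in this record. The sketch below is a brute-force Monte Carlo analogue under a strong simplifying assumption: registration error of a given magnitude effectively mislabels a corresponding fraction of the pathologic region, which attenuates the mean difference between normal and pathologic regions. All effect sizes and sample sizes are made up.

```python
import numpy as np
from scipy.stats import ttest_ind

def mc_power(n_subjects, true_diff, sigma, mislabel_frac,
             n_sim=2000, alpha=0.05, seed=0):
    """Monte Carlo power for detecting a normal-vs-pathologic difference in
    per-subject region means when registration error dilutes the pathologic
    signal by a fraction `mislabel_frac` (simplifying assumption)."""
    rng = np.random.default_rng(seed)
    diluted_diff = true_diff * (1.0 - mislabel_frac)
    hits = 0
    for _ in range(n_sim):
        normal = rng.normal(0.0, sigma, n_subjects)
        path = rng.normal(diluted_diff, sigma, n_subjects)
        hits += ttest_ind(path, normal).pvalue < alpha
    return hits / n_sim

for frac in (0.0, 0.2, 0.4):
    p = mc_power(n_subjects=30, true_diff=0.5, sigma=1.0, mislabel_frac=frac)
    print(f"mislabelled fraction {frac:.0%}: power ~ {p:.2f}")
```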

  9. Comparing different kinds of words and word-word relations to test an habituation model of priming.

    PubMed

    Rieth, Cory A; Huber, David E

    2017-06-01

    Huber and O'Reilly (2003) proposed that neural habituation exists to solve a temporal parsing problem, minimizing blending between one word and the next when words are visually presented in rapid succession. They developed a neural dynamics habituation model, explaining the finding that short duration primes produce positive priming whereas long duration primes produce negative repetition priming. The model contains three layers of processing, including a visual input layer, an orthographic layer, and a lexical-semantic layer. The predicted effect of prime duration depends both on this assumed representational hierarchy and the assumption that synaptic depression underlies habituation. The current study tested these assumptions by comparing different kinds of words (e.g., words versus non-words) and different kinds of word-word relations (e.g., associative versus repetition). For each experiment, the predictions of the original model were compared to an alternative model with different representational assumptions. Experiment 1 confirmed the prediction that non-words and inverted words require longer prime durations to eliminate positive repetition priming (i.e., a slower transition from positive to negative priming). Experiment 2 confirmed the prediction that associative priming increases and then decreases with increasing prime duration, but remains positive even with long duration primes. Experiment 3 replicated the effects of repetition and associative priming using a within-subjects design and combined these effects by examining target words that were expected to repeat (e.g., viewing the target word 'BACK' after the prime phrase 'back to'). These results support the originally assumed representational hierarchy and more generally the role of habituation in temporal parsing and priming. Copyright © 2017 Elsevier Inc. All rights reserved.
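
    The sketch below illustrates the synaptic-depression idea in the spirit of (but not identical to) the habituation dynamics described above: a resource variable is consumed in proportion to the input drive and recovers slowly, so a long prime leaves the shared representation depleted and the repeated target evokes a weaker response than after a short prime. All time constants are assumptions for illustration.

```python
import numpy as np

def depressing_response(drive, dt=0.001, tau_rec=0.8, use_frac=3.0):
    """Simple synaptic depression: resources x in [0, 1] are consumed in
    proportion to the drive and recover with time constant tau_rec; the
    effective output at each step is drive * available resources."""
    x, out = 1.0, np.empty_like(drive)
    for i, d in enumerate(drive):
        out[i] = d * x
        x += dt * ((1.0 - x) / tau_rec - use_frac * d * x)
    return out

dt = 0.001
t = np.arange(0.0, 1.0, dt)
for prime_ms in (50, 400):                              # short vs. long prime
    drive = np.zeros_like(t)
    drive[(t >= 0.0) & (t < prime_ms / 1000.0)] = 1.0   # prime presentation
    drive[(t >= 0.5) & (t < 0.55)] = 1.0                # repeated target at 500 ms
    resp = depressing_response(drive, dt)
    target = resp[(t >= 0.5) & (t < 0.55)].sum() * dt
    print(f"{prime_ms:3d} ms prime -> integrated target response {target:.4f}")
```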

  10. Free choice of healthcare providers in the Netherlands is both a goal in itself and a precondition: modelling the policy assumptions underlying the promotion of patient choice through documentary analysis and interviews.

    PubMed

    Victoor, Aafke; Friele, Roland D; Delnoij, Diana M J; Rademakers, Jany J D J M

    2012-12-03

    In the Netherlands in 2006, a health insurance system reform took place in which regulated competition between insurers and providers is key. In this context, the government placed greater emphasis on patients being able to choose health insurers and providers as a precondition for competition. Patient choice became an instrument instead of solely a goal in itself. In the current study, we investigated the concept of 'patient choice' of healthcare providers, as postulated in the supporting documentation for this reform, because we wanted to understand the assumptions policy makers had regarding patient choice of healthcare providers. We searched policy documents for assumptions made by policy makers about patient choice of healthcare providers that underlie the health insurance system reform. Additionally, we held interviews with people who were involved in or closely followed the reform. Our study shows that the government paid much more attention to the instrumental goal of patient choice. Patients are assumed to be able to choose a provider rationally if a number of conditions are satisfied, e.g. the availability of enough comparative information. To help ensure those conditions were met, the Dutch government and other parties implemented a variety of supporting instruments. Various instruments have been put in place to ensure that patients can act as consumers on the healthcare market. Much less attention has been paid to the willingness and ability of patients to choose, i.e. choice as a value. There was also relatively little attention paid to the consequences for equity of outcomes if some patient groups are less inclined or able to choose actively.

  11. Fundamentally Flawed: Extension Administrative Practice (Part 1).

    ERIC Educational Resources Information Center

    Patterson, Thomas F., Jr.

    1997-01-01

    Extension's current administrative techniques are based on the assumptions of classical management from the early 20th century. They are fundamentally flawed and inappropriate for the contemporary workplace. (SK)

  12. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    PubMed

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
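
    The Bayesian hierarchical smoothing described above is not reproduced in this record. The sketch below shows a cruder, empirical-Bayes version of the same idea: summarize historical active-comparator effects with a method-of-moments random-effects model and compute a predictive z-score for the active-comparator effect observed in the current trial, which could then flag a likely constancy violation. All effect values are made up.

```python
import numpy as np
from scipy.stats import norm

def constancy_check(hist_effects, hist_ses, current_effect, current_se):
    """Crude constancy check: random-effects (DerSimonian-Laird) summary of
    historical active-comparator effects, then a predictive z-score for the
    current trial's observed active-comparator effect."""
    y = np.asarray(hist_effects, float)
    v = np.asarray(hist_ses, float) ** 2
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-trial variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)               # random-effects mean
    pred_var = tau2 + 1.0 / np.sum(w_re) + current_se ** 2
    z = (current_effect - mu) / np.sqrt(pred_var)
    return mu, tau2, z, 2.0 * norm.sf(abs(z))

# Hypothetical log hazard ratios (active comparator vs. placebo) from history.
mu, tau2, z, p = constancy_check(
    hist_effects=[-0.45, -0.52, -0.38, -0.60],
    hist_ses=[0.10, 0.12, 0.09, 0.15],
    current_effect=-0.20, current_se=0.11)
print(f"historical mean = {mu:.2f}, tau^2 = {tau2:.3f}, z = {z:.2f}, p = {p:.3f}")
```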

  13. Detecting and Accounting for Violations of the Constancy Assumption in Non-Inferiority Clinical Trials

    PubMed Central

    Koopmeiners, Joseph S.; Hobbs, Brian P.

    2016-01-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator (AC) with the objective of showing either superiority or non-inferiority to the AC. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the AC as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the AC in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV. PMID:27587591

  14. The role of ethics in data governance of large neuro-ICT projects.

    PubMed

    Stahl, Bernd Carsten; Rainey, Stephen; Harris, Emma; Fothergill, B Tyr

    2018-05-14

    We describe current practices of ethics-related data governance in large neuro-ICT projects, identify gaps in current practice, and put forward recommendations on how to collaborate ethically in complex regulatory and normative contexts. We undertake a survey of published principles of data governance of large neuro-ICT projects. This grounds an approach to a normative analysis of current data governance approaches. Several ethical issues are well covered in the data governance policies of neuro-ICT projects, notably data protection and attribution of work. Projects use a set of similar policies to ensure users behave appropriately. However, many ethical issues are not covered at all. Implementation and enforcement of policies remain vague. The data governance policies we investigated indicate that the neuro-ICT research community is currently close-knit and that shared assumptions are reflected in infrastructural aspects. This explains why many ethical issues are not explicitly included in data governance policies at present. With neuro-ICT research growing in scale, scope, and international involvement, these shared assumptions should be made explicit and reflected in data governance.

  15. Magnetosphere - Ionosphere - Thermosphere (MIT) Coupling at Jupiter

    NASA Astrophysics Data System (ADS)

    Yates, J. N.; Ray, L. C.; Achilleos, N.

    2017-12-01

    Jupiter's upper atmospheric temperature is considerably higher than that predicted by Solar Extreme Ultraviolet (EUV) heating alone. Simulations incorporating magnetosphere-ionosphere coupling effects into general circulation models have, to date, struggled to reproduce the observed atmospheric temperatures under simplifying assumptions such as azimuthal symmetry and a spin-aligned dipole magnetic field. Here we present the development of a full three-dimensional thermosphere model coupled in both hemispheres to an axisymmetric magnetosphere model. This new coupled model is based on the two-dimensional MIT model presented in Yates et al., 2014, and is a critical step towards the development of a fully coupled 3D MIT model. We discuss and compare the resulting thermospheric flows, energy balance and MI coupling currents to those presented in previous 2D MIT models.

  16. Are cost differences between specialist and general hospitals compensated by the prospective payment system?

    PubMed

    Longo, Francesco; Siciliani, Luigi; Street, Andrew

    2017-10-23

    Prospective payment systems fund hospitals based on a fixed-price regime that does not directly distinguish between specialist and general hospitals. We investigate whether current prospective payments in England compensate for differences in costs between specialist orthopaedic hospitals and trauma and orthopaedics departments in general hospitals. We employ reference cost data for a sample of hospitals providing services in the trauma and orthopaedics specialty. Our regression results suggest that specialist orthopaedic hospitals have on average 13% lower profit margins. Under the assumption of break-even for the average trauma and orthopaedics department, two of the three specialist orthopaedic hospitals appear to make a loss on their activity. The same holds true for 33% of departments in our sample. Patient age and severity are the main drivers of such differences.

  17. Dependence of average inter-particle distance upon the temperature of neutrals in dusty plasma crystals

    NASA Astrophysics Data System (ADS)

    Nikolaev, V. S.; Timofeev, A. V.

    2018-01-01

    It is often suggested that the inter-particle distance in stable dusty plasma structures decreases with cooling as the square root of the neutral gas temperature. Deviations from this dependence (up to an increase at cryogenic temperatures) found in experimental results for the pressure range 0.1-8.0 mbar and the current range 0.1-1.0 mA are reported. Dependences of the inter-particle distance on the particle charge, the trap parameter and the screening length in the surrounding plasma are obtained for different conditions from molecular dynamics simulations. They are well approximated by power functions in the mentioned range of parameters. It is found that, under certain assumptions, the thermophoretic force is responsible for the inter-particle distance increase at cryogenic temperatures.
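
    A generic fitting sketch for the reported power-function behaviour (with entirely made-up simulation output, here distance versus particle charge): fitting the exponent in log-log space keeps the procedure simple and robust.

```python
import numpy as np

# Hypothetical (particle charge, inter-particle distance) pairs from simulations.
charge   = np.array([5e3, 1e4, 2e4, 3e4, 4e4])        # elementary charges
distance = np.array([212., 265., 338., 389., 425.])   # arbitrary length units

# Fit d = a * Q**b by linear regression of log(d) on log(Q).
b, log_a = np.polyfit(np.log(charge), np.log(distance), 1)
a = np.exp(log_a)
print(f"fit: d ~ {a:.1f} * Q^{b:.2f}")
```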

  18. Extending the Li&Ma method to include PSF information

    NASA Astrophysics Data System (ADS)

    Nievas-Rosillo, M.; Contreras, J. L.

    2016-02-01

    The so-called Li&Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work, a straightforward extension of the method for point sources that profits from the good imaging capabilities of current instruments is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations and its sensitivity is compared to standard methods which do not incorporate PSF information. The gain in significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
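
    For reference, the classic Li & Ma (1983, Eq. 17) significance that the proposed method extends (the PSF-weighted likelihood ratio itself is not reproduced in this record):

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of the excess of n_on counts in the
    ON region given n_off counts in the OFF region, with ON/OFF exposure ratio
    alpha. The sign follows the sign of the excess."""
    n_on, n_off = float(n_on), float(n_off)
    term_on = n_on * np.log((1.0 + alpha) / alpha * (n_on / (n_on + n_off)))
    term_off = n_off * np.log((1.0 + alpha) * (n_off / (n_on + n_off)))
    return np.sign(n_on - alpha * n_off) * np.sqrt(2.0 * (term_on + term_off))

# Example: 130 ON counts, 310 OFF counts, OFF region with 3x the ON exposure.
print(f"S = {li_ma_significance(130, 310, alpha=1.0/3.0):.2f} sigma")
```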

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio

    We study the Hilbert space structure of classical spacetimes under the assumption that entanglement in holographic theories determines semiclassical geometry. We show that this simple assumption has profound implications; for example, a superposition of classical spacetimes may lead to another classical spacetime. Despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. We illustrate these ideas using a model for the holographic theory of cosmological spacetimes.

  20. Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects

    NASA Astrophysics Data System (ADS)

    Bian, Dongfen; Liu, Jitao

    2017-12-01

    This paper is concerned with the initial-boundary value problem for the 2D magnetohydrodynamics-Boussinesq system with temperature-dependent viscosity, thermal diffusivity and electrical conductivity. First, we establish global weak solutions under minimal assumptions on the initial data. Then, by imposing a higher regularity assumption on the initial data, we obtain a global strong solution with uniqueness. Moreover, exponential decay rates are obtained for the weak solutions and the strong solution, respectively.

  1. A Critical Examination of the DOD’s Business Management Modernization Program

    DTIC Science & Technology

    2005-05-01

    Program (BMMP) is a key element of the DoD’s ongoing efforts to transform itself. This paper argues that the BMMP needs to be fundamentally reoriented...communication role it plays in the defense- transformation effort. Introduction The core assumption underlying the DoD’s Business Management... government activities. That this is a core assumption for the BMMP is borne out by the fact that the program’s primary objective is to produce

  2. Splitting of inviscid fluxes for real gases

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Vanleer, Bram; Shuen, Jian-Shun

    1988-01-01

    Flux-vector and flux-difference splittings for the inviscid terms of the compressible flow equations are derived under the assumption of a general equation of state for a real gas in equilibrium. No necessary assumptions, approximations or auxiliary quantities are introduced. The formulas derived include several particular cases known for ideal gases and readily apply to curvilinear coordinates. Applications of the formulas in a TVD algorithm to one-dimensional shock-tube and nozzle problems show their quality and robustness.

  3. Splitting of inviscid fluxes for real gases

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Van Leer, Bram; Shuen, Jian-Shun

    1990-01-01

    Flux-vector and flux-difference splittings for the inviscid terms of the compressible flow equations are derived under the assumption of a general equation of state for a real gas in equilibrium. No necessary assumptions, approximations or auxiliary quantities are introduced. The formulas derived include several particular cases known for ideal gases and readily apply to curvilinear coordinates. Applications of the formulas in a TVD algorithm to one-dimensional shock-tube and nozzle problems show their quality and robustness.

  4. Joint Test and Evaluation Procedures Manual.

    DTIC Science & Technology

    1980-09-01

    offices within DoD such as ODDTE may allot funds to individual JTFs for purchasing goods and services. 1-12 Service support to JT&E is usually drawn...on underlying assumptions about the "real world," and that a good operational scenario may conflict with the assumptions for a specific statistical...Learned Since a good Data Management Plan is critical to the success of a joint test, some situations which have occurred in previous tests are listed

  5. Latent class instrumental variables: a clinical and biostatistical perspective.

    PubMed

    Baker, Stuart G; Kramer, Barnett S; Lindeman, Karen S

    2016-01-15

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on the treatment that would be received under each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. Copyright © 2015 John Wiley & Sons, Ltd.
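
    A minimal sketch of the usual point estimate in this framework, the complier average causal effect obtained as a ratio of intention-to-treat effects, under the exclusion restriction and monotonicity; the simulated trial below is entirely hypothetical.

```python
import numpy as np

def complier_average_causal_effect(y, z, d):
    """Wald/IV estimate of the effect of treatment received among compliers:
    (ITT effect on the outcome) / (ITT effect on treatment receipt).
    y: outcome, z: randomized assignment (0/1), d: treatment received (0/1)."""
    y, z, d = map(np.asarray, (y, z, d))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()     # estimated complier fraction
    return itt_y / itt_d, itt_d

# Hypothetical encouragement design with 60% compliance and a true effect of 2.
rng = np.random.default_rng(3)
n = 10_000
z = rng.integers(0, 2, n)                           # random assignment
complier = rng.random(n) < 0.6
d = np.where(complier, z, 0)                        # never-takers ignore assignment
y = 1.0 + 2.0 * d + rng.normal(0.0, 1.0, n)         # outcome depends on d only
cace, pi_c = complier_average_causal_effect(y, z, d)
print(f"complier fraction ~ {pi_c:.2f}, CACE ~ {cace:.2f} (truth 2.0)")
```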

  6. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    PubMed

    Hsu, Anne; Griffiths, Thomas L

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  7. Bayesian learning and the psychology of rule induction

    PubMed Central

    Endress, Ansgar D.

    2014-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791

  8. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    PubMed Central

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  9. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays.

    PubMed

    Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
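
    For the inter-laminar case referred to above, the standard CSD method is essentially a scaled second spatial difference of the LFP along the electrode; a minimal sketch with an assumed conductivity and a made-up depth profile:

```python
import numpy as np

def csd_second_difference(lfp, spacing_m, conductivity_s_per_m=0.3):
    """Standard inter-laminar CSD estimate, CSD ~ -sigma * d2(phi)/dz2,
    approximated by the second difference across adjacent contacts.
    lfp: array (n_contacts, n_timepoints), contacts ordered along depth."""
    d2 = (lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]) / spacing_m ** 2
    return -conductivity_s_per_m * d2               # units: A / m^3

# Toy LFP: a Gaussian depth profile oscillating at 10 Hz (values assumed).
depth = np.arange(16)[:, None] * 100e-6             # 16 contacts, 100 um spacing
time = np.linspace(0.0, 0.2, 200)[None, :]
lfp = 1e-4 * np.exp(-((depth - 800e-6) / 300e-6) ** 2) * np.sin(2 * np.pi * 10 * time)
csd = csd_second_difference(lfp, spacing_m=100e-6)
print("CSD shape:", csd.shape)                      # two fewer channels than the LFP
```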

  10. Linear distributed source modeling of local field potentials recorded with intra-cortical electrode arrays

    PubMed Central

    Schmiedt, Joscha; Arsiwalla, Xerxes D.; Peter, Alina; Verschure, Paul F. M. J.; Fries, Pascal; Schmid, Michael C.; Deco, Gustavo

    2017-01-01

    Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires “inverting” Poisson’s equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to “invert” a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task. PMID:29253006

  11. Methods for a longitudinal quantitative outcome with a multivariate Gaussian distribution multi-dimensionally censored by therapeutic intervention.

    PubMed

    Sun, Wanjie; Larsen, Michael D; Lachin, John M

    2014-04-15

    In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. This approach is particularly attractive when outcomes reach a plateau after intervention for various reasons. The methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. The methods proved to be robust to high dimensions, large amounts of censored data, and low within-subject correlation, and performed well when subjects received the non-trial intervention to treat the underlying condition only (with high Y), or as treatment for the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.

  12. Polymer flammability

    DOT National Transportation Integrated Search

    2005-05-01

    This report provides an overview of polymer flammability from a material science perspective and describes currently accepted test methods to quantify burning behavior. Simplifying assumptions about the gas and condensed phase processes of flaming co...

  13. The Average Hazard Ratio - A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?

    PubMed

    Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard

    2018-05-01

    In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called ‘average hazard ratio’. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte Carlo simulations and on a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.

  14. Notes on SAW Tag Interrogation Techniques

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2010-01-01

    We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant α ≥ 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable ρ with a known probability distribution f_ρ, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.
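    The sketch below gives a hedged numerical illustration of the output-SNR comparison in the noise-limited regime, where the receiver filter matched to the convolution of the interrogation signal with the tag impulse response is optimal. The signal shapes, the exponentially decaying random tag response, and the white-noise model are assumptions for this example, not the report's measured waveforms.

    ```python
    # Hedged sketch: peak output SNR of a linear receive filter for a known tag
    # return in additive white noise.  In the noise-limited regime the filter
    # matched to (interrogation signal convolved with tag impulse response) is
    # optimal; a filter matched to the interrogation signal alone loses SNR.
    # All waveforms and parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    s = rng.standard_normal(n)                       # transmitted interrogation signal
    h_tag = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)  # tag impulse response
    x = np.convolve(s, h_tag)                        # noiseless tag return
    noise_var = 1.0                                  # white-noise power

    def output_snr(template, filt, noise_var):
        """Peak output SNR when `filt` is applied to `template` plus white noise."""
        peak = np.max(np.abs(np.convolve(template, filt)))
        return peak ** 2 / (noise_var * np.sum(filt ** 2))

    matched = x[::-1]                                # matched to the full tag return
    signal_only = s[::-1]                            # matched to the interrogation signal only
    print("matched filter SNR    :", output_snr(x, matched, noise_var))
    print("signal-only filter SNR:", output_snr(x, signal_only, noise_var))
    ```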

  15. Influence of model assumptions about HIV disease progression after initiating or stopping treatment on estimates of infections and deaths averted by scaling up antiretroviral therapy

    PubMed Central

    Sucharitakul, Kanes; Boily, Marie-Claude; Dimitrov, Dobromir

    2018-01-01

    Background: Many mathematical models have investigated the population-level impact of expanding antiretroviral therapy (ART), using different assumptions about HIV disease progression on ART and among ART dropouts. We evaluated the influence of these assumptions on model projections of the number of infections and deaths prevented by expanded ART. Methods: A new dynamic model of HIV transmission among men who have sex with men (MSM) was developed, which incorporated each of four alternative assumptions about disease progression used in previous models: (A) ART slows disease progression; (B) ART halts disease progression; (C) ART reverses disease progression by increasing CD4 count; (D) ART reverses disease progression, but disease progresses rapidly once treatment is stopped. The model was independently calibrated to HIV prevalence and ART coverage data from the United States under each progression assumption in turn. New HIV infections and HIV-related deaths averted over 10 years were compared for fixed ART coverage increases. Results: Little absolute difference (<7 percentage points (pp)) in HIV infections averted over 10 years was seen between progression assumptions for the same increases in ART coverage (varied between 33% and 90%) if ART dropouts reinitiated ART at the same rate as ART-naïve MSM. Larger differences in the predicted fraction of HIV-related deaths averted were observed (up to 15pp). However, if ART dropouts could only reinitiate ART at CD4<200 cells/μl, assumption C predicted substantially larger fractions of HIV infections and deaths averted than the other assumptions (up to 20pp and 37pp larger, respectively). Conclusion: Different assumptions about disease progression on ART and after ART interruption did not affect the fraction of HIV infections averted with expanded ART, unless ART dropouts only reinitiated ART at low CD4 counts. Different disease progression assumptions had a larger influence on the fraction of HIV-related deaths averted with expanded ART. PMID:29554136
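    To show where such a progression assumption enters a transmission model, the toy compartmental sketch below reduces "disease progression on ART" to a single on-ART mortality rate and compares 10-year cumulative deaths under two settings of that rate. The compartment structure, rates, initial conditions, and the 90% reduction in infectiousness on ART are hypothetical simplifications, far coarser than the published MSM model.

    ```python
    # Hedged sketch: a toy S/I/A (susceptible / infected untreated / on ART)
    # transmission model in which the "disease progression on ART" assumption is
    # reduced to the on-ART death rate mu_on_art.  All parameters are hypothetical.
    import numpy as np
    from scipy.integrate import solve_ivp

    def hiv_model(t, y, beta, tau, mu_off_art, mu_on_art):
        S, I, A = y
        N = S + I + A
        new_inf = beta * S * (I + 0.1 * A) / N     # assume ART cuts infectiousness by 90%
        dS = -new_inf
        dI = new_inf - tau * I - mu_off_art * I    # tau: ART initiation rate
        dA = tau * I - mu_on_art * A               # mu_on_art encodes the progression assumption
        return [dS, dI, dA]

    def cumulative_deaths(mu_on_art, years=10):
        y0 = [9000.0, 900.0, 100.0]                # hypothetical initial population
        sol = solve_ivp(hiv_model, (0, years), y0,
                        args=(0.3, 0.5, 0.08, mu_on_art), max_step=0.1)
        return sum(y0) - sol.y[:, -1].sum()        # no births, so deaths = lost population

    for label, mu in [("ART halts progression", 0.01),
                      ("ART slows progression", 0.04)]:
        print(label, f"10-year deaths ~ {cumulative_deaths(mu):.0f}")
    ```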

  16. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    PubMed

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
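    The sensitivity-parameter idea generalizes beyond the multilevel setting; the hedged sketch below uses simple within-arm mean imputation (in place of proper multilevel multiple imputation) to show how multiplying imputed values by k over a grid of values, and re-estimating the treatment effect each time, reveals when the inference would change. All data, names, and the choice of k grid are illustrative assumptions.

    ```python
    # Hedged sketch: a k-multiplier sensitivity analysis for missing continuous
    # outcomes.  Imputed values are multiplied by a sensitivity parameter k and
    # the treatment effect is re-estimated for each k.  Single-level mean
    # imputation stands in for the paper's multilevel multiple imputation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 200
    arm = rng.integers(0, 2, n)                     # 0 = control, 1 = treatment
    y = 1.0 + 0.4 * arm + rng.normal(0, 1, n)       # true treatment effect = 0.4
    missing = rng.random(n) < 0.3                   # ~30% dropout
    y_obs = np.where(missing, np.nan, y)

    for k in (1.0, 0.8, 0.6, 0.4):
        y_imp = y_obs.copy()
        for a in (0, 1):
            idx = missing & (arm == a)
            y_imp[idx] = k * np.nanmean(y_obs[arm == a])   # k shifts the imputations
        effect = y_imp[arm == 1].mean() - y_imp[arm == 0].mean()
        t_stat, p = stats.ttest_ind(y_imp[arm == 1], y_imp[arm == 0])
        print(f"k = {k:.1f}  estimated effect = {effect:.3f}  p = {p:.3f}")
    ```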

  17. Can we predict ectotherm responses to climate change using thermal performance curves and body temperatures?

    PubMed

    Sinclair, Brent J; Marshall, Katie E; Sewell, Mary A; Levesque, Danielle L; Willett, Christopher S; Slotsbo, Stine; Dong, Yunwei; Harley, Christopher D G; Marshall, David J; Helmuth, Brian S; Huey, Raymond B

    2016-11-01

    Thermal performance curves (TPCs), which quantify how an ectotherm's body temperature (Tb) affects its performance or fitness, are often used in an attempt to predict organismal responses to climate change. Here, we examine the key - but often biologically unreasonable - assumptions underlying this approach; for example, that physiology and thermal regimes are invariant over ontogeny, space and time, and also that TPCs are independent of previously experienced Tb. We show how a critical consideration of these assumptions can lead to biologically useful hypotheses and experimental designs. For example, rather than assuming that TPCs are fixed during ontogeny, one can measure TPCs for each major life stage and incorporate these into stage-specific ecological models to reveal the life stage most likely to be vulnerable to climate change. Our overall goal is to explicitly examine the assumptions underlying the integration of TPCs with Tb, to develop a framework within which empiricists can place their work within these limitations, and to facilitate the application of thermal physiology to understanding the biological implications of climate change. © 2016 John Wiley & Sons Ltd/CNRS.
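    As a hedged illustration of coupling stage-specific TPCs with a body-temperature distribution, the sketch below evaluates two hypothetical life stages under current and warmed Tb distributions. The asymmetric curve shape (Gaussian rise, quadratic decline to CTmax) and every parameter value are assumptions chosen for the example, not values from the review.

    ```python
    # Hedged sketch: evaluate hypothetical stage-specific thermal performance
    # curves (TPCs) over a distribution of body temperatures (Tb) and compare
    # mean performance under current vs. warmed conditions.  Curve shape and
    # all parameters are illustrative assumptions.
    import numpy as np

    def tpc(tb, t_opt, ct_max, sigma):
        """Asymmetric TPC: Gaussian rise below t_opt, quadratic decline to CTmax."""
        perf = np.where(tb <= t_opt,
                        np.exp(-((tb - t_opt) / (2.0 * sigma)) ** 2),
                        1.0 - ((tb - t_opt) / (t_opt - ct_max)) ** 2)
        return np.clip(perf, 0.0, None)

    rng = np.random.default_rng(2)
    stages = {"larva": dict(t_opt=24.0, ct_max=33.0, sigma=4.0),
              "adult": dict(t_opt=28.0, ct_max=38.0, sigma=5.0)}

    for scenario, shift in (("current", 0.0), ("+3 C warming", 3.0)):
        tb = rng.normal(25.0 + shift, 3.0, 10_000)   # hypothetical Tb distribution
        means = {stage: round(tpc(tb, **pars).mean(), 3) for stage, pars in stages.items()}
        print(scenario, means)
    ```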

  18. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as the optimization criterion rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
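    To make the integrated mean square prediction error criterion concrete, the hedged sketch below greedily grows a subset that minimizes the average Gaussian-process (kriging) prediction variance over a candidate pool, a common discretized approximation of that criterion. The RBF kernel, its length scale, the random descriptors, and all function names are assumptions for this illustration, not the paper's formulation.

    ```python
    # Hedged sketch: greedy subset selection that minimizes the average kriging
    # (Gaussian-process) prediction variance over a candidate pool, a discretized
    # stand-in for the integrated mean square prediction error criterion.
    # Kernel choice, length scale, and descriptors are illustrative assumptions.
    import numpy as np

    def rbf_kernel(X, Y, length_scale=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)

    def mean_prediction_variance(X_pool, idx, noise=1e-6):
        """Average GP posterior variance over the pool, given selected points."""
        Xs = X_pool[list(idx)]
        K = rbf_kernel(Xs, Xs) + noise * np.eye(len(idx))
        k = rbf_kernel(X_pool, Xs)
        var = 1.0 - np.einsum("ij,jk,ik->i", k, np.linalg.inv(K), k)
        return var.mean()

    def greedy_imse_subset(X_pool, n_select):
        selected = [0]                                 # seed with an arbitrary compound
        while len(selected) < n_select:
            scores = [(mean_prediction_variance(X_pool, selected + [j]), j)
                      for j in range(len(X_pool)) if j not in selected]
            selected.append(min(scores)[1])
        return selected

    rng = np.random.default_rng(3)
    descriptors = rng.normal(size=(300, 5))            # hypothetical chemical descriptors
    print(greedy_imse_subset(descriptors, 10))
    ```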

  19. In Vitro, In Vivo and Post Explantation Testing of Glucose-Detecting Biosensors: Current Methods and Recommendations

    PubMed Central

    Koschwanez, Heidi E.; Reichert, W. Monty

    2007-01-01

    To date, there have been a number of cases where glucose sensors have performed well over long periods of implantation; however, it remains difficult to predict whether a given sensor will perform reliably, will exhibit gradual degradation of performance, or will fail outright soon after implantation. Typically, the literature emphasizes the sensors that performed well, while only briefly (if at all) mentioning the failed devices. This leaves open the question of whether current sensor designs are adequate for the hostile in vivo environment, and whether these sensors have been assessed by the proper regimen of testing protocols. This paper reviews the current in vitro and in vivo testing procedures used to evaluate the functionality and biocompatibility of implantable glucose sensors. An overview of the standards and regulatory bodies that govern biomaterials and end-product device testing precedes a discussion of up-to-date invasive and non-invasive technologies for diabetes management. An analysis of current in vitro, in vivo, and post explantation testing is then presented. Given the underlying assumption that the success of the sensor in vivo foreshadows the long-term reliability of the sensor in the human body, the relative merits of these testing methods are evaluated with respect to how representative they are of human models. PMID:17524479

  20. Calculation of effective transport properties of partially saturated gas diffusion layers

    NASA Astrophysics Data System (ADS)

    Bednarek, Tomasz; Tsotridis, Georgios

    2017-02-01

    A large number of currently available Computational Fluid Dynamics numerical models of Polymer Electrolyte Membrane Fuel Cells (PEMFC) are based on the assumption that porous structures are mainly considered as thin and homogenous layers, hence the mass transport equations in structures such as Gas Diffusion Layers (GDL) are usually modelled according to the Darcy assumptions. Application of homogenous models implies that the effects of porous structures are taken into consideration via the effective transport properties of porosity, tortuosity, permeability (or flow resistance), diffusivity, electric and thermal conductivity. Therefore, reliable values of those effective properties of GDL play a significant role for PEMFC modelling when employing Computational Fluid Dynamics, since these parameters are required as input values for performing the numerical calculations. The objective of the current study is to calculate the effective transport properties of GDL, namely gas permeability, diffusivity and thermal conductivity, as a function of liquid water saturation by using the Lattice-Boltzmann approach. The study proposes a method of uniform water impregnation of the GDL based on the "Fine-Mist" assumption by taking into account the surface tension of water droplets and the actual shape of GDL pores.
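    As a hedged illustration of the "Fine-Mist"-style impregnation step, the sketch below flags randomly chosen pore voxels of a synthetic GDL as liquid water until a target saturation is reached, then approximates the drop in effective diffusivity with a Bruggeman-type correction in place of the study's Lattice-Boltzmann transport calculation. The voxel geometry, porosity, and correction exponent are all assumptions for this example.

    ```python
    # Hedged sketch: uniform "fine-mist"-style impregnation of a voxelized GDL.
    # Pore voxels are flagged as liquid water at random until a target saturation
    # is reached, then a Bruggeman-type correlation estimates the effective
    # diffusivity.  The actual study resolves transport with a Lattice-Boltzmann
    # solver; all geometry and parameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    solid = rng.random((64, 64, 64)) < 0.22          # ~78% porosity fibrous structure
    pore = ~solid

    def impregnate(pore, saturation, rng):
        """Mark a random fraction `saturation` of pore voxels as liquid water."""
        water = np.zeros_like(pore)
        pore_idx = np.flatnonzero(pore)
        n_water = int(saturation * pore_idx.size)
        water.flat[rng.choice(pore_idx, n_water, replace=False)] = True
        return water

    for s in (0.0, 0.2, 0.4, 0.6):
        water = impregnate(pore, s, rng)
        eps_gas = (pore & ~water).mean()             # gas-filled porosity
        d_eff = eps_gas ** 1.5                       # Bruggeman correction, D_eff / D_bulk
        print(f"saturation={s:.1f}  gas porosity={eps_gas:.3f}  D_eff/D_bulk={d_eff:.3f}")
    ```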
